JavaScript is Assembly Language for the Web: Part 2 - Madness or just Insanity?
UPDATE: Check out the Podcast I did with Erik Meijer on Hanselminutes this week on this very topic: JavaScript is Assembly Language for the Web: Semantic Markup is Dead! Clean vs. Machine-coded HTML.
Some folks think that saying "JavaScript is Assembly Language for the Web" is a totally insane statement. So, I asked a few JavaScript gurus like Brendan Eich (the inventor of JavaScript) and Douglas Crockford (inventor of JSON) and Mike Shaver (Technical VP at Mozilla). Here is our private email thread, with their permission.
Mike Shaver:
I've heard the comparison before, and I think it's mostly right. It ignores the amount of effort put into JS's developer ergonomics, though, since assembly is not designed to have a humane syntax (especially modern assembly).
I said "JS is the x86 of the web" a couple of years ago [likely at JSConf], but I can't claim it's original. [Nick Thompson said it on Hacker News this year as well.]
The point is JS is about as low as we can go. But it also has higher-level facilities.
Shaver's right: assembly without a great macro processor is not good for programmers or safety. JS is. So the analogy needs some qualification or it becomes silly.
The mix of high-level functional programming and memory safety with low-level facilities such as typed arrays (and the forthcoming ES.next extension of typed arrays to binary data) makes for a more powerful programming language than assembly, and of course memory safety is the first differentiator.
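(An aside from me, not part of the email thread: the typed arrays Brendan mentions give JavaScript flat, fixed-width numeric storage alongside its usual high-level objects. A tiny sketch using the standard typed array APIs:)

```javascript
// 16 raw bytes of memory, with two different "low-level" views over it.
var buffer = new ArrayBuffer(16);

var floats = new Float32Array(buffer);  // view the bytes as four 32-bit floats
floats[0] = 3.14;

var bytes = new Uint8Array(buffer);     // view the same bytes as unsigned ints
console.log(bytes[0], bytes[1], bytes[2], bytes[3]);  // the float's raw bytes
```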
Douglas Crockford:
I think it is a little closer to the mark to say that JavaScript is the VM of the web. We had always thought that Java's JVM would be the VM of the web, but it turns out that it's JavaScript.
JavaScript's parser does a more efficient job of providing code security than the JVM's bytecode verifier. JavaScript did a better job of keeping the write once, run everywhere promise, perhaps because it works at a higher level, avoiding low level edge cases. And then Turing takes care of the rest of it.
There are certainly a lot of people who refuse to consider the possibility that JavaScript got anything right. I used to be one of those guys. But now I continue to be amazed by the brilliance that is in there.
Brendan Eich, again:
Doug's point about source beating bytecode is good. My friend Prof. Michael Franz of UC Irvine long ago showed O(n^4) complexity (runaway compute cycles, denial of service) in the Java verifier. JS is strictly more portable and fast enough to lex/parse as minified source.
Source as "bytecode" also avoids the big stupid Java bytecode mistake: freezing a poorly designed lowered form of Java, then being unable to evolve the high-form source, i.e., the Java programming language for fear of breaking Java bytecode compatibility. This severely messed up the design of inner classes and then generics in Java -- and then Sun broke bytecode compat anyway!
From a Hacker News thread a while back, Nick Thompson said:
My admittedly biased view: I spent two years of my life trying to make the JVM communicate gracefully with Javascript - there were plenty of us at Netscape who thought that bytecode was a better foundation for mobile code. But Sun made it very difficult, building their complete bloated software stack from scratch. They didn't want Java to cooperate with anything else, let alone make it embeddable into another piece of software. They wrote their string handling code in an interpreted language rather than taint themselves with C! As far as I can tell, Sun viewed Netscape - Java's only significant customer at the time - as a mere vector for their Windows replacement fantasies. Anybody who actually tried to use Java would just have to suffer.
Meanwhile Brendan was doing the work of ten engineers and three customer support people, and paying attention to things that mattered to web authors, like mixing JS code into HTML, instant loading, integration with the rest of the browser, and working with other browser vendors to make JS an open standard.
So now JS is the x86 assembler of the web - not as pretty as it might be, but it gets the job done (GWT is the most hilarious case in point). It would be a classic case of worse is better except that Java only looked better from the bottom up. Meanwhile JS turned out to be pretty awesome. Good luck trying to displace it.
The point is, of course, that no analogy is perfect. Of course JavaScript as a language doesn't look or act like ASM. But as an analogy, it holds up.
- JavaScript is ubiquitous.
- It's fast and getting faster.
- JavaScript is as low-level as a web programming language can go.
- You can craft it by hand, or you can target it by compiling from another language (a small sketch of that follows this list).
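As a rough illustration of that last point (mine, and approximate), here is a one-line CoffeeScript function and roughly the JavaScript a compiler like CoffeeScript's emits for it:

```javascript
// CoffeeScript source:
//   square = (x) -> x * x
//
// Roughly what the CoffeeScript compiler produces:
var square;

square = function (x) {
  return x * x;
};

console.log(square(4)); // 16
```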
This topic comes up on Hacker News often.
- "The JavaScript we've got now is the assembly language of the client-side. We can't easily change it, but we have to start building better tools on top of it." - jonnycat
Have at it. I enjoy our thoughtful, measured and reasoned discussions, Dear Reader. You guys rock.
As for the last post, I think it was poorly written and that's my fault.
Indeed mapping a language with novel semantics to JS requires more elaborate compilation and runtime support, and that can be a limiting factor. An example would be Continuation Passing Style conversion of functions using yield in a version of JS with generators (https://developer.mozilla.org/en/new_in_javascript_1.7, to be included in JS's next standard: http://wiki.ecmascript.org/doku.php?id=harmony:generators). Such a compiler has to turn generator functions into spaghetti coded state machines.
This is doable but it takes more work and it may cost more on the target JS VM. So JavaScript language evolution continues, in order to fill semantic gaps. And the smart choice of CoffeeScript to be "just syntax" means it will continue to work well, and it can be extended to map new JS semantics with Coffee-esque syntax, as appropriate.
/be
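To make Brendan's point concrete, here is a hand-rolled sketch (mine, not his) of what such a compiler has to do: a two-yield generator lowered into a switch-based state machine that plain JS can run. (The {value, done} shape is just one possible iteration protocol.)

```javascript
// Generator source, in the JS 1.7 / Harmony style Brendan refers to:
//
//   function greetings() {
//     yield "hello";
//     yield "world";
//   }
//
// Lowered by hand into the kind of state machine a JS-targeting
// compiler has to emit:
function greetings() {
  var state = 0;
  return {
    next: function () {
      switch (state) {
        case 0: state = 1; return { value: "hello", done: false };
        case 1: state = 2; return { value: "world", done: false };
        default: return { value: undefined, done: true };
      }
    }
  };
}

var g = greetings();
console.log(g.next().value); // "hello"
console.log(g.next().value); // "world"
console.log(g.next().done);  // true
```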
JS will remain hand-coded as well as generated for the foreseeable future. That's part of what helped it win too: people could view source (better done on github.com now, but still usable with http://jsbeautifier.org/).
/be
JavaScript can definitely be generated by compilers, but since size and speed still matter, humans can generally do a much better job. After all, writing JavaScript is not that much more difficult than writing C#. It is not as if we need to produce binary code.
It seems like when you have an application as large and complex as Facebook or Google+, the problems with the JavaScript language become bigger issues. Using a higher-level, classical object-oriented language with the benefits of type safety is probably worth the trade-off in increased complexity.
On the other hand, for a medium-sized site that can be maintained by a handful of devs, it probably is not.
Believe it or not, classical OOP languages -- especially ones with not-very-expressive static type systems -- are not the only way to build large apps. It can be done in JS, but it's harder than it ought to be. This too is driving JS language evolution in Ecma TC39, where I spend a lot of my time (es-discuss at mozilla.org is the unofficial discussion list).
/be
There is still a lot missing, in the language or the browser runtime.
It is a cross-platform, or rather cross-browser, language for building websites and applications, but that's as far as the analogy goes.
Yes, you can build a word processor in JavaScript, but at the end of the day the products are not in the same class as native ones.
JavaScript and web apps enable running instantly without installing, and that's a very big plus, but is it enough? We're at a crossroads here, just as we were at the beginning of the '90s.
Marcel
I wish the ECMA specs would require that all built-in object/property labels be overwritable and exposed by for-in loops. That would make it a lot easier to kick the tires of new browsers and correct issues directly when certain vendors settle for "close enough" and leave their version of the W3C DOM API spec stalled for 10 years.
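A generic sketch of why that writability matters (mine, not aimed at any specific vendor bug): when a browser ships a missing or broken built-in, you can patch it in place and move on.

```javascript
// If a browser lacks Array.prototype.forEach, bolt a simple version on.
if (!Array.prototype.forEach) {
  Array.prototype.forEach = function (fn, thisArg) {
    for (var i = 0; i < this.length; i++) {
      if (i in this) {          // skip holes in sparse arrays
        fn.call(thisArg, this[i], i, this);
      }
    }
  };
}

["a", "b", "c"].forEach(function (item, index) {
  console.log(index, item);
});
```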
In spite of this, JavaScript is extremely malleable, and it's that desire for freedom in my expanding repertoire that makes me want to know what's at the core of it all. The more I learn, the more I dislike languages that force a given paradigm on you. Static typing? No problem. Everything must be class-based? Uh... As an outside observer of the phenomenon who has occasionally been forced to work around some exceedingly inflexible Java, I can't help but notice that when you insist that everything is OOP, what tends to happen is that nothing is, and you end up with a whole lot of ungainly procedural code wearing class tutus. I'm not averse to class-based approaches when they work well, and when dealing with data-intensive scenarios on the front end they can certainly come in handy.
When the easily emulated classical OOP paradigm is less ideal, however, I am deeply grateful for the ability to throw object literals and functions with baked-in declaration contexts around like they're candy, with a node-based markup language and the API for plugging into it being all the "real-world" structure needed to bring it all together in a sensible fashion.
jQuery, and to a lesser extent other JS frameworks, are responsible for this. jQuery isn't a different language; it's a framework. It's to JavaScript what .NET is to C#, what Rails is to Ruby. It *inspired* a generation of web developers to start caring about JavaScript. Even using jQuery at the most basic level makes you learn fundamental JavaScript concepts. Using a callback? You just learned about anonymous functions and the concept of functions as first-class objects. Using "each"? You just learned about "this" context. These are among the most difficult things for programmers of other languages to understand when learning JavaScript.
While it is possible to get some use out of jQuery without understanding the details of JavaScript syntax, it's not a different language, and it's a starting point for many people who get a taste of what they can do and decide they want to learn more. It's not watering JavaScript down at all; it's exposing it to millions of people who would otherwise never have been exposed to it.
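A small example of the "gateway" code being described (mine; it assumes a <ul id="items"> with <li> children on the page): the click handler is an anonymous function passed around as a value, and inside each() jQuery rebinds this to the current element.

```javascript
// Callback = a function treated as a first-class value.
$('#items').click(function () {
  alert('List clicked!');
});

// Inside each(), `this` is the current DOM element.
$('#items li').each(function (index) {
  console.log(index, $(this).text());
});
```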
I believe we are at a turning point in web development. Using jQuery/JavaScript, AJAX, CSS3, and HTML5, we can finally move functionality that belongs in the client to the client. I still see myself writing server-side code, but not to compensate for my lack of client-side skills.
Finally, I agree with the previous posting: jQuery has given me the opportunity to appreciate JavaScript.
I did this mock up of Space Invaders around five years ago (and then quickly found it wasn't remotely original, lots of people had already done complete "coin-op conversions" in JS).
The thing that really gets me is the actual working JS emulation of enough x86 instructions to boot up Linux in the browser. Still half suspecting it to be a hoax!
Back to "the assembler of the web" - assembler is notoriously non-portable, whereas JS is surprisingly portable despite what people say.
The real analogy is with C, sometimes called the "portable assembler language".
It has long served as a low-level lingua franca that can be targeted by compilers. Stroustrup's original C++ implementation produced C as its output, as did Eiffel. So JS is the "C of the web".
I'd say that's an easily defensible statement, but saying that anyone who doesn't like ViewState is ignorant was pretty far beyond the pale.
I like the idea of JavaScript as an assembly language. If XAML plus C# could be compiled to HTML5/JS, that would be perfect.
- Nowadays you don't code in assembly unless you need to perform expert optimisation, or use special instructions for tasks like video decompression (or you're just a masochist). C and JavaScript work equally well as both a target language and one for actual development.
- JavaScript wasn't intended as the zenith of programming language evolution, or the one developer platform to rule them all. Like C, it was a product of its era, and the requirements and constraints that produced it still show in its design. Many people didn't, and still don't, like it, and there was an assumption that it would soon be replaced with a better, more perfect solution. Many people tried to create this replacement, and some of the things they developed became popular and useful, but they never achieved quite the same ubiquity, and it turned out that these supposedly perfect replacements had problems of their own. Meanwhile, C and JavaScript stuck around, even as many of their supposed successors fell away, and people discovered a sort of beauty in their imperfection.
- Both C and JavaScript are highly cross-platform yet also platform-specific. You can find a C compiler for almost any platform, although you'll often be using a runtime that exposes platform-specific capabilities and extensions. In the same way, JavaScript is found on almost every computing device manufactured today, sitting within a browser environment that exposes its own specific capabilities and extensions.
To me, the only thing that makes this analogy work is that you can't get any closer to the metal of the browser than with JavaScript.
http://www.ustream.tv/channel/clojurenyc
I say JavaScript's lack of native, static type safety is a downside for performance. Clearly there are code-quality wins to be had with Google's Closure Compiler, which lets you optionally declare type annotations and can recompile your JavaScript into better-performing JavaScript, but this is certainly no substitute for a true static-typing system built into the language.
Significant performance benefits can be had simply from the static assertion that a variable can only be of a certain type and will not change over its lifetime. JavaScript engines have to be coded very carefully to deal with dynamic type changes correctly and efficiently, which is the fundamental barrier to improving JavaScript execution performance beyond the current state of the art. I'm sure other execution engines will come along trying out new ideas, improving performance bit by bit here or there, but they simply will not be able to cross this barrier.
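For readers who haven't seen them, Closure Compiler's type annotations live in JSDoc comments; the compiler can check and exploit them at build time, while the running engine still sees ordinary dynamic JavaScript. A minimal sketch:

```javascript
/**
 * @param {number} width
 * @param {number} height
 * @return {number}
 */
function area(width, height) {
  return width * height;
}

area(10, 20);    // fine
area('10', 20);  // Closure Compiler can warn about this at build time;
                 // a plain JS engine just coerces and carries on
```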
ViewState in ASP.NET is an example of such an abstraction; jQuery is another. Just because jQuery excels at DOM manipulation doesn't mean you should build your entire application in it, but you certainly shouldn't build one without it.
More often than not, I see ASP.NET developers who lack a proper understanding of how ViewState works and, to their chagrin, find their applications bloated and slow in production albeit fast on their local machines.
Does that mean ViewState is a bad invention? Or is it merely something that must be mastered to produce a well-crafted application?
@Brendan: regarding CPS, the problem can be solved with callback patterns rather than state machines. I wrote a detailed post about it. It works well with preprocessors and could also be integrated into compilers but I don't know how efficient the result would be, compared to state machines.
Bruno
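I haven't seen Bruno's post, but here is a minimal sketch of the general idea (the names are made up for illustration): instead of lowering yield into a switch-based state machine, each suspension point becomes a callback, and "the rest of the function" is passed along explicitly.

```javascript
// Generator-style intent:
//   function range(n) { for (var i = 0; i < n; i++) yield i; }
//
// One callback-based rendering: each "yield" becomes a call to `emit`,
// and the consumer resumes us by invoking the continuation it is given.
function range(n, emit, done) {
  (function loop(i) {
    if (i >= n) return done();
    emit(i, function () { loop(i + 1); });  // "yield i", then resume here
  })(0);
}

range(3,
  function (value, resume) { console.log(value); resume(); },
  function () { console.log('done'); });
// logs 0, 1, 2, done
```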
Even though static type information is not explicitly provided in the code, modern JavaScript engines can infer type information and use it to optimise code, falling back to slower paths only when required.
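A hedged illustration of what that fallback looks like from the outside (engine behaviour varies; this is illustrative, not a spec):

```javascript
function add(a, b) {
  return a + b;
}

add(1, 2);        // the engine can infer numbers in, number out, and
add(3, 4);        // keep running a specialised, fast version of add
add('3', '4');    // a new type combination shows up, so the engine falls
                  // back to a slower, generic path for this call
```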
I would also like to mention the acronym JSNI: JavaScript Native Interface. This is a GWT term for writing a native Java method that uses JavaScript, instead of C or assembly, to perform low-level, browser-dependent functions. You are very right that the JVM is not a good engine to become the de facto web standard, and that is why it failed... but the Java language itself is a perfect example of how to bind strongly-typed OOP concepts to a weakly typed base language like JS.
I strongly disagree with the quote from Nick Thompson that denounces Java because "They wrote their string handling code in an interpreted language rather than taint themselves with C!" The very fact that they kept the core libraries wholly virtual is why GWT can use Java to smooth out browser inconsistencies in JS and do amazing things, like porting Quake II to run in a browser: http://code.google.com/p/quake2-gwt-port/. Making a virtual language dependent on C at its core would just make it another C lib, and not a virtual language at all.
Would you be more impressed if you did a View Source and found that it was not only pretty on the outside but also inside?
Actually, it depends on your definition of "nice". If you mean semantic markup, or HTML that is actually valid, then I agree.
If you mean "nice" in terms of readable, well-formatted code at the cost of user experience because of document size, then no.
Not so long ago I would have agreed with both, but nowadays developer tools like those in Chrome take care of beautifying "ugly" HTML and JS automagically.
Erik's Volta project (compiling MSIL to JavaScript) went public in December 2007. http://lambda-the-ultimate.org/node/2563