Last week (June 17th, 2015), Brendan Eich announced a new project to bring new low-level primitives to the web — a move that will make it easier to compile projects written in languages like C & C++ to run in browsers and other JavaScript environments. If this is your first time hearing about it, read “What is WebAssembly” for a basic overview.

The WebAssembly team includes people from Google, Microsoft, Mozilla, Apple, and others under the banner of the W3C WebAssembly Community Group.

The announcement left the web development community speculating about how WebAssembly might impact the future of JavaScript. Brendan Eich fills in the gaps in an interview with Eric Elliott.

TL;DR: No Chicken Little, the sky is not falling.

All emphasis added.

EE: Recently, you announced on your blog that we’re getting something called WebAssembly. Basically an assembly language for the web, a low-level compile target. Can you tell us what that’s about, and what motivated you to work on it?

BE: Sure. This is in some ways just a continuation of the process that started with ASM.js. ASM.js is a subset of JavaScript that has no objects and no garbage-collection or just-in-time compilation pauses. It’s a target for C/C++ — a statically typed subset of JavaScript. This was demonstrated at Mozilla in partnership with big gaming companies like Epic Games with Unreal Engine, and Unity with the Unity engine and IDE.
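[For readers unfamiliar with the subset, here is a minimal sketch of what asm.js-style code looks like. The `AsmAdder` module name is made up for illustration; the `"use asm"` pragma and the `|0` coercions, which tell the engine a value is a 32-bit integer, are the real asm.js conventions.]

```javascript
// Minimal asm.js-style module: "use asm" marks it for validation and
// ahead-of-time compilation, and the |0 annotations give the engine
// static integer type information. Engines without asm.js support
// simply run it as ordinary JavaScript.
function AsmAdder(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;          // parameter typed as int
    b = b | 0;          // parameter typed as int
    return (a + b) | 0; // result typed as int
  }
  return { add: add };
}

var adder = AsmAdder();
console.log(adder.add(2, 3)); // 5
```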

So that allowed a lot of games published with those engines — games out there in the market with Unreal Engine 3 and Unreal Engine 4, games built on Unity 5 — to target JavaScript as just another instruction set.

These games might be targeting PS4, Xbox, and PC. Now they could also target the web with WebGL and ASM.js. Web Audio and other APIs matter too: gamepad APIs, fullscreen APIs, all of it.

That was a big success story at Mozilla when I was there, and it actually crossed over to Microsoft. I remember talking a couple of years ago to Anders Hejlsberg and Steve Lucco: Anders, of course, who did C# and .NET, and Steve, who led the Chakra team (still does, I think). They were pretty enthusiastic about it, and so it came to pass that Edge supports ASM.js with ahead-of-time, whole-module optimization just like Mozilla does, with good performance.

To put not too fine a point on it, I think this was the last straw for Native Client, or really Portable Native Client (PNaCl), which was the only way Google was going to get their version of the compile-to-safe-native story going for C & C++ in browsers, and that was never even fully enabled in Chrome. It was whitelisted for certain Google properties; I think Google+ had an image editor.

As time went on, it became clear the full Native Client wasn’t going to cross over to other browsers. It was a long road to do everything they intended to do; they were pretty ambitious. Whereas ASM.js started out looking like: oh, it doesn’t have threads, it doesn’t have locks, it doesn’t have a lot of things needed in native code, it doesn’t have SIMD (single instruction, multiple data), the short-vector intrinsics for the vector units built into modern CPUs.

Sure enough, JavaScript didn’t have those things, but now it’s getting them. SIMD is in Firefox Nightly; it’s coming to V8, and Intel’s already done an implementation in Crosswalk, their version of V8; Microsoft has announced SIMD support coming to Chakra.

Assuming stasis on the web is not a good assumption. I think that was the mistake made long ago with projects like Portable Native Client, and Dart too. They assumed JavaScript was simply incompetent, that it could not get any better, and therefore they had to build something that amounted to a whole second system, or third system, to be added to browsers; and yet they couldn’t even get it fully into Chrome.

I’m not gloating here. Realism requires incrementalism. All browsers must move in smaller steps. This is really a place where Firefox and Chrome restored browser competition — Firefox first. Certainly there are alternative worlds; we saw one in the early 2000s, with IE reaching 95% market share and Microsoft not investing in the web, instead investing in .NET and Windows Presentation Foundation, all that stuff.

Anyway, not to go on about history: the continued evolution of ASM.js is wasm. The reason it’s important is that once ASM.js crossed over into other browsers, it became clear that it was gonna cross over into all the browsers.

Yet JavaScript, and even ASM.js, a subset of JavaScript, are missing SIMD, threads, shared memory and other primitives. Even with ASM.js, you’re still parsing JavaScript. It’s a subset, so you might run a dedicated parser for it, which is another maintenance chore, because now you’ve got two parsers: one for the full JavaScript language and one for the subset.

You’re still faced with a costly parsing problem compared to what could be done with a more efficient, compressed abstract syntax tree (AST), which is what WebAssembly is aiming at.

People have been asking for bytecode on the web, thinking that they want Java bytecode. What I think the researchers at MSR and other places have shown is that they don’t want that — that bit back hard in several ways. AST encoding is much better.

At first, WebAssembly starts out just like ASM.js, but with a compressed, binary syntax. But once all the browsers support both wasm and ASM.js, and after a decent interval of browser updates, wasm can start to grow extra semantics that need not be put into JavaScript.
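[For illustration, the binary framing described here can be seen with the WebAssembly JS API that later shipped: a wasm module starts with the magic bytes `\0asm` plus a version number, which engines decode directly instead of parsing JavaScript text.]

```javascript
// The smallest valid wasm binary: the "\0asm" magic number followed
// by the format version. It contains no sections, but it is a
// well-formed module the engine can decode and validate.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm" magic
  0x01, 0x00, 0x00, 0x00  // binary format version 1
]);

console.log(WebAssembly.validate(emptyModule)); // true
```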

They may in fact be put into both JavaScript and wasm, because it’s the same single engine (one VM), but there are certain things we might not want to ever put into JavaScript that could be put into wasm for the benefit of other languages like C++ or Haskell. There are lots of languages you might compile to wasm.

EE: Could you give us some examples of that?

BE: Sure, this is all written up on GitHub, but just arguing for shared-memory array buffers, to get multi-threaded games cross-compiled, was a stress on JavaScript, because for the longest time we didn’t want race conditions in JavaScript; it’s always a fragile process with bug hazards. That’s an example of something we might have just kept in WebAssembly if we had that choice.
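[For context, the shared-memory primitive under discussion eventually shipped in JavaScript as `SharedArrayBuffer`, with the `Atomics` object providing race-free access. A minimal sketch of the idea:]

```javascript
// A SharedArrayBuffer is a block of memory that can be visible to
// multiple workers at once; Atomics gives race-free reads and writes
// on a typed-array view over it. (This API shipped after this
// interview; shown here purely for illustration.)
const shared = new SharedArrayBuffer(4); // 4 bytes of shared memory
const view = new Int32Array(shared);     // typed view over the buffer

Atomics.store(view, 0, 42);              // race-free write
console.log(Atomics.load(view, 0));      // race-free read: 42
```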

Down the road there are things like that: zero-cost exceptions, for example, might not make sense in JavaScript. They require some compiler and runtime cleverness, but they do make sense for C++ and Swift.

Another example is call/cc (call-with-current-continuation). Call/cc is too powerful a tool: it poses implementation challenges for JavaScript engines, and security hazards.

You have these non-local functional gotos. You can call a continuation and be off in a different stack. So it’s not like the local, limited continuations we have in ES6 generators; it’s a deeper continuation. So call/cc could be put into wasm, into the engine that handles both wasm and JavaScript, down the road. It’s conceivable. It would go not into the JavaScript language but into the WebAssembly syntax, in a way that won’t affect the legacy JavaScript world.
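[The contrast can be sketched with an ES6 generator, which suspends and resumes only at its own `yield` points, always from the top of its own stack, rather than jumping to an arbitrary saved stack the way full call/cc can:]

```javascript
// An ES6 generator is a limited, local form of continuation: calling
// next() resumes the function exactly where it last yielded, but you
// cannot capture and re-enter an arbitrary point in someone else's
// stack, as call/cc would allow.
function* counter() {
  let n = 0;
  while (true) {
    n = n + 1;
    yield n; // suspend here; resume on the next next() call
  }
}

const it = counter();
console.log(it.next().value); // 1
console.log(it.next().value); // 2
```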

EE: Speaking of which, are there security implications to wasm that we need to think carefully about?

BE: Since it starts out co-expressive with ASM.js, it doesn’t add any new security issues right out of the gate. But as it starts to diverge a year or two down the road, then yes, we need to look at the security properties of all those extensions. That’s an added challenge, and something the Native Client folks at Google have been thinking about a lot. They’re helping, and their knowledge in this area is valuable.

We’re trying to chase undefined behavior out of C++ and out of the LLVM compiler that PNaCl (Portable Native Client) uses and that Emscripten also uses. From the hardware up to the C++ specification, there’s a lot of undefined behavior, and that’s not good for security.

JavaScript has been a lot more prescriptive and is trying to be safer than a lot of other languages. For WebAssembly we need to go more to the JavaScript side.

EE: There was an interesting headline in The Register: “Are you trying to KILL JavaScript?” Is that what this is all about?

BE: [laughs] No. I’m a pragmatist. I’m an old C/C++ hacker. This is all a big system that evolves. Humans trying things, but also facing real compatibility constraints on the web.

You don’t break the web, you don’t get to clean the slate and start over. Anybody who tries is going to fail. Even when you look at native apps on mobile devices, where there are fairly mature native languages and toolkits for user interface and graphics, there’s still a lot of web. There’s a lot of hybrid apps.

Facebook still uses web views. All the big apps — Amazon, Pinterest, and so on use web views. There’s a lot of web assets that would be completely insane to reinvent with native presentation layers instead of HTML. So the web is still pretty darn important. It’s also the highest monetization platform.

Smartphones have incredible penetration. There will be a smartphone for almost every adult on the planet within a few years, and that’s gonna be huge for many things, but that doesn’t really mean a smartphone’s a PC. It’s still more of a consumption device. It’s not like you’re sitting and typing and researching and shopping in depth.

So rather than kill JavaScript, which is not feasible, what I’m trying to do is respond to real engineering problems we’ve had with ASM.js. Loading a big game from Epic or Unity can take 20-30 seconds. That’s too long. With a compressed abstract-syntax-tree encoding that parses 20 times faster, it’s just a couple of seconds; that’s what you want. So there’s a real reason for wasm, and it is a valid reason.

[Read the Unity Blog post to learn how WebAssembly could dramatically improve the gaming experience.]

Wasm helps JS win; it is a win not only for native-code compilation. Eventually all the browsers and webviews will support wasm syntax to serve the compile-target master, freeing JavaScript to serve the JavaScript master.

JavaScript has been in that house-divided state, and that never works in the long run. Like I said, even the shared array buffer extension is a bit of a stretch for JavaScript, and having the ability, down the road, to let wasm do the exotic things C++ wants, without needing to figure out ways to put those less-safe facilities into JavaScript, is a great relief.

So, parsing performance, and not serving two masters, those are good reasons to do wasm. We’re not killing JavaScript. I don’t think it’s even possible to kill JavaScript.