So I’m a bit late for #Rust2018, but I want to explain exactly what my plans are for ggez over the next year or so and how I’m thinking of approaching them.

The State Of ggez

As far as ggez itself goes… as of version 0.4.x, it’s not done by any means, but it is 100% usable for basically everything I want it to be usable for, the API is quite nice to use, it’s capable of being reasonably fast, and there are no serious bugs that I am aware of. It is quite close to what I consider a 1.0, though a few outstanding things could still stand some significant improvement.

Overall though, I very much like ggez right where it is, and I think major enhancements are going to be the realm of a new library on top of it. However, that has not been my focus lately…

My focus has been on the Web.

email, email, checkin’ my email…

See, I grew up a child (well, teenager) of Newgrounds and Kongregate, which is the sort of experience that makes you both love and hate Flash in equal measure. Before Youtube, when you had 0.5 Mbps DSL at best and dial-up at worst, if you wanted to watch a video or play a game on the web then Flash was your only option. People created vast and vibrant media cultures out of it, which in many ways spurred the development of the current indie gamedev community… Super Meat Boy, Castle Crashers and many other early successful indie titles started out as Flash games. I shed no tears over the death of Flash, ’cause it was a horrible proprietary VM platform owned by a media company that really didn’t want to be developing a VM platform at all, and so did it very poorly. But on the other hand I don’t know if it’s even possible to watch Strongbad videos anymore.

I’ve watched the Web more and more become a platform for distributing applications (whether the content needs to be distributed as an application or not), until now it’s become incredibly powerful and all-pervasive. At the same time, I was kind of a programming language snob, and so refused to touch Javascript in any meaningful way. As JS grows and mutates it just gets more and more band-aids and never any fixes for any of the actual problems, most of which aren’t even hard problems to fix. (Don’t be a programming language snob, kids; use the right tool for the right job.) So I sort of grew to accept that the back-end is where I belong, and gave up on making front-end web anything.

Unity3D changed that in mid 2015, because they added the ability to compile to asm.js using emscripten. This promptly blew my damn mind, because you touched nothing but C# and a tiny HTML shim page from start to finish, and it worked. I was playing my Unity3D game in Firefox. I actually interviewed for a job to work on this compiler and got to see a tiny bit of it under the hood, and the concept is as bonkers as the implementation is slick: it compiled your game to .NET bytecode, parsed that and re-compiled the bytecode to C++, and then ran the emscripten C++ compiler on it to compile it to asm.js. It took like ten minutes to compile even a quite small game, but the fact that it worked at all was amazing, and it actually worked really well. So that shook my world a bit, and gave me hope: maybe we can someday have the Web without Javascript.

(I didn’t get the job, but it was still super cool.)

Now in 2018 the “web browser” is basically a media-centric application platform connected to a mostly-message-based RPC system. It has WebGL and audio API’s and so on, and you can go to a webpage and run Discord or Shadertoy and it works as if it’s no big deal. But that works only because there’s still this enormous morass of mostly hand-written Javascript code lurking somewhere with platform-dependent weirdnesses and untyped variables and other horrible nasty things deep inside it, hogging your memory, bogging your performance, and breaking with ReferenceError: Window is not defined whenever you look at it funny. (Okay, I’m still a bit of a language snob.) But now with Webassembly and JSON-RPC, the Web is rapidly becoming what the Java platform tried to be in the mid-90’s: an application platform based off of a secure virtual machine. Java failed at that goal ’cause it was created in the mid-90’s and nobody knew how to do what they wanted, but the Web has grown up to be the distributed operating system that people imagined. It’s way easier and less error-prone to compile things to Webassembly than to JS, it can run way faster, and it’s easier and easier to take something totally non Web-y like C# or Rust and run it in a browser.

Anyway, the point for this big digression is this: ggez was always supposed to work on the Web.

the futuuuuuuuure

And so since Jan 1st 2018, I’ve been working on figuring out how to make ggez work on the web. But, as I said… I never really learned much about the web frontend ecosystem, so I’ve had a lot of catching up to do. Following are summaries of the various concerns that ggez needs to address to work on the web.

Webassembly

So first I’m going to talk about the guts of WebAssembly and its ups and downs. To actually understand how all this works I built a simple Webassembly interpreter, which is still squarely in the “it doesn’t all work right” category but has enough done for me to understand more or less how it works.

Webassembly has a few specific design goals:

Safe. It cannot touch the host machine’s memory directly or make any function calls that the host implementation does not explicitly provide.

Statically verifiable. Before executing Webassembly code it is supposed to be run through a verifier step that sanity-checks a whole lot of things ahead of time, so you can tell quite a lot about a wasm module and whether it will be subject to certain classes of errors without actually having to execute it. This happens once when the module is loaded, so it also lets you elide a fair number of runtime checks.

Easily compiled. The people who designed this are the ones who write optimizing Javascript compilers for major web browsers, so they want to make their lives simple and have something that’s nice to compile into efficient code.

The Webassembly Binary Toolkit contains lots of handy tools, such as an assembler, a disassembler, a standalone interpreter and a test suite parser.

The test suite! Wasm comes with a compliance test suite. It’s not a theoretical body of work either; it’s an evolving, practical project that you can check out, submit pull requests to, tweak for your needs, etc. This is honestly fantastic for an implementer because if you can run your implementation on the test suite and it passes, you know you’re doing at least something right. I actually think this is a huge step forward in the process of defining standards that are meant to interoperate between different implementations; between a written specification and a test suite you can cover both the theory and practice of a standard, which gets you the best of both worlds. You have a single written definition of how a system should work, plus (lots of, thorough) demonstrations of known-correct behavior.

At first glance, webassembly doesn’t seem terribly novel. It’s a 32-bit stack machine with integer and floating point math operations, logic operations, all that good stuff. Integers in the binary format are stored as variable-length LEB128 encodings, which is a bit odd but whatever; it assumes little-endian memory layout and wrapping integer arithmetic, and has a few of the handier bit-twiddling operations such as “count leading zeroes” and “copy floating point sign” and such. The spec is actually quite nicely written, with both math and plain-English definitions of all operations, and accounts for a number of rare edge cases like “are floating point exceptions allowed?” and such. (They’re not.)

There’s a few odd points, which mostly are there to make the language easier to verify. These things also make the language easier to compile into safe and fast code, since if you can prove early on that these invariants are true, then you don’t have to insert code to check for them at runtime.

Webassembly is block structured! Weird for an assembly language, no? You have if/else/end blocks, loop/end blocks, and a block/end structure that lets you make arbitrary-ish labels. Jumps are all relative to the current “scope”, and you can only jump out of the scope. So, if you are in a loop/end block and you say “jump to label 0”, you jump to the beginning of the loop. If you have `block loop ... end end` and in the `...` you say “jump to label 0”, you jump to the beginning of the loop, and if you say “jump to label 1” you jump to the end of the block. This means a few things: you can NOT jump to arbitrary instructions, and you can NOT jump into the middle of a loop or such. Maybe this restricts how compilers can optimize things a bit, but it also makes it way easier to prove your program is actually well-formed.

It’s also strongly-typed! Well, more or less. Values on the stack are strongly typed and can be `i32`, `i64`, `f32` or `f64`, and functions are strongly typed and specify what their signature is. All instructions and functions have a single signature; there are no variadic functions or generic operations. That said, memory is just an array of `u8`’s, pointers into it are plain `i32`’s, and that’s all you get. So it’s still possible to screw up type safety, but it takes a bit of work, and at least for basic math the verifier can prove what’s going in and what’s coming out.

References to functions, global variables, etc. are all module-local. Even memory references! You can’t say “get me function 3” or “load address 0xF00F” to refer to another module’s values; those operations all refer to module-local values. You are getting function 3 of the local module, or loading address 0xF00F in the local module’s private memory space.

Modules may import values from other modules (functions, global variables, etc), if that module exports them. All imports are resolved in the order in which the modules are loaded though, and dangling imports are not allowed. This means you can’t have a circular dependency chain among modules.

It’s a stack-based language, but different code paths are not allowed to leave variable numbers of values on the stack. You can’t have an if/else/end block where the if branch leaves one value on the stack and the else branch leaves two.

A module may specify a “start” section which is run just after it is instantiated, which can either be the equivalent of a main function or can just run initialization code.

There is a “table” data type which can contain references to functions of any arity, which I think is there mostly to provide a nice way to interface with Javascript code (since Javascript functions are kinda-sorta-untyped) (correction: it’s also useful for dynamic dispatch).

So I really think about a wasm module less as a standalone program and more like a DLL. It is a set of functions, a set of imported values, and a set of exported values. When a wasm interpreter loads multiple modules, it really is acting like a dynamic linker and linking them together, and a lot of the wasm-manipulating tools like wasm-bindgen and such perform this same operation. FORTUNATELY, since all inter-module references are by names, all jumps are local, and so on, it’s way easier to do this in webassembly than in real assembly. No need to grovel through instructions and modify them to use the right addresses if you don’t want to, you just give each module a look-up table that says “this module’s function #3 is global function #149”. Of course, optimizing interpreters can still do the groveling if they want to and avoid the extra lookup, but at that point you might as well just make a real compiler and do this as part of your compilation anyway.

There are a number of things that webassembly currently does NOT define, as of April 2018. People are hard at work hashing these things out, but it might take a while for them to firm up.

64-bit addresses

SIMD and other enhanced math stuff like fused-multiply-add

Threading

Actual API’s

Garbage collection

Exceptions

So, all in all, webassembly is nice. I am particularly hopeful about the idea of using it as a portable language for desktop programs: you can distribute one binary, it gets ahead-of-time compiled to native code when the user installs it or runs it for the first time (similar to how Android apps work these days), and Things Just Work. Of course you would need a portability layer for OS API’s… but that’s always the case. That’s why we have things like Rust’s `std`, Python’s standard library, GTK, Qt, and on and on.

It’s not all roses though; there are some distinct problems with webassembly as it stands. In increasing order of severity (for me), webassembly is lacking:

Debugging symbols are currently undefined. Most people seem to be leaning towards using DWARF format debug info in a custom module section, but it’s still up in the air whether that will be the way things go. This isn’t actually terribly problematic yet, since webassembly code is fairly self-descriptive, but is pretty inconvenient.

64-bit address space. They decided to start with a 32-bit address space and figure out how to expand it to 64-bit later, which I think is the wrong way around. The justification is that you’re probably not going to want to make programs that use more than 4 GB of memory to start with, but I think that’s also wrong… Javascript and web browsers are such monsters that if you try to use more than 4 GB of memory it’s going to be basically unusable anyway, but the whole point of wasm is to make that less true. I also recently read a mailing list exchange where one of the original designers of the Alpha instruction set (I think???) was talking about the decisions that went into it, and one of the things they mentioned was that it is far easier to design a 64-bit computer and chop it down to 32 bits than to go the other way around. Sure, the webassembly team doesn’t have to design real hardware around these choices, but there are fairly easy ways to make an instruction set with 64-bit pointers that can more or less transparently use 32-bit pointers as well, if you set it up cleverly to support such things. Which they haven’t. I wish I could find the mailing list… was it linked from Raymond Chen’s series on Alpha AXP??? Anyway, wanna make a hardcore image or audio editor in wasm? What about a data-heavy server application? You might run out of headroom pretty quickly…

Feature detection. Wasm has no `CPUID` instruction, no version numbers attached to the instruction set itself, and no metadata in a module to describe what capabilities a host might provide that the module needs (a module can’t ask a host “do you support SIMD?”). There’s been a whole lot of discussion on the topic, and as far as I can tell the current consensus seems to be “if a module uses features a host can’t provide, it won’t validate, so you can use that to probe what features a host has”. Then, theoretically, there will be a library of semi-standard stub modules that probe for specific features, and you can have a program fetch the stubs, attempt to validate them, and thus detect which features exist. So, it’s a hack that requires an external program to do its work, it will be slow, and you can very easily construct scenarios where validation passes but the results are incorrect – SIMD extension A defines instruction 0xFE to be `ADD_F32x4`, SIMD extension B defines instruction 0xFE to be `SUB_F32x4`, and boom, you’re hosed. This really pisses me off because some sort of probing functionality is absolutely necessary for futureproofing, it doesn’t seem to be actually hard to do, and there are many examples of both fairly successful efforts (`CPUID` instructions, Vulkan) and unsuccessful ones and what they led to (Javascript, OpenGL). And it’s not hard. grrrrr…

API. As it stands, there is no standardized functionality for Webassembly to do anything apart from pure computation. It’s basically a CPU and RAM with no keyboard, no display, no network connection and no hard drive. If you want to, say, call an external function, then it’s up to the implementation to decide what functions it provides. You do not have `println!()`, you do not have `fopen()`, you do not have user input or HTML DOM access or Websockets or anything. Only two things exist: a Javascript API for loading and running Webassembly modules, and a Javascript API for passing data to and from these modules and calling functions via FFI.

To some extent this is all you need, ’cause you can use it to build all that other stuff, but it also means that actually doing this FFI is a shitload of work that has to get handled on the wasm library side, every time, and will get handled different ways by different modules. Contrast with having a single high-quality implementation that the host provides and is standardized for every host… that would sure be nice. It will happen eventually, I hope! In the mean time though, we have to make do with libraries like stdweb and tools like wasm-bindgen to make life easier for us Rustaceans. This also means that there is no way for webassembly code to generate and run more webassembly code: modules cannot be modified once loaded, and there’s no (standard) way to tell the wasm implementation “pull this data out of my memory space and make it into a new module”. No pure wasm quines or dynamic compilers yet. :-(

Ok, so that’s what we’re dealing with, in all its glory and shame. On to discussing ggez itself.

Graphics

The biggest and most complicated part of ggez is without doubt its graphics engine, and it’s not going to be much of a game framework if it can’t display something. So first things first, I dug into the current version of gfx-rs, the Rust graphics library that ggez uses, and tried to figure out how to make it run in a browser. gfx takes a sorta-Vulkan-inspired API and runs it on whatever graphics “backend” you choose for it, so you can take mostly the same code and run it on OpenGL, DirectX, Metal or Vulkan. This means that it can also, in principle, run on WebGL quite transparently, since WebGL is not very different from OpenGL, and even if it were then you just need to write a new backend for it. People have gotten gfx working on WebGL in the past, in a basic form, but it’s a somewhat manual and error-prone process, not “compile it and go”.

Over the last year or so, gfx has also been going through a big shift to what they call “the HAL branch”, for Hardware Abstraction Layer: instead of presenting a safe-but-Vulkan-ish graphics API that can use any real graphics API as a backend (Metal, OpenGL, Vulkan, etc), they are instead implementing something that is really very close to Vulkan itself, and making THAT work on any real graphics API. This leads to a simpler system, less overhead, and opens the possibility of making a compatibility shim so that gfx-rs can be used as an open-source portability layer for any Vulkan program to run on any platform – similar to the MoltenVK program. (This effort started long before MoltenVK was open sourced.) This is a really cool idea, and work is in progress on it; people are already talking about things like being able to run vulkano atop gfx-rs.

This is an awesome idea, but has a few drawbacks for ggez:

1. gfx HAL is not quite done yet
2. gfx HAL is essentially Vulkan, and so is rather lower-level than the existing API, which makes porting ggez to it a bigger job
3. I don’t know how to use it

Well, those aren’t insurmountable problems. For #1, just wait, or even better, help make it finished. #2 is intended to be solved by tools like gfx-render, which will implement a higher-level graphics engine on top of gfx-rs… It’s championed by one of the devs of the Amethyst game engine, but looks like a cool project so I kind of want to work on it anyway. And #3 is simple to fix: I went through some Vulkan tutorials and then wrote API docs for all of gfx-rs. I am far, far from a Vulkan pro yet, but I can probably write a simple 2D drawing API, and people are eagerly in the process of toolsmithing up some nice helpers.

But we’re still a little bit in limbo at the moment. It’s going to take a significant amount of work to help get gfx-hal stable, make gfx-render nicely usable, port ggez to it, and make a WebGL backend that works nicely. It sounds like fun work, but it’s not going to be quick. However, I am also reluctant to stick with the current pre-ll version of gfx-rs and make the WebGL backend for that work up to standards, since that backend is going to be obsolete soon anyway.

Windowing

A fairly fundamental part of a portable game is dealing with the windowing system: essentially, requesting a window from the operating system and handling input events and so on. This doesn’t involve any actual drawing, it just gives the graphics library something to draw on. In Rust, the main crate that does this for cross-platform programs is `winit`, plus `glutin`, which wraps `winit` and also handles OpenGL setup. `winit` works pretty well in the common cases, though there’s a number of edge cases where it doesn’t work… see this poignant post from one of the `winit` maintainers (they really need more help!). There are also crates for `sdl2` and `glfw` which do essentially the same task: get a window to draw on, get input, tell the game when the window closes or resizes or whatever. ggez uses `sdl2`, but for working in a web browser, we need… well, something that will work in a web browser.

This is where we go down the rabbit hole. Ready? Hang on…

Starting from the web browser’s view of the world, getting a window is pretty simple. If the page is loaded, you have a window already! You grab the element you’re drawing in, <canvas> or whatever, and you can query it for size, position, and so on. The web browser API already has event-based keyboard and mouse input, and probably gamepad input and such if you work a bit for it, so all of that stuff exists already.

What the web browser DOESN’T have is any kind of pre-emptive scheduling: if you run your game’s main loop in the traditional input-update-draw infinite loop, then it will hang your web browser forever and make life icky. This is annoying, but web browsers give us requestAnimationFrame() to register a callback and have it invoked once per frame. So you give it your game’s update() and draw() functions and tell it to call them 60 times a second or whatever, and it works. It takes a bit of Javascript glue code and you have to rewrite your mainloop to use a different sort of model, but it works okay in the end.

Okay, now this looks fairly straightforward, right? We don’t want any platform glue code in ggez itself though; that should be the job of abstraction libraries, so now let’s consider it from the perspective of making web bindings work with `winit`. `winit` offers some types like `Window` that make a public-facing API, and then has a number of platform and os modules that provide the pieces it needs to build that. These modules are `#[cfg]`’ed in based on the platform you’re compiling for.

So we just add a wasm backend to `winit`, right? Okay, `rustup target list` gives us two wasm target triples: `wasm32-unknown-emscripten` and `wasm32-unknown-unknown`. These target triples are of the form architecture-vendor-operatingsystem – there’s a few variations; target triples are actually very ad-hoc, so sometimes you see a 4th ABI element at the end or other stuff, like `x86_64-unknown-linux-musl`. So… our operating system choices are `unknown` and `emscripten`.

Let’s talk about unknown first, ’cause it’s simple: There is no operating system. There is no information about what the actual computer can do at all; it’s the equivalent of wasm’s lack of a host API. You have a CPU and RAM and that’s about it. If you choose something like this it generally means you’re using #[no_std] and writing everything yourself. For the wasm target, Rust helps you out somewhat by providing a very minimal version of std that at least gives you memory allocation and such, and panics if you try to do something it can’t do, like open a file. If you want to talk to a web browser then you have to explicitly write your FFI in Rust and write the Javascript code to bind to it, or use something like stdweb or wasm-bindgen which does that for you. This is basically the equivalent of coding on a microcontroller, or writing #[no_std] code and having to use the raw libc or winapi crates for everything.

Really, `wasm32-unknown-unknown` is a very hack-y placeholder that is there because wasm doesn’t actually provide any standard API’s. `unknown` isn’t an operating system, the `std` implementation gives you a bit of functionality but consists of like 50% `unimplemented!()`, and it’s really not proper or safe for a program to assume `wasm32-unknown-unknown` means it’s running in a web browser instead of, say, a standalone interpreter. The program can’t tell what it’s running on, it’s unknown! It’s probably at least partially this sort of arrangement that led to aturon’s consideration for how to make portability in Rust nicer… or if not, it’s the sort of arrangement that his goals will hopefully make less hack-y.

That said, if you do the manual API binding and assume you run in a browser, wasm32-unknown-unknown generates quite small and very fast modules that are easy to use and debug, and mostly works like plain-ass Rust code. If you can ignore the weird liminal state it exists in, then it works VERY well.

But making a library depend on this sort of trickery is usually not a very good idea; avoiding that sort of “I know the magic to touch the hardware so this program will work on that hardware” setup is exactly the kind of thing portability libraries exist to prevent. Making a `winit` backend for an unknown operating system is kind of an illogical proposal.

Okay, now let’s talk emscripten. emscripten is what Unity’s magical-pile-of-hacks-that-actually-work uses (or at least, used in 2015; I haven’t touched it in a while and maybe it’s changed). It is, essentially, a Javascript library that runs in a browser and pretends to be a Unix system, or at least a close approximation. The emscripten package also has a C compiler and linker toolchain that can take normal-ass C code and compile it into Javascript (or now webassembly as well) that calls into this library for its system calls. It has a pretend terminal, a pretend file system, and all that stuff. It also has a pretend media API: it exposes SDL2 and OpenGL API’s, and translates them to WebGL and canvas method calls in the browser. It’s awesome. Telling rustc to compile for `wasm32-unknown-emscripten` generates wasm and some Javascript glue that binds to emscripten’s “operating system” system calls.

Emscripten is also, in my experience, really unpleasant to use. You need a custom toolchain installed on your dev machine, it outputs multi-megabyte binaries which are slow to load and slow to start up, and both building programs and running them is a pretty fragile process. You need some Javascript binding code and custom shims anyway so you don’t actually get to pretend you’re in pure Rust, though they’re not large… but if you’re writing a library then it makes life a bit more difficult to paper that over for your users. The goal is not to make it easy for users to port their ggez code to the browser, it is to make their ggez code run in the browser. Plus, if anything breaks you get giant 1000-line stack traces that hop between Rust, the browser’s Javascript implementation, and emscripten’s guts. And getting really good complete and authoritative documentation on details of emscripten can take a fair bit of digging, though getting started isn’t too painful.

emscripten is also why ggez uses SDL2 for its windowing and events: emscripten provides an API that matches SDL2’s, so IN THEORY any game using SDL2 will compile to emscripten just fine. (Well, and I’d used SDL2 before, so when I was starting to experiment with Rust it seemed the natural choice.) In practice… well, try to build a ggez game for emscripten and it almost certainly won’t work. It might just give you a blank screen, it might give you a black rectangle, it might hang or crash your browser. I’ve gotten all the above at various points in time. Why? I dunno. I’m not a wizard in browser programming or emscripten, and ggez has a couple layers to it and there’s so much cruft in the way that debugging is pretty hard.

So we really have a few options for what stack to use for windowing, in order of preference:

wasm32-unknown-unknown -> ??? -> winit -> ggez

wasm32-unknown-emscripten -> winit -> ggez

wasm32-unknown-emscripten -> sdl2 -> ggez

The one that I really want is the top one, because it will make development simple, it will produce nice small binaries, people who want to use ggez to make games for web won’t need hairy emscripten installs on their systems (with the fun things like version mismatches that might involve), and it lets us finally get rid of the SDL2 dependency and have ggez be Pure Rust™ like we’ve been wanting since forever. That’s also the one that involves the most work, since it basically seems that it would involve personally yak-shaving my way into filling in that big fat ??? section… which involves getting involved in the wasm-bindgen project, probably taking it to the point where we can actually make a wasm32-unknown-web target for rustc and treat the web like the operating system it is, and maybe even getting involved in the Webassembly API standardization efforts. Now don’t get me wrong, this sounds heckin’ fun. But it will also take a long time.

The second choice would be to port ggez to winit , and make winit work Really Well on emscripten. This lets us ditch SDL2, but may or may not make life easier for targeting emscripten … winit on emscripten uses its SDL2 API anyway, so it would put an extra layer of indirection between ggez and where the rubber meets the road. It sort of sounds like the worst of both worlds.

Or, we can continue on with the plan as it originally was, and use SDL2 on emscripten to build ggez programs for web… and have the worst of both worlds along different axes, since we don’t get to ditch SDL2 and we still have to deal with emscripten. Will that actually be easier and better than a custom but far slimmer web API binding? I don’t know.

Audio

Basically the exact same story as windowing. ggez uses `rodio` for audio output, and `rodio` has an emscripten backend. We can either use that, or make it work with `wasm32-unknown-unknown` somehow.

Summary

So… I guess, really, we can either build for the present or for the future. We can sink effort into making gfx 0.17 work better on WebGL, or into making gfx HAL work at all. We can sink effort into making ggez run on emscripten, or making it run on a real web-native API. I really want to build for the future, but building for the present is what made ggez as successful as it is. However, my (not extensive) experience with emscripten has made me rather doubt that ggez will work on it as well as I want it to. I really want to push the state of the art forward, but if I spend all my time doing that, getting ggez running on web in 2018 is probably not going to happen. Which direction is the way to go? I don’t know; that’s more or less what I wrote this whole mess to try to figure out. But, looking back at this, if I do things how I want to be able to do them, my to-do list is, roughly in order:

1. Get a real web browser API formalized for webassembly
2. Update rustc to have a wasm32-unknown-web target
3. Make wasm-bindgen play nice with it if necessary
4. Make winit and rodio and probably gilrs have backends for wasm32-unknown-web
5. Help finish gfx HAL
6. Write gfx-render (oh, and probably port gfx_glyph to it)
7. Write a WebGL backend for gfx
8. Port ggez to winit and gfx-render
9. Hey presto, now it all works perfectly on the first try, with no effort at all!

Amazing I ever get anything done, isn’t it? (I very often don’t.)

I think that having ggez able to build games that run both on desktop and on the web, easily and transparently with minimal drama, is a major killer feature for both ggez and for Rust as a whole. I mean, what else can do this? Unity3D, and… that’s about it, as far as I know (though I confess I haven’t looked that hard). I mean, think about that kind of power: Unity3D can do this, and so can some little game framework bashed together by hobbyists over a year or so. That’s the power of Rust, and the power of Webassembly. We have immensely powerful tools, but they still have a lot of rough edges, and there remains lots of building to be done to support them. But this is the future we’re working towards, and it is happening now.

Hop on the train; whichever way we take, it’s going to be a fun ride!