The Browser Wars

In the year of our lord, 1990, the first web browser, WorldWideWeb, was born. It was a peaceful, enlightened time where text-only information flowed unmolested between intellectuals for the purpose of pushing the boundaries of scientific discovery.

Three short years later in 1993, that period of relative peace and tranquility was shattered by the Browser Wars. Mosaic, the first popular graphical web browser, was released, fueling the meteoric rise of the Internet. Its creators went on to found Netscape, whose Navigator browser fended off a bevy of competitors to capture a near-90% share of all Internet users in just two short years. But such success would not go unnoticed.

In 1995, tech giant and evil empire Microsoft unleashed its first web browser, Internet Explorer 1.0 — a licensed rebrand of the Mosaic browser. A bloody and protracted battle ensued between Netscape and Microsoft as they fought for Internet dominance. In war, technological advances abound, and this war was no exception. Microsoft shipped the first CSS implementation and created iframes and the <marquee> tag; Netscape countered with the amazing <blink> tag. And of course, we can’t forget everybody’s favorite web language…

On the 11th Day He Rested

In May of 1995, Brendan Eich created JavaScript…in 10 days. On the 11th day, he rested.

Actually, it was originally called Mocha, then LiveScript, before the name was changed to JavaScript despite the language having nothing to do with Java. Which is funny, because although it got its list of reserved words from Java, its syntax is based on C. It’s an object-oriented language… but barely. And it’s a functional language… but just barely.

Microsoft — in true Microsoft fashion — responded quickly by reverse engineering JavaScript and releasing it in Internet Explorer 3.0 as “JScript”. The name wasn’t the only difference; of course there had to be implementation inconsistencies — same with CSS and HTML. It just wouldn’t be Internet Explorer without them.

Ultimately, Netscape was unable to compete with the juggernaut that Microsoft had become. After all, Microsoft controlled Windows, and Windows dominated the computer industry. While Netscape relied on corporate licenses for revenue and explicit downloads for the free personal edition, Microsoft simply forced IE on all its users. Netscape’s market share dwindled to less than 4% of all users by 2002.

The Internet bubble had burst, and with it, all the optimism surrounding the Internet. Microsoft’s anti-trust case resulted in a mere slap of the wrist. A dark shadow was cast over the World (Wide Web). Hope gave way to despair. The Web stood upon the edge of a knife. Stray but a little, and it would fail, to the ruin of all. Yet hope remains, while the company is true.

A seed had been planted in the midst of all that chaos; Netscape had the foresight to submit JavaScript to ECMA International as ECMAScript in 1996 at the height of their battle with Microsoft. It was a decision that would change the fate of the Web as we know it forever.

From the Ashes Arose a …Firefox?

Like Darth Vader’s Empire (or Kylo Ren’s First Order for you younglings), Microsoft’s grip on the Internet appeared unyielding and absolute. Practically all browser innovation ground to a halt after Microsoft released its infamous Internet Explorer 6.0. Like some sort of dystopian government-issued slurry, web developers were forced to survive off of a browser locked in time. Microsoft turned its focus to other sinister projects like ActiveX, FrontPage, and Silverlight.

But the clouds began to clear. It was around this time the open source movement started to truly pick up steam. Linux was proving that thousands of distributed hackers working in their spare time could write a better operating system than the largest tech company in the world. Netscape had open-sourced much of the code for Netscape Navigator to be shepherded by the non-profit Mozilla Foundation.

Originally (and more appropriately) branded “Phoenix”, the browser that would become Firefox was first released in 2002 and would introduce features like a separate built-in search bar, tabbed browsing, private browsing, a built-in spellchecker, and a download manager. Y’know, all the amazing features we take for granted today; the features Microsoft should have been working on? Geeks throughout the world — sick of supporting their family members’ spyware-filled computers — began installing and promoting Firefox, and Mozilla slowly started to recapture Internet market share.

Most importantly, Mozilla put web developers first. They provided best-in-class debugging capabilities, first as integrated features and later through third-party extensions like Firebug. Suddenly it was cool to be a web developer again. A renewed focus on accessibility, clean, semantic markup, and table-less layouts sparked a renaissance of web design. The tides of the Web were about to change once again.

A New Hope

Slowly but surely, Firefox began to claw back market share from the Empi… —er, I mean Microsoft — reaching its peak of around 30% of global market share in 2010. Like most large occupying forces, Microsoft was slow to respond. Along the way Mozilla found a powerful ally in a little Internet startup called THE GOOGLE, which paid handsomely for the privilege of being Firefox’s default search engine.

We were entering the Web…2.0!!!

Focus shifted from mass consumption of mostly useful content to mass creation of mostly worthless content. Blogging, multimedia sharing, public APIs, mashups, and social networking exploded in popularity in the years following 2004. The open technologies developed during the heights of the Browser Wars were now starting to be utilized to their full potential. Closed and invasive technologies like ActiveX, Java, and Flash were being tossed aside in favor of more open, standardized, and accessible web technologies.

The web was no longer just a network of linked documents. It could be interacted with, contributed to, remixed, watched, played with, mashed up, pitched, and IPO’d. It began disrupting entire industries that were clinging to dying business models. As more and more people joined in on this second renaissance, a new bevy of tools was created to aid developers in creating these much richer experiences.

Click. Wait. Click. Wait.

Now I will say something that may shock some of you younger folks. You will likely dismiss it as the ramblings of a worn and bitter old curmudgeon.

There was a time when the vast majority of websites didn’t require JavaScript to be enabled at all.

(dramatic pause)

Most web apps in the late nineties and mid-2000s were completely stateless. In other words, everything that the browser rendered was thrown out and replaced wholesale on every single click. For developers, this was wonderful. You only needed to worry about state on a per-page basis, and that logic only lived in one place: the server.

It was around this time I started my web development career. I graduated from the U with a degree in — of all things — Physics with the hopes that it would land me a job at Barnes & Noble or a trendy Coffee Shop. But like many of my fellow college classmates, things didn’t go according to plan (you need at least a master’s to get a job at Barnes & Noble).

Luckily, my much smarter brother suggested I take a look at a new web framework called Ruby on Rails. I was bored and unemployed so I thought “why not?” and gave it a shot. I then proceeded to gain more practical knowledge in two months than I had in 6 CSci classes at the U of M. With Rails, software development finally clicked for me.

I was hooked and I went on to learn the ropes of freelance development and play startup roulette whenever I could afford to. Rails was the perfect companion — but then again, I am a frameworks guy. I could prototype fast and deliver multiple projects on time and I only needed to know Ruby and a bit of CSS and HTML to do it. It was a simpler time. It was a wonderful time. But then Google had to go and ruin it by creating Gmail and Google Maps.

Stateless web applications are great for websites that primarily serve text, but if you ever used Hotmail or Mapquest back in the early 2000s, you understand why Google’s competing offerings marked such a dramatic inflection point for the web development community. They were the first high-profile demonstrations of what could be done with JavaScript and AJAX.

AJAX — or Asynchronous JavaScript and XML — was a technology that had lain dormant for years before being fully utilized. Its roots are in the <iframe> tag, which, ironically, was introduced by Microsoft in ’96 at the height of the first great browser war. Iframes were the only way you could update content without a full page refresh, but updates were limited to the iframe itself. Microsoft later added the ability to update any content on the page without a refresh (via the XMLHTTP ActiveX control that would evolve into today’s XMLHttpRequest), building on its original iframe technology.

This meant a client-side web application could be built entirely in JavaScript. In this way, websites were becoming more like desktop applications, and the browser was becoming more like an application platform. But unlike most platforms, the web browser was everywhere. The funny little language written in 10 days that stole Java’s name was about to steal its thunder, too, by becoming the first true write-once-run-everywhere language.
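The mechanism behind all of this can be sketched in a few lines. What follows is a hedged illustration of the classic XHR callback pattern, not production code; the request constructor is passed in as a parameter (rather than assuming a global XMLHttpRequest) so the sketch isn’t tied to any particular environment, and getJSON is a made-up helper name.

```javascript
// A minimal sketch of the classic AJAX callback pattern.
// XhrCtor is whatever XMLHttpRequest-like constructor the
// environment provides; injecting it keeps the sketch portable.
function getJSON(XhrCtor, url, onSuccess, onError) {
  var xhr = new XhrCtor();
  xhr.open('GET', url, true); // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) return; // 4 = request complete
    if (xhr.status >= 200 && xhr.status < 300) {
      onSuccess(JSON.parse(xhr.responseText));
    } else {
      onError(xhr.status);
    }
  };
  xhr.send(null);
}
```

In a browser you would call getJSON(XMLHttpRequest, '/messages', render, showError) and repaint only the affected part of the page, no full refresh required.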

Any Application that Can Be Written in JavaScript, Will Eventually Be Written in JavaScript

“JavaScript was written in 10 days.”

“JavaScript is slow.”

“JavaScript is for people who don’t have computer science degrees.”

“JavaScript is the duct tape of the web.”

“JavaScript is a terrible language.”

It’s no secret that JavaScript is a controversial programming language. You could argue that it was hastily shipped as a feature, that it lacks features most modern languages have, that large applications cannot be built using it (let alone maintained), and that the only reason why JavaScript is so widely used is because it’s the default language of the web; and you wouldn’t really be wrong on any of those points.

But you’d be missing the larger overall point about JavaScript — it’s the only language you can write that will run pretty much everywhere. It wasn’t like JavaScript didn’t have any serious competition; Flash threatened to take over the web for a while, Java made serious inroads with its applets, and Microsoft made many attempts to subvert the language and direct the web toward their own interests.

But through it all, JavaScript has endured. JavaScript… finds a way.

The JavaScript ecosystem was pretty immature at this point, but as the complexity of web apps got pushed out to the client, so too did our appetite for modern tools and libraries. Out of the plethora of such libraries emerged a clear favorite; one that still influences the Web to this very day.

Write Less, Do More

jQuery rose to dominance because it was especially good at providing the intuitive API that the W3C didn’t. In fact, many of the concepts introduced by jQuery are now being natively added to browsers.

jQuery also smoothed out browser API inconsistencies, including simplifying the tricky Ajax API, which was becoming increasingly utilized. jQuery made developing in JavaScript actually fun, and it can be credited as one of the main reasons the language has surged in popularity over the past decade.
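Much of that fun came from jQuery’s chainable, fluent API: every method returns the wrapped set, so operations read left to right. Here is a toy chainable wrapper over plain objects, a sketch of the pattern only, not jQuery itself ($$ and Wrapped are made-up names):

```javascript
// A toy sketch of jQuery-style chaining (not jQuery itself).
// Each method returns `this`, which is what enables chaining.
function Wrapped(items) {
  this.items = items;
}
Wrapped.prototype.addClass = function (name) {
  this.items.forEach(function (el) {
    el.classes = (el.classes || []).concat(name);
  });
  return this; // hand the same wrapper back to the caller
};
Wrapped.prototype.text = function (value) {
  this.items.forEach(function (el) { el.text = value; });
  return this;
};
function $$(items) { return new Wrapped(items); }
```

$$(els).addClass('open').text('Hello') reads almost like a sentence, which is exactly what made jQuery so approachable.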

We also started to see the first glimpse of the componentization of the web. jQuery widgets and plugins allowed developers to enhance existing HTML elements with behaviors that went far beyond the tags specified by the W3C. And they were shareable, too. If you had a need, there was probably a jQuery plugin or widget to fill it; the problem was in wiring them up and the lack of a standard interface.

As convenient as jQuery made JavaScript, it was still mostly thought of like a gaudy spoiler on a souped-up ’98 Honda Civic: flashy, but ultimately superfluous. JavaScript engine performance was such that you really did have to be Google to build apps like Google Maps. That of course all changed when we traded in the 4-banger for a supercharged V8 engine, wrapped in shiny, stunning Chrome.

Bro, Do You Even SPA?

Chrome burst onto the web scene like a nuclear bomb. Google — no longer comfortable depending on Mozilla and Microsoft to advance web browser technology — decided that in order to push the web forward, it needed to build its own browser.

So in the Year of Our Lord 2008, Google officially entered the Great Browser Wars with the release of Chrome.

With Chrome and its exceptionally fast V8 engine, JavaScript could be pushed to absurd new limits. Every couple of months you would see reports come out about how much faster JavaScript ran in Chrome, which pushed the other vendors to improve their JavaScript engine speeds.

Then someone had the bright idea to rip V8 out and throw it on the server, and Node.js was born. Alongside it rose JavaScript-friendly NoSQL databases like MongoDB and CouchDB, and for the first time in history you could be a full-stack JavaScript developer. You think people hate JavaScript? Ask them about Node.

This all paved the way for a cavalcade of bloated, over-engineered web applications. I remember my first attempt at writing such an abomination from scratch using jQuery. It didn’t take long to end up with what is commonly known as “jQuery spaghetti code”. I did, however, learn a few things along the way.

1) Don’t do that.

2) Look for a library or framework written by someone smarter than you.

Being a Rails guy, I gravitated toward a framework called JavaScriptMVC. With some structure to build upon, I finally started to get a handle on what building a single-page application was like.

Pretty soon everyone started jumping on the SPA (Single Page Application) bandwagon. It didn’t matter whether or not it even made sense to; it was clear that it was the Next Big Thing™ and everyone had to do it. Twitter was one of those companies that felt the urge to SPA. With Twitter, people’s very thoughts were being uploaded to the Web en masse. Tim Berners-Lee must have been proud. Unless of course he were to try to load it up in a browser with JavaScript disabled and be presented with nothing but a notice telling him to enable JavaScript.

Think for a second about what Twitter effectively provides as a service: it lets people put text on the internet for other people to read.

That’s pretty much it…

This problem was solved 25 years ago.

The point is, there are good candidates for SPAs and there are bad candidates for SPAs. Gmail and Google Maps are good candidates; Twitter and blogs like Gizmodo… not so much. When your entire business model revolves around displaying text to as many people as you possibly can, it’s probably best to keep it simple (stupid).

One could argue that SPAs run contrary to the very spirit of the Web.

They often break the browser’s back button.

They’re often unlinkable.

They’re virtually invisible to screen readers and web crawlers.

They put unreasonable demands on underpowered devices.

Where the web used to be a vibrant sea of information that could be indexed, remixed, and mashed up, it’s slowly becoming a fragmented collection of walled gardens. SPAs are bad in all the ways native apps are bad, but without the performance benefits and access to most OS-level APIs.

But this hasn’t stopped us foolish web developers from charging ahead with a never-ending parade of tools, libraries, and frameworks to convert our perfectly good stateless web apps into SPAs. So let’s roll up our sleeves and examine the current landscape of web development tools.

Libraries vs. Frameworks

So much of the landscape of modern web development is a culmination of the great Libraries vs. Frameworks debate.

While frameworks eliminate choice paralysis and imposter syndrome, they can be too magical and hide too much behind layers of abstraction. Libraries are easy to understand and provide freedom of choice, but you are often left worrying about whether or not you’re doing things The Right Way™.

The truth is libraries and frameworks both end up being equally complex to work with precisely because the problem of building large applications is inherently difficult. Frameworks make the first 80% of a project a breeze while leaving the real work to the last 20%. I find the reverse is true for Libraries.

Really, it all comes down to personal preference. Ask yourself:

Are you the type of person who is more likely to believe you can do something better than everyone else, or are you like me and more likely to defer to those you believe have more expertise than you?

Are you decisive or do you agonize over the smallest choices?

Do you feel a compelling need to understand how everything works, or are you willing to implicitly trust other people’s systems?

Are you a get-things-done type of person, or a do-things-right type of person?

These are questions to keep in mind while we examine two of the most popular offerings available today.

Transclude This!

Backbone.js — a view library prized for its tiny API footprint — was the first modern JavaScript tool to gain significant popularity in the age of SPAs. Batteries-included frameworks were also popping up at the time, like Ember (which had evolved from the SproutCore framework used by Apple), but nothing became as popular as Backbone.

Then a small team within Google tasked with revamping the DoubleClick front-end open-sourced the first version of a framework they called Angular (named for the angle brackets used in HTML). In fact, the stated goal of the Angular team was to embrace HTML and enhance it rather than abstract it away. They also put a heavy emphasis on testing, which was rather unheard of for JavaScript apps at the time. Being a Rails guy, this was a breath of fresh air for me.

With libraries like jQuery and Backbone, the markup was completely decoupled from the JavaScript. Elements had to be updated manually, and passing information between components was a mess. With Angular, HTML elements could encapsulate their own behaviors, allowing them to be composed just like any other HTML tag. It’s essentially the dream of Web Components (but we’ll cover that later).

Angular’s two-way template binding felt like magic. It relieved us developers from having to write the same boilerplate logic to update the DOM again and again like we had grown used to doing with libraries like jQuery.
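The underlying idea can be sketched with getters and setters: writes to the model notify the view, and view events write back into the model. This is a toy illustration only; Angular 1.x actually implemented two-way binding with a dirty-checking digest cycle, not property interception, and the bind helper below is a made-up name.

```javascript
// A toy sketch of two-way binding (not Angular's actual
// dirty-checking digest cycle). Writes to the model notify
// the view; "view" events write back into the model.
function bind(model, key, onViewUpdate) {
  var value = model[key];
  Object.defineProperty(model, key, {
    get: function () { return value; },
    set: function (next) {
      value = next;
      onViewUpdate(next); // model -> view
    }
  });
  return function viewChanged(next) { // view -> model
    model[key] = next;
  };
}
```

Bind once, and the boilerplate DOM-update code described above disappears: assigning to model.name is all it takes to refresh the view.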

Angular’s popularity skyrocketed, and it enjoyed a brief period of supremacy as the new hot tool of choice. But this is the front-end, people — nothing stays on top for more than a couple years.

Although the hype surrounding Angular at the time was deafening, it had its share of harsh critics, who pointed to its large API, its use of eccentric terms like “transclusion” and “directive”, and its relatively high learning curve. Naming has been a recurring problem for the Angular community. Its popular test runner Karma was originally called “Testacular” before it raised too many eyebrows. Two-way binding was also widely criticized as an anti-pattern that could lead to state management hell.

Then the Angular team found themselves between a rock and a hard place. Because they were working a few desks down from the Chrome core development team, they were acutely aware of the coming Web Components specification. The problem was that they hadn’t written Angular 1.0 to be compatible with it — instead treating the framework as a stopgap solution. Despite its popularity, they made the difficult and controversial decision to deprecate the entire framework and create a new version from scratch called Angular 2.0.

The news that there would be no backwards compatibility or clear upgrade path sent shockwaves through the community. Angular might have been able to ride out this period of uncertainty had it not been for a little open source project started by none other than Facebook.

For Every Action…

The story behind React goes something like this: Once upon a time, some Facebook front-end engineers were having a hard time getting a notification icon to work.

MVC had failed them in a big way (or they had failed MVC, I forget which).

Anyways, they assembled the greatest front-end minds in the company to examine the problem. They came to the conclusion that “MVC doesn’t scale”. As we all know, not scaling is one of the 7 deadly sins of OSS libraries (like the sin of not being backed by a large software company, or the sin of existing for more than 4 years).

So they did what any reasonable, pragmatic software development team would do when faced with a common problem that’s been solved many times by much more experienced developers: they took a proven, off-the-shelf library and integrated it following industry standards and best practices….

..nah, just kidding. This is the front end, people — they rolled their own framework!

Obviously this is a dismissive tongue-in-cheek oversimplification of the reasoning behind creating React and Flux, but it’s important to take a step back and ask ourselves why the tools we use were created in the first place. The truth is often far more nuanced than you think it is. The problem of getting a notification icon to be consistent across disparate components was symptomatic of a much larger problem that exists when creating and maintaining large, dynamic, single-page applications: managing state.

State management was easy before the rise of JavaScript because web applications were, well… stateless. They were constructed on the server, and the browser was simply a render target. Every click would guarantee a new approved state of the user interface as provided by a single authority. However, as web clients became thicker and Ajax was used more heavily, it became possible for any dark corner of the application to modify the global state known as the DOM without regard for consistency. Global state — as every good developer knows — is a very bad thing indeed.

On a small enough team building a simple enough SPA, these problems can be mitigated through good communication and engineering processes. But when you have dozens of teams working on the same application (e.g. at Facebook’s scale) you’re going to have a hard time not stepping on each other’s toes. As was mentioned before, Facebook is a PHP shop and its engineers pined for the good ol’ days when you could simply refresh the page with completely new content — guaranteed to be consistent and correct. But they knew they couldn’t go back to the old click-and-wait pattern that users no longer had the patience for. So they started out with a very radical proposition.

They asked themselves, “what if we just replaced the entire body of the DOM with the entire updated UI every time any new data is retrieved, using innerHTML?” Sound expensive? Well, it was. But not as expensive as you might think. Intrigued, they started working backwards from that conclusion, crafting a very sophisticated diffing algorithm which they cleverly branded the “Virtual DOM”, which essentially acts like a single call to innerHTML but replaces only the elements that actually changed rather than the entire tree. In doing so, they were able to simulate the same mechanism stateless web applications relied on to render templates generated on the server. No more worrying about those pesky notification icons; the server was now back in the driver’s seat.
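The core idea can be sketched with plain objects. This toy diff (nothing like React’s actual reconciler, which also uses keys and heuristics for reordering) walks two trees and collects which nodes changed; the node shape and patch format are made up for illustration:

```javascript
// A toy virtual-DOM diff over plain objects (not React's
// reconciler). Nodes are { tag, text, children }. The diff
// records the paths whose tag or text changed.
function diff(oldNode, newNode, path, patches) {
  path = path || 'root';
  patches = patches || [];
  if (!oldNode || !newNode || oldNode.tag !== newNode.tag) {
    patches.push({ path: path, type: 'replace' });
    return patches;
  }
  if (oldNode.text !== newNode.text) {
    patches.push({ path: path, type: 'text', value: newNode.text });
  }
  var oldKids = oldNode.children || [];
  var newKids = newNode.children || [];
  var len = Math.max(oldKids.length, newKids.length);
  for (var i = 0; i < len; i++) {
    diff(oldKids[i], newKids[i], path + '.' + i, patches);
  }
  return patches;
}
```

Render the whole UI as a fresh tree, diff it against the previous one, and apply only the resulting patches to the real DOM: the “replace everything” mental model without the innerHTML price.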

React has sparked a growing movement towards functional programming for the web. A pure function only takes inputs and returns an output — much like React components simply take properties and return descriptions of DOM elements. The beauty of this approach is that you can define your entire app as a series of calls to these functions, reducing the current state of the application down to its new state.

Redux, my current favorite JavaScript library, stores the entire state of your application as one big object and simply calls reducers on it to transition to the next state in response to inputs from the server or the user. It’s a beautifully simple way to manage state in an application. Pure functional languages like ClojureScript and Elm are growing in popularity as well. Needless to say, there’s no excuse to be bored if you’re a JavaScript developer.
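The whole mechanism fits in a few lines. Below is a minimal sketch of the pattern, not the real library (Redux proper adds subscriptions, middleware, combineReducers, and dev tooling), and the counter reducer is a made-up example:

```javascript
// A minimal sketch of the Redux pattern (not the real library).
// A reducer is a pure function: (state, action) -> new state.
function counter(state, action) {
  if (state === undefined) state = { count: 0 };
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 };
    case 'DECREMENT':
      return { count: state.count - 1 };
    default:
      return state; // unknown actions leave state untouched
  }
}

// The "store" just holds the current state and feeds every
// dispatched action through the reducer.
function createStore(reducer) {
  var state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: function () { return state; },
    dispatch: function (action) { state = reducer(state, action); }
  };
}
```

Every change to the application flows through dispatch, so the application’s entire history is just a fold (a reduce) over its actions.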

Web Components and the Componentization of the Web

The truth is, React, Angular, Ember, and to a certain extent jQuery (via its “widget” pattern) are — in a way — all glorified polyfills for web components. But what are web components exactly, and why should you care? When HTML5 landed it brought with it a bevy of new tags like <canvas> and <video> that allowed for the creation of much richer and more semantic websites. It also brought more APIs for interfacing with OS-level capabilities like geolocation, websockets for realtime pub-sub messaging, support for offline apps, web storage, and more.

But what became clear to many developers, including the Google Chrome core team, was that custom tags should really be created by the wider web development community rather than decided on by a slow-moving standards committee and implemented natively by browser vendors six ways from Sunday. This was the conclusion the wider web development community had come to many years before, of course, which is why tools like jQuery, Angular, Ember, and React were invented.

Web components are really a formalization of this wider desire for a standard around creating and sharing custom HTML elements. While it was great that independent efforts had allowed developers to componentize their own applications, those components were often not compatible with one another. This led to a ton of churn, with the same components being recreated X number of times.

Web components consist of four key technologies.

Custom Elements — this is what comes to mind when most people think of web components. Up until now, the set of tags available to us has been finite and dictated by the W3C. With web components, we can tell the browser about a completely new tag, and it will inherit the base prototype of the standard HTML element along with whatever additional behaviors we want to include. Finally, the internet can have its own <cat> element.

Templates — a bevy of templating solutions have cropped up over the years because of the lack of a sane way to easily stamp out chunks of markup to be manually added to the DOM using JavaScript. With the <template> tag, a standardized way to do this has emerged.

Shadow DOM — not to be confused with React’s Virtual DOM. Part of component-based software engineering is the concept of separation of concerns. CSS by its very nature relies on polluting the global scope; this is the way it “cascades”, after all. But when you break your application into isolated components, the last thing you want is some other element’s styles infecting your own. Shadow DOM allows the isolation of styling at the element level. Outside styles can’t get in, and inside styles don’t leak out. In this way, a web component can be transported into any web application and retain its look and feel.

HTML Imports — this is one of the more controversial features of the web components specification and a big reason why it has taken so long for the major vendors to agree on a 1.0 web component specification. It basically provides a way to import all of a web component’s dependencies via a single link tag, so that all anyone needs in order to add someone else’s web component to their page is to include that one tag and then use the element wherever they like.
This specification is in direct competition with ES6 modules, however, and the wider web community is still very much divided on the subject of dependency management.
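To make the Custom Elements piece concrete, here is a hedged sketch. The <cat-gif> tag is made up for illustration, and the Base fallback exists only so the snippet can run outside a browser, where HTMLElement and customElements aren’t defined.

```javascript
// A sketch of a custom element. <cat-gif> is a made-up tag.
// Fall back to a plain class so the sketch runs outside a browser.
var Base = typeof HTMLElement !== 'undefined' ? HTMLElement : function () {};

class CatGif extends Base {
  // Called by the browser when the element is inserted into the DOM.
  connectedCallback() {
    this.innerHTML = this.render();
  }
  render() {
    var src = this.getAttribute ? this.getAttribute('src') : null;
    return '<img alt="cat" src="' + (src || 'cat.gif') + '">';
  }
}

// Register the tag when a browser is actually present.
if (typeof customElements !== 'undefined') {
  customElements.define('cat-gif', CatGif);
}
```

In a page you would then simply write <cat-gif src="nyan.gif"></cat-gif>, like any other tag.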

While these features sounded good to most people in the industry at the time, getting the major browser vendors to agree on how they should be implemented was another story altogether. First proposed in 2012, Web Components are only just now starting to show up in the major browsers, with Google’s Chrome at the forefront of adoption. In fact, it wasn’t until last April (2015) at the W3C summit that final agreement on a 1.0 spec for Web Components was reached, and that only happened because somebody locked the participants in a room and told them to reach a compromise on each of the features proposed in the spec. They agreed on implementations for three out of the four before lunch and knocked out the last one after reconvening.

So now that we have a 1.0 spec and we’re starting to see the full web component specification show up in the major browsers, we can stop the flamewars around tools, frameworks, and libraries. We can stop running from solution to solution like Sneetches with stars upon thars. We can finally share our work and extend the web as one, cohesive community. We can move past the wild west web days where the only constant was change, wheels were made to be reinvented, and fashion-driven-development was the norm. Right?

I mean… right….?

“I definitely think it’s the wrong programming paradigm; I really hope that web components do not succeed”. — Pete Hunt, React Core Developer

Uh oh..

“There is a lot of stuff you get for free when you build like the browser doesn’t exist and this is one thing that distinguishes React from Web Components and Polymer and that kind of thing… they’re getting closer and closer in with the browser, and we’re getting farther and farther from the browser. And I think that our technique is more sustainable in the long term.” — Pete Hunt

False alarm, people. Take down the streamers and let’s get back to work. You see, what Pete is saying here is consistent with the direction the React ecosystem has been heading, and it’s very much influenced by the dichotomy between Facebook and Google. Google is one of the four major browser vendors. Its business model hinges upon people being on the web. Any fragmentation of that ecosystem into walled gardens like the Apple App Store or Facebook hurts Google. So to them, web components and web standards are a big deal. Heck, Google was the group that proposed web components in the first place! This is why Google is pumping such an extraordinary amount of resources into its Polymer and Material Design initiatives. They’re saying “keep building for the web”, by which they mean “keep users on the web!”.

Facebook, on the other hand, is chock-full of brilliant but mostly young engineers who find themselves in a precarious position. They control neither a major browser nor a major operating system. They are fully at the behest of the organizations that do, organizations that just happen to be competing for the same advertising dollars Facebook desires.

So the engineers at Facebook, in a stroke of maniacal genius, said “to hell with the W3C and to hell with best practices!” and decided to completely abstract away the browser, add HTML to their JavaScript files, dogs and cats living together, mass hysteria. As the old adage goes, “when life gives you lemons, abstract them into lemonade!”

Sure, they could have pumped the might of their collective engineering prowess into improving the rendering performance of the major browsers and the wider development community would have benefited whether or not they chose to use React.js for their next project, but why help Google by donating precious engineering hours? To React, the browser’s DOM is a render target and nothing more. React is happy to spit out DOM, Canvas graphics, native mobile views, or plain old strings if you want to render from the server.

And therein lies the brilliance of Facebook’s approach. Their fate is no longer tied to forces they cannot control. So long as their engineers are aligned in using React for all their UI programming needs, they can target all platforms without relying on enclaves of disparate programming disciplines. JavaScript suddenly isn’t looking like such a bad career choice, is it?

So what does this mean for the future of the average web developer? Is Facebook right about the web browser being a relic from the past? Should it be treated as just another render target for our applications? Is it ok to over React?

Or do we take Google’s lead and double down on the web, building our apps with Web Components and embracing the ever-increasing functionality being added to web browsers all the time? Should you transclude your way into the future with Angular 2.0 or poly-fill-it-’til-you-kill-it with Polymer? Where exactly do we go from here?

JavaScript? Where We’re Going, We Won’t Need JavaScript

Despite the rising popularity of JavaScript, people have been doing their best to avoid writing it. One need only look at the sheer volume of languages that compile down to JavaScript to realize that while it may not be the best language available, it simply cannot be ignored.

A common refrain among web developers today is that of front-end development fatigue. The sheer volume and variety of libraries, tools, frameworks, specifications, and programming patterns has become overwhelming for many. It can feel as though the ground is constantly shifting beneath your feet. This churn is both the beauty and the tragedy of the open web.

Sure, we’re reinventing wheels more than a caveman with Alzheimer’s, but every once in a while something genuinely innovative rises above the crowd. It would certainly make life simpler to have our tools prescribed to us by a benevolent authority, as Apple does with its iOS and OS X platforms, but we might also miss out on some killer innovations.

It’s been a genuinely fascinating journey so far. JavaScript, that terrible, amateurish, “wat”-filled language, has proven to be far more versatile and powerful than people ever expected it to be. Because it straddled the line between being an OOP language and a functional language, it allowed the wider web development community to explore both paths. Grizzled systems developers who harbor secret desires to write Lisp for a living are now starting to take notice and spread the functional programming gospel to a new generation and platform.

Yet as it turns out, JavaScript is optional. More and more of us JavaScript developers aren’t even writing JavaScript anymore, even when we are writing JavaScript.

We’re time-traveling to the future and bringing back versions of JavaScript that haven’t even been fully specified yet, writing our code in them, and then transpiling down to ECMAScript 5.
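Here’s what that time travel looks like in practice, a minimal sketch of the kind of transformation a transpiler like Babel performs (illustrative only; real transpiler output differs in the details):

```javascript
// What we write: ES2015+ features that older browsers don't understand.
const greet = (name = "world") => `Hello, ${name}!`;

// Roughly what a transpiler emits: equivalent ES5 that runs anywhere.
// (Illustrative sketch; actual Babel output is more defensive.)
var greetES5 = function (name) {
  if (name === undefined) { name = "world"; }
  return "Hello, " + name + "!";
};

console.log(greet());         // Hello, world!
console.log(greetES5("web")); // Hello, web!
```

Same behavior, older syntax, which is exactly why we can write tomorrow’s JavaScript today.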

We’re writing CoffeeScript because we miss how happy Ruby made us feel.

We’re writing ClojureScript because we want to wield the power of God and the Universe.

We’re writing statically typed JavaScript variants like TypeScript and Flow because it’d be crazy to write anything in a dynamically typed language, amirite?

We’re even writing C and compiling it into a highly optimizable subset of JavaScript using Emscripten, LLVM, and asm.js, getting near-native performance inside the browser and making it possible to run applications and games like Unreal Tournament.
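For a taste of what that subset looks like, here is a tiny hand-written asm.js module (normally Emscripten generates this from C, and at vastly greater scale):

```javascript
// A hand-written asm.js module. The "|0" coercions declare that every
// value is a 32-bit integer, which lets an asm.js-aware engine compile
// the whole module ahead of time to near-native code.
function AsmAdder(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;
    b = b | 0;
    return (a + b) | 0;
  }
  return { add: add };
}

// Crucially, it's still plain JavaScript: in engines without asm.js
// optimization it simply runs as ordinary (slower) code.
var adder = AsmAdder();
console.log(adder.add(2, 3)); // 5
```

That graceful fallback is the whole trick: one artifact that is both a bytecode-like compile target and a valid script.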

It’s important we take a moment to pause here and reflect on everything we’ve covered. Tim Berners-Lee’s original vision of the Web didn’t include transpiling, compiling, minifying, or virtual DOMs, and it certainly didn’t include anything about it being an application runtime.

Yet here we are, trying our very best to make it one. Web browsers have been in development for over 25 years now, and they’ve accumulated a lot of baggage along the way. They weren’t designed to handle scrolling thousands of on-screen elements at 60 frames per second. They weren’t designed to handle notification icons. They weren’t designed to run Unreal Tournament or Photoshop. They’ve always had to contend with the albatross of backwards compatibility.

At their core, they are document rendering applications, but with more stuff bolted onto them than a Mad Max set piece. They were never designed to do what they’re being asked to do today, and if they had been, they’d look a lot different than they do now.

So maybe the React core developers are right to abstract the browser away. Maybe the browser is becoming the open and ubiquitous platform that Linux wanted to become. But if that’s the case, why use React? Why even write JavaScript for that matter? If we can now compile C down to JavaScript and get near-native performance in the browser, then why not build your web apps with whatever C-based language you prefer? If WebAssembly becomes viable (and it looks like there’s a good chance it will), what exactly does it mean to develop for the web anymore? Indeed, what exactly is the Web if not the world’s most widely available application platform?

Hello Web, Goodbye World

There’s a key component of the original vision of Tim Berners-Lee’s Web that I glossed over, and one that may very well be at risk of fading away: HTML was designed to be written by people. Not programmers. Not programs. Not compilers. People. Actual, ordinary, everyday people like the scientists at CERN, the cat lover in their basement, the burgeoning young author with controversial ideas, or the oppressed political activist fighting for freedom. People who have better and more important things to do than keep up to date with the latest hipster devstack trending on Hacker News. People for whom Twitter being blocked in their country or going down is more than a simple inconvenience.

This was the whole point of the World Wide Web. Anybody could participate in content creation, and it didn’t require any fancy tools, compilers, frameworks, IDEs, social media accounts, private web services, closed-source software, or college degrees.

That is what the web was all about back in the 90’s. Whip open a text editor, throw some HTML tags and text into a file, link to some other people’s files, and toss it on a webserver. Sure, it wasn’t always pretty, and there weren’t many billion dollar valuations, but it was as open and accessible as it is ever likely to be again. And perhaps that’s why Web Components give me hope. What if it were possible for anybody to stitch together a rich, dynamic web application to solve their niche little problem without being a professional programmer? How many innovative creations would we miss out on if developing software for the web was relegated to code wizards in ivory towers?
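That hope isn’t abstract: the standard Custom Elements API already lets a new tag work in plain HTML. Here’s a minimal sketch (guarded with a fallback base class so the snippet also runs outside a browser; in a real page you’d extend HTMLElement directly):

```javascript
// Minimal Custom Element sketch. The HTMLElement guard is only so this
// file runs outside a browser; in a page, extend HTMLElement directly.
const Base = typeof HTMLElement !== "undefined" ? HTMLElement : class {};

class GreetingCard extends Base {
  // The browser calls this when the element is attached to the document.
  connectedCallback() {
    const name = this.getAttribute("name") || "web";
    this.textContent = `Hello, ${name}!`;
  }
}

// Register the tag so <greeting-card name="Tim"></greeting-card> works
// in plain HTML -- no framework, build step, or compiler required.
if (typeof customElements !== "undefined") {
  customElements.define("greeting-card", GreetingCard);
}
```

Once defined, the element is just another HTML tag anyone can type into a file, which is exactly the 90’s spirit.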

Embracing the Web with Progressive Web Applications

There is a growing movement that seeks to embrace the desire for rich, dynamic web applications without throwing the baby out with the bath water. Common sense guidelines that seek to remedy some of the common problems associated with single page applications are starting to get attention. Instead of ignoring the browser or abstracting it away, they embrace it. Instead of leaving devices behind, they adapt to any set of capabilities. Instead of living in privately controlled stores, they live on the web.

They’re called Progressive Web Apps, and they are defined as:

Progressive — Work for every user, regardless of browser choice because they’re built with progressive enhancement as a core tenet.

Responsive — Fit any form factor: desktop, mobile, tablet, or whatever is next.

Connectivity independent — Enhanced with service workers to work offline or on low quality networks.

App-like — Use the app-shell model to provide app-style navigations and interactions.

Fresh — Always up-to-date thanks to the service worker update process.

Safe — Served via TLS to prevent snooping and ensure content hasn’t been tampered with.

Discoverable — Are identifiable as “applications” thanks to W3C manifests and service worker registration scope allowing search engines to find them.

Re-engageable — Make re-engagement easy through features like push notifications.

Installable — Allow users to “keep” apps they find most useful on their home screen without the hassle of an app store.

Linkable — Above all, to preserve the original intent of the World Wide Web, apps must be linkable.
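The “connectivity independent” item deserves a closer look, since service workers are what make it possible. Below is a sketch of one common approach, a cache-first strategy, with the decision logic pulled out as a plain function so it’s easy to follow (the sw.js wiring shown in the comment is the standard fetch-event pattern):

```javascript
// Cache-first strategy behind "connectivity independent": serve from the
// cache when we can, hit the network only when we must. The logic is a
// plain async function; a real service worker passes in caches.match
// and fetch as the two lookups.
async function cacheFirst(request, cacheMatch, network) {
  const cached = await cacheMatch(request);
  if (cached) return cached; // instant, and it works offline
  return network(request);   // fall back to the network
}

// In an actual sw.js you would wire it up roughly like this:
//
//   self.addEventListener("fetch", (event) => {
//     event.respondWith(
//       cacheFirst(event.request, (r) => caches.match(r), fetch)
//     );
//   });
```

Other strategies (network-first, stale-while-revalidate) are variations on the same theme: the service worker sits between the app and the network and decides where responses come from.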

The idea is that we have at our fingertips an impressive variety of browser technologies that, much as with AJAX a decade ago, we aren’t really sure how to utilize effectively. The mobile revolution further muddied the waters as developers and users flocked to native apps for their computing needs. Mobile browsers have stagnated in much the same way desktop web browsers did after IE 6.0 was released. High-profile companies like Facebook and LinkedIn abandoning HTML5 as a business strategy cast further doubt on the web as a platform.

But industry giants have bet against the Web before, and always to their own peril. Could it be that we are on the cusp of another Web renaissance where we once again reject walled gardens and closed platforms in favor of open technologies and information sharing? Perhaps what we need as an industry is for someone to step forward and show us all what’s truly possible in modern web development today like Google did with Gmail and Google Maps.

For a few glimpses into this future, we can look at apps like pokedex.org. It utilizes web workers to handle data operations without blocking UI logic in the main thread. It’s offline-first, guaranteeing quicker initial load times and availability during periods of spotty connectivity. A React-like virtual DOM is utilized to efficiently update the UI on the client and to pre-render the initial state on the server.

The results are rather impressive. As some of you no doubt know, there are a lot of Pokémon out there. Yet this humble little progressive web app chews through them like a native application. It certainly wasn’t a trivial effort, and the developer lists several challenges yet to be surmounted, like spotty API support across browsers. But this is no different than the challenges Google Maps had to overcome to show the world what was possible.

Google, of course, is very invested in this version of the future of the Web. Its Chrome Dev Summit site follows many of the progressive web app guidelines.

Users find the app the way they find any other website — through the web browser’s address bar. It’s not bubble-wrapped inside a native wrapper and served through a proprietary store. From there, they can add it to their home screen, where it functions like any other native mobile application — top-level, full-screen, and offline-capable.

It’s possible that we are on the cusp of a new renaissance in web development, similar to Web 2.0, where we experienced a return to open standards, progressive enhancement, and accessibility. The burning question in my mind, however, is whether this renaissance is happening too late.

Don’t Be a Sneetch

We’re at an interesting crossroads here as web developers. I struggle with it regularly because I honestly can’t say with any certainty what the future will hold. Is there any point to writing these large web applications in JavaScript, HTML, and CSS if we can use languages, frameworks, editors, and tools that have existed for far longer with far greater capabilities and track records? Should we even be making complex applications on the web in the first place? When does a website become sufficiently complicated to be considered a web application? What should we focus on learning so we don’t get left behind? Can we ever truly master our craft if the craft itself is always changing so fast?

Sometimes I feel like I’m seeing it all at once, and it’s too much, my heart fills up like a balloon that’s about to burst… And then I remember to relax, and stop trying to hold on to it, and then it flows through me like rain and I can’t feel anything but gratitude for every single moment of my stupid little life… You have no idea what I’m talking about, I’m sure. But don’t worry… you will someday.

We all have to make our best guess about what the future will hold. My advice is to focus on what makes you productive today and ignore all the noise and hype about what’s just around the corner. Ignore people like me who are foolish enough to lean precariously over the bleeding edge. Don’t be a fashion-driven developer. Don’t be a Sneetch! If you’re productive with jQuery, use jQuery! If Google Web Toolkit is your jam, then jam on! If you want to create a progressive app, then send me a link when you do. If you want your web apps to be completely stateless and rendered server-side, it’s likely you’ll do just fine (just ask Craigslist). Build for today, ignore the hype, and embrace the future when we get there.

One thing I am certain of: No matter what happens in the future, hang on tight, cuz it’s gonna be one hell of a ride.