X-No-Thanks

Last week I spent some time dissecting what, I believe, is the reason behind the announcement that Windows Internet Explorer (née “Microsoft Internet Explorer”) will, as of version 8, include the ability to emulate previous versions of its rendering engine and, in addition, will default to emulating IE 7 when no version is specified (with the exception of HTML 5, which reputedly will trigger a genuine “standards mode” in IE 8).

At the time, I was only concerned with analyzing why Microsoft has chosen this route; I was mostly thinking out loud and trying to determine the causes behind this unexpected shift in IE’s policy toward compatibility and standards compliance, and held off expressing any judgment of its merits, largely because I simply hadn’t had enough time to think about it.

Over the course of the weekend I spent quite a bit of time letting this percolate in the back of my mind, and I think I’m ready to come to a couple of conclusions:

1. The X-UA-Compatible behavior will have a net negative impact on designers and developers.
2. The X-UA-Compatible behavior will not solve the problem Microsoft says it’s worrying about.

To explain why I’ve ended up at these conclusions I’d like to start, as I often do, with a definition.

Our job

Web developers and web designers, despite long-standing differences of focus and eternal arguments about such things as their choice of tools or preferred techniques, ultimately have in common a single, simple job description: use the knowledge and tools at their disposal to produce web-based systems which work, and work well, for the people who will be using those systems. Everything else — standards, browser compatibility, markup languages, programming languages, layout techniques, the whole shebang — is simply an implementation detail serving the ultimate end of producing something which works for a particular group of users.

Jeff has, I think, been trying to articulate this for months now, and frequently his attempts have been overlooked by people who’d rather fight than think. And that’s a shame, because he’s spot on in his belief that particular techniques — CSS versus tables, Flash versus JavaScript, CSS “frameworks” versus entirely hand-rolled style sheets — can only be meaningfully evaluated insofar as they serve a larger goal. Or, as Jeff aptly put it:

I’m done trying to convince anyone else why they should use the same tools I do. I want to steer the direction of this blog towards talking about great design on the web. I’m less interested in the means to an end, and more interested in the end itself.

Of course, there is no Platonic ideal of “a web site that works” or — to use Jeff’s phrase — “great design on the web”, and even if there were, there is no set of tools or techniques which is guaranteed to produce that result. Anyone who reads this blog regularly will recognize that I’m heading back into one of my favorite slogans: every choice made by designers and developers as they work to produce something that serves their users is, ultimately, a trade-off of one sort or another:

- Using CSS instead of tables is a trade-off.
- Using tables instead of CSS is a trade-off (anyone who thinks that developing sites which reliably worked across browsers used to be easy before the rise of the standards movement should contact me privately about a bridge in Brooklyn that I’d like to sell).
- Using Flash instead of JavaScript is a trade-off.
- Using JavaScript instead of Flash is a trade-off.
- Using any or all of the above in combination with each other is a trade-off.

It’s trade-offs all the way down: as the laws of physics eloquently remind us, you can’t get something for nothing. All we can do is weigh the options and their impacts and decide whether a particular trade is worth making for a particular situation. And it’s in that context that designers and developers must look at X-UA-Compatible: it represents a trade-off, so we need to know what we’re getting and what we’re giving up.

The trade

In order to fully understand what “we” — web designers and developers — stand to get and stand to give up as a result of X-UA-Compatible, it’s important to note first that “we” are not a homogeneous group; generally speaking, there are three different groups who are affected differently:

1. Designers and developers who do standards-based work, avoid browser-targeting hacks and, when faced with an IE problem that can’t be worked around any other way, use the reliable technique of conditional comments instead of exploiting potentially-fleeting parsing bugs.
2. Designers and developers who do standards-based work but do use browser-targeting hacks.
3. Designers and developers who do not do standards-based work.
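As a sketch of the conditional-comment technique the first group relies on (the stylesheet filenames here are hypothetical), a page can serve an IE-specific patch without exploiting any parsing bug:

```html
<!-- Every browser loads the main stylesheet; "main.css" is a
     hypothetical name for illustration. -->
<link rel="stylesheet" type="text/css" href="main.css">

<!-- Only the targeted version of IE acts on the contents of the
     conditional comment; every other browser treats the whole block
     as an ordinary HTML comment and ignores it. -->
<!--[if IE 6]>
  <link rel="stylesheet" type="text/css" href="ie6-fixes.css">
<![endif]-->
```

Because the targeting is explicit and documented, it keeps working even when a later version of IE fixes its rendering bugs.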

Next we have to consider two questions for each group:

1. What impact does using X-UA-Compatible have?
2. What impact does not using X-UA-Compatible have?

Let’s start with the first group: standards-based designers and developers who don’t use browser-targeting hacks. Since this group already has a reliable way to work around issues in specific versions of IE without relying on bugs in the browser, making use of X-UA-Compatible offers them no net benefit, and a slight loss in the form of the extra work required to ensure that it’s used properly in each page served.
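For reference, the announced opt-in takes two equivalent forms; this is a sketch based on Microsoft’s proposal as announced, and the exact syntax could change before IE 8 ships:

```html
<!-- As a meta element in the document head: -->
<meta http-equiv="X-UA-Compatible" content="IE=8">

<!-- Or as the equivalent HTTP response header, set server-side:

     X-UA-Compatible: IE=8
-->
```

Either way, it is per-page work: every document served has to carry the declaration, which is the “slight loss” described above.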

However, not using it represents an enormous loss: since the default behavior, as currently specified, is to emulate IE 7, not using X-UA-Compatible negates the advantages of progressive enhancement by causing future versions of IE not to take advantage of enhancements even when they are capable of doing so. At least, that’s how it will apparently work; I’m not really sure how it could be any other way, though, because — for example — a fully IE7-compatible emulation which still enables IE8 features doesn’t really seem possible.

And note that the potential loophole of HTML 5 — which, again, will allegedly trigger a genuine “standards mode” without use of X-UA-Compatible — does not mitigate this: HTML 5 is still a long way from being a reliably-supported standard, and as such is unlikely to offer a solution in time for IE 8. Additionally, existing standards-based sites which cannot or do not want to transition immediately to HTML 5 will not be able to use this loophole.
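The HTML 5 loophole, for what it’s worth, hinges on the new minimal DOCTYPE, which is reputed to be enough to opt in to IE 8’s genuine standards mode:

```html
<!-- The HTML 5 DOCTYPE: short and versionless. Reputedly this alone
     triggers IE 8's true standards mode, with no X-UA-Compatible
     declaration required. -->
<!DOCTYPE html>
```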

So for the first group, X-UA-Compatible represents a net loss either way: they receive no benefit, and the only choice this group has is how much of a loss they want to take.

The second group — standards-based designers and developers who do use browser-targeting hacks — similarly face a net loss: if they do not directly use X-UA-Compatible, they will be able to rely on the default behavior emulating existing browser bugs, which is a gain, but they also suffer the loss of progressive enhancement. If they do directly use X-UA-Compatible, they’re in the same situation. The only way they can get progressive enhancement — or any similar access to new features — is to stop relying on specific bugs, in which case they move into the first group and still have a net loss.

The third group — designers and developers who do not do standards-based work — do not receive any benefit from X-UA-Compatible one way or another, because they were already in quirks mode and — presumably, given the sudden focus on not “breaking the Web” — quirks mode is unlikely to develop any backwards incompatibilities. They do face the same loss in terms of ability to take advantage of new features in a compatible fashion, but this group was not using and is not likely to want to use that ability. So they effectively break even.

So from the perspective of a designer or developer, the best case is breaking even. From the perspective of a designer or developer who does any form of standards-based work, the best case is a net loss.

The only available conclusion from this analysis is that the existence of X-UA-Compatible will be an overall negative for developers and designers.

The unsolved problem

Additionally, I believe that X-UA-Compatible will not and cannot solve the problem Microsoft claims to be facing; allegedly, it gets around the limitations of DOCTYPE switching by giving IE a reliable way to handle sites which, though they use a DOCTYPE that should trigger standards mode, rely on bugs in an existing version of IE and so will no longer render properly if a future IE standards mode fixes those bugs. Emulating historical rendering based on the X-UA-Compatible header and defaulting to emulating the current version of IE when it is not present is, we are told, the solution that works going forward, by allowing existing sites to continue rendering properly and allowing future sites to take advantage of an improved standards mode which will be present in future versions of IE.
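As a reminder of how DOCTYPE switching works today (a sketch; browsers differ on the exact list of mode-triggering DOCTYPEs):

```html
<!-- A complete DOCTYPE, with both the public identifier and the
     system URI, triggers standards mode in DOCTYPE-switching
     browsers: -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
                      "http://www.w3.org/TR/html4/strict.dtd">

<!-- Omitting the DOCTYPE entirely, or using certain older or partial
     forms of it, triggers quirks mode instead. -->
```

The problem Microsoft describes is sites that use a standards-mode-triggering DOCTYPE like the one above while still depending on the bugs of one specific IE release.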

Except for the fatal flaw.

If there were and are people building sites that worked in IE 6 but broke in IE 7 due to reliance on IE 6 bugs, and if there are people now building sites which work in IE 7 but which will break in IE 8 due to reliance on IE 7 bugs, then the only reasonable thing to conclude is that there will be people who will build sites that work in IE 8 but break in IE 9 due to reliance on IE 8 bugs, and who will build sites that work in IE 9 but break in IE 10, and so on.

Attempting to counter this by pointing out that such people will now be able to take advantage of X-UA-Compatible cannot work. IE has supported version targeting, via conditional comments, since the last millennium, and these people apparently have yet to properly take advantage of that feature; if they had, there would be no perceived need for X-UA-Compatible. Thus they will not properly take advantage of X-UA-Compatible either.

To see why this conclusion is inevitable, consider HTML 5, which will, we are told, trigger a genuine standards mode in IE 8 even when the X-UA-Compatible header is not present. All it will take is a few bugs in IE 8’s implementation, and then we’ll be right back in the same situation: if IE 9 fixes the bugs, it will also “break the Web” for anyone who relied on them. At that point will Microsoft start versioning the version targeting? Perhaps they’ll introduce an X-IE9-UA-Compatible defaulting to emulating IE 8, followed by an X-IE10-UA-Compatible defaulting to emulating IE 9, and so on into the future?

It is sometimes quipped that madness consists of repeating the same mistake over and over while somehow expecting to get a different result. In that sense of the word, X-UA-Compatible may very well represent madness. At any rate, it does not represent a solution to the problem Microsoft claims to be worrying about.

Where to go from here

What we’re left with, then, is the conclusion that X-UA-Compatible, if actually implemented, wouldn’t be a win for anybody: not for designers or developers and not for Microsoft. I speculated the other day that corporate customers’ response to Windows Vista, and worries about losing IE’s last remaining stronghold if those customers migrate their intranet applications away from the usual IE-only stance, seemed a likely cause for the IE team’s sudden bout of backwards-compatibility fever. Having thought through the implications of X-UA-Compatible a bit more thoroughly, I no longer think it offers those customers anything tangible — most of them are probably running quirks-mode abominations anyway — but I still think that’s a reasonable explanation for its announcement.

That it throws a monkey wrench into standards-based development at a time when IE desperately needs to play catch-up on that front is tempting fodder for a conspiracy theory, but ultimately I think the explanation is pretty mundane: Microsoft knows that it’s losing browser share and that its corporate customers are unhappy, so it’s providing a soothing bit of marketing fluff to calm the waters and buy time. Molly Holzschlag’s account of the process which led up to this lends credence to that explanation; the level of corporate idiocy apparently involved in Microsoft’s decision-making process seems to rule out the conspiracy angle because, if nothing else, an effective conspiracy requires the left hand to know what the right hand is doing.

About the only other explanation I can come up with is an attempt to harness the old fire and motion strategy again by forcing the rest of the market to react to what Microsoft is doing, but if that’s the case it’s backfiring. The response on the WebKit blog shows just how much things have changed; it’s almost snarky in its statement that “we haven’t really experienced the same problem”, and comes to a conclusion that would have been unthinkable for an alternative browser a few years ago:

[W]e don’t see a great need to implement version targeting in Safari. We think maintaining multiple versions of the engine would have many downsides for us and little upside. The IE team is, of course, under different constraints and free to make their own choices.

The days when the world trembled at sudden strategic announcements from Redmond have gone; the response today is “OK, you just go ahead and do that.” Which gets me back to where I was last week:

Microsoft is no longer relevant to the future of web standards: their only two choices right now are to play a long and expensive game of catch-up while under fire, or consign IE to the dustbin of legacy software.