Ten years ago around this very time—April through June 2008—our intrepid Microsoft guru Peter Bright evidently had an identity crisis. Could this lifelong PC user really have been pushed to the brink? Was he considering a switch to... Mac OS?!? While our staff hopefully enjoys a less stressful Memorial Day this year, throughout the weekend we're resurfacing this three-part series that doubles as an existential operating system dilemma circa 2008. Part two ran on May 4, 2008, and it appears unedited below.

Last time, I described how Apple turned its failure to develop a modern OS into a great success. The purchase of NeXT gave Apple a buzzword-compliant OS with a healthy ecosystem of high-quality third-party applications. Meanwhile, Microsoft was lumbering along with Windows XP. Although technically sound, it was shot through with the decisions made more than a decade earlier for 16-bit Windows.

In 2001, when XP was released, this was not such a big deal. The first two or three versions of Mac OS X were troublesome, to say the least. Performance was weak, there were stability issues, and version 10.0 arguably wasn't even feature complete. It wasn't until early 2002 that Apple even made Mac OS X the default OS on new Macs; for the first few months of its life, XP was up against "Classic" Mac OS 9.

But OS X didn't stand still. Apple released a series of updates in quick succession, strengthening the platform with new features like Core Audio, Core Image, Core Data, and Quartz Extreme, and providing high-quality applications that exploited these abilities. All this time, XP itself stood still. The core Windows platform didn't change between 2001 and late 2006.

Although XP itself was essentially unchanged, Microsoft did try to produce a modern, appealing platform for future development. That platform was, of course, .NET, and observant readers will have noticed that I didn't mention it in part one. This was no accident, as the whole .NET story deserved a more thorough examination.

Microsoft attempts modernity

In 2002, Microsoft released the .NET Framework. The .NET Framework was brand spanking new, designed and implemented from the ground up. It could have been clean, consistent, and orthogonal, with a clear design and powerful concepts. It could have been a way out of the quagmire that is Win32. It could have provided salvation—an environment free of 16-bit legacy decisions, with powerful APIs on a par with what Apple had developed.

It was certainly promoted as such. .NET was pushed as the future, the way all Windows development would occur in the future. The plans became quite aggressive; in the OS that was to succeed Windows XP, new functionality would be accessed not through Win32 but through .NET, meaning that any developer wanting to exploit the latest and greatest OS features would have to venture into this brave new world.

So .NET could have been a step into the 21st century. It could have been, but it wasn't. Technically, .NET was fine. The virtual machine infrastructure was pretty sound, the performance was reasonable, and C# was an adequate (if not exactly ground-breaking) language. But the library—the .NET "API" used for such diverse tasks as writing files, reading data from databases, sending information over a network, parsing XML, or creating a GUI—is another story altogether.

The library is extremely bad. It is simplistic and inflexible and in many ways quite limited. See, .NET has a big problem: its target audience. .NET was meant to be a unified platform that all developers would use—after all, if new OS features required .NET, a broad cross-section of developers would use it. The problem is that not all developers are created equal. By looking at the different kinds of developers out there, we can understand why .NET is the way it is. What follows is not an exhaustive taxonomy of all the weird and wonderful breeds of programmer, but a rough sketch of some of the key species.

A developer taxonomy

At one level, you have people who are basically business analysts; they're using Access or Excel or VB6 to write data-analyzing/number-crunching applications. These things are hugely important in the business world, totally unexciting to anyone else, and the people writing them aren't really "programmers." I mean, they are, in the sense that they're writing programs, but they're not especially interested in programming or anything like that. They don't really care about the quality of the libraries and tools they're using; they just want something simple enough that they can pick it up without too much difficulty. They'll never write the best code or the best programs in the world; their work won't be elegant or well-structured or pretty to look at. But it will work. Historically, as I said, these are the kind of people Access is made for. Access is a great tool, quite unparalleled. Sure, it's a lousy database engine with a hideous programming language, but the power it gives these people is immense. So Access and VB6 and Excel macros are where it's at for these guys.

At the next level, you have the journeyman developers. Now these people aren't "business" people—they are proper programmers. But it's just a job, and they'll tend to stick with what they know rather than try to do something better. They might be a bit more discerning about their tools than the business types, but they're not going to go out of their way to pick up new skills and learn new things. They might use VB6 or Java or C# or whatever; it doesn't really matter to them, as they'll use whatever offers the best employment opportunities at any given moment. Their code will probably look more or less the same no matter what. They're not going to learn the idioms of whatever specific language they're using, because there's no need; it's just not for them.

A key feature of these developers is that, most of the time, they're writing "enterprise" software. This isn't software that will sit on a shelf in a store for someone to buy; it's custom applications to assist with some business process or other. Truth be told, it probably won't have to look very nice or work very well; it just has to get the job done. With "enterprise" software, you can often get away with a clunky program, because the people who are using it have all been trained on what to do. If doing X makes the application crash, that's okay—they can just be taught not to do X any more.

In spite of the often mediocre quality of the software these people write, they're a group that's immensely important to Microsoft. These programs are a key part of the platform lock-in that Microsoft craves. If a company has some business-critical custom application written in Visual Basic 6, that company isn't going to roll out Linux to its desktops; it's trapped on Windows.

At the final level, you have the conscientious developers. These are people who care about what they're doing. They might be writing business apps somewhere (although they probably hate it, unless they are on a team of like-minded individuals) but, probably more likely, they're writing programs in their own time. They want to learn about what's cool and new; they want to do the right thing on their platforms; they want to learn new techniques and better solutions to existing problems. They might be using unusual development platforms, or they might be using C++, but they'll be writing good code that's appropriate to their tools. They'll heed UI guidelines (and only break them when appropriate); they'll use new features that the platform has to offer; they'll push things to the limit. In a good way, of course.

In the pre-.NET world, this wasn't really a big problem. The first group used Excel macros and Access; the second group used Visual Basic 6, and the last group could use C++ or whatever beret-wearing funky scripting language was à la mode at the time. This all worked out fine, because one of the few nice things about Win32 is that it was designed for C. C is in many ways a very simple language, and it's also a ubiquitous language. As a consequence of this, pretty much every other programming language created in the last couple of decades can, one way or another, call C APIs.

".NET could have been a step into the 21st century. It could have been, but it wasn't."

.NET isn't like that. Although .NET can call C APIs (just like everything else can), the real objective is for all programming to reside in the .NET world. .NET is meant to be the entire platform, with all the different languages that people use living inside the .NET environment. This is why .NET has APIs for tasks like reading and writing files; in the .NET world you're not meant to use Win32 to do these things, you're meant to use .NET's facilities for doing them. It's still possible to use different languages with .NET (in fact, it's easier than it was in the pre-.NET days); it's just that now the different languages all use the common set of .NET APIs for drawing windows on screen, or saving files, or querying databases, and so on.
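To make the contrast concrete, here's a minimal C# sketch of my own (not code from the original article) showing both routes: dropping down to a plain C API via P/Invoke, and doing an everyday task (writing a file) through the managed library, the way .NET intends.

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

class Interop
{
    // .NET can still reach down to a C API via P/Invoke...
    [DllImport("kernel32.dll")]
    static extern uint GetTickCount(); // milliseconds since boot, via Win32

    static void Main()
    {
        // ...but the intended route for everyday tasks is the managed
        // library: no Win32 CreateFile/WriteFile handles in sight.
        File.WriteAllText("hello.txt", "written via System.IO, not Win32");
        Console.WriteLine(GetTickCount());
    }
}
```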

Because everything now has to live "within" the .NET world, .NET has to be all things to all people. Well actually, that's not true. It's trying to be good enough for the first and second kind of programmer. The third type—well, just ignore them. They're too demanding anyway. They're the ones who care about their tools and get upset when an API is badly designed. They're the ones who notice the inconsistencies and omissions and gripe about them.

The .NET library is simple to the point of being totally dumbed down; it's probably okay for the first and second groups, not least because they don't know any better, but for the rest it's an exercise in frustration. This frustration is exacerbated when it's compared to .NET's big competitor, Java. Java is no panacea; it too is aiming roughly at the middle kind of developer, which is understandable, as they're the most numerous. But Java's much more high-minded. It's much stronger on concepts, making it easier to learn. Sun doesn't get it right the whole time, but the people behind Java have clearly made something of an effort.

One practical manifestation of this is that .NET reflects a lot of the bad decisions made in Win32. For example, .NET provides an API named Windows Forms for writing GUIs. Windows Forms is based heavily on the Win32 GUI APIs; the same GUI APIs that owe their design to Win16. To properly write Windows Forms programs, you need to know how Win32 works, because there are concepts from Win32 that make their presence felt in Windows Forms. In Win32, every window is related to a specific thread. There can be multiple windows that belong to a thread, but every window is owned by exactly one thread. Almost every action that updates a window in some way—moving it on-screen, changing some text, animating some graphics, anything like that—has to be performed within the thread that owns the window.
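As a sketch of that rule (again mine, not the article's): in Windows Forms, a worker thread can't touch a control directly; it has to hand the update to the thread that owns the window, which Windows Forms exposes through Control.Invoke.

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

// Minimal sketch: a label updated from a background thread. The worker
// must marshal the change to the UI thread that owns the window.
class AffinityDemo : Form
{
    readonly Label status = new Label { Text = "working..." };

    public AffinityDemo()
    {
        Controls.Add(status);
        new Thread(() =>
        {
            Thread.Sleep(1000); // simulate work off the UI thread
            // Wrong: status.Text = "done";  (cross-thread update)
            // Right: run the update on the owning thread.
            status.Invoke((MethodInvoker)(() => status.Text = "done"));
        }) { IsBackground = true }.Start();
    }

    [STAThread]
    static void Main() => Application.Run(new AffinityDemo());
}
```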

This restriction in itself is not uncommon. There are very few truly multithreaded GUI APIs, because multithreading tends to make programs more complicated for no real benefit. The problem lies in how .NET makes developers handle the restriction. There's a way to test whether an update to a window needs to be sent to the thread that actually owns the window, along with a mechanism for sending the update to that thread. Except this test doesn't always work. In some situations, it can tell you that you're already on the correct thread even when you're not. If the program then carries on and tries to perform the update, it may succeed, or it may hang or crash the application. The reason for this unhelpful behavior is how heavily Windows Forms depends on Win32.
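Concretely, the test in question is Windows Forms' InvokeRequired property and the marshaling mechanism is Invoke. Here's a sketch of the standard idiom and the hole in it; SetStatus is a hypothetical helper of mine, written assuming the behavior described above.

```csharp
using System.Windows.Forms;

static class UiHelpers
{
    // Hypothetical helper illustrating the standard test-and-marshal idiom.
    public static void SetStatus(Label status, string text)
    {
        if (status.InvokeRequired)
        {
            // Wrong thread: marshal the update to the owning thread.
            status.Invoke((MethodInvoker)(() => status.Text = text));
        }
        else
        {
            // InvokeRequired == false is supposed to mean "this thread owns
            // the window." But if the control's underlying Win32 handle
            // hasn't been created yet, InvokeRequired also returns false,
            // even on the wrong thread, because Win32-level ownership only
            // exists once the handle does. Carrying on here may work, or it
            // may hang or crash the application.
            status.Text = text;
        }
    }
}
```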

These little issues are abundant. The .NET library does work. It more or less has all the main pieces you need, but it's full of areas where you have to deal, directly or indirectly, with the obsolescent mediocrity of Win32. On their own, none of these issues would be a show-stopper, but they all add up. It's death by a thousand cuts. There are so many places where the Win32 underpinnings "shine through" and taint what should have been a brand-new platform.