Nikolai Bezroukov. Portraits of Open Source Pioneers

For readers with high sensitivity to grammar errors access to this page is not recommended :-)

Chapter 4. Part 2: Selling Bazaar to Cathedral and Linux Gold Rush (1997-2000)

(1997)

Welcome to the hotel California

Such a lovely place...

Eagles

In 1997 Linus moved from Finland to a California startup, Transmeta, in order to experience the "extreme other side" of computer programming -- that of a high-tech start-up -- and, as a side effect, to become rich (his explicitly stated goal, and he definitely managed to achieve it in less than three years). Later, in an interview with the NYT, he tried to avoid answering the question of how the Silicon Valley "make money fast" craze affected him:

Since you moved to Silicon Valley from Finland in 1997, how has the region's aggressive approach to money-making affected you? Oh, how I hate that question. I've actually found the image of Silicon Valley as a hotbed of money-grubbing tech people to be pretty false, but maybe that's because the people I hang out with are all really engineers. They came here because this is where the action is. You go out for dinner, and all the tables are filled with engineers talking about things that won't be available to ''normal people'' for a few years. If ever.

In reality Linus decided to strike while the iron was hot, getting into the US "Internet IPO" boom before the bubble burst. Aided by the mass "Internet boom" hysteria, Linus and many others -- from book authors to snake oil salesmen -- were happy to become rich from the greatest stock bubble of the XX century. Bob Metcalfe was the first to understand this transformation of the former revolutionary into a "make-money-fast", personal-enrichment-comes-first type (pigs, in Orwell's terminology ;-). In his Feb 2000 column If open-source software is so much cooler, why isn't Transmeta getting it:

.. .Am I the only one to see that Torvalds and other open-source software revolutionaries are acting out the finale of George Orwell's Animal Farm? Orwell's farmhouse is full of open-source pigs, which are now almost indistinguishable from the proprietary humans they recently overthrew. It's true that I have been unkind to the "open sores" movement. But to be clear, anyone is welcome to beat Microsoft with better software, even a utopian community of volunteer programmers. May the best software win. And don't get me wrong, even if he disappoints Richard Stallman by not always referring to GNU/Linux, Torvalds is a genuine hero of the open-source revolution. But with Torvalds saying some animals are more equal than others, why is the sanctimonious open-source press still cheering him on? Are the likes of Slashdot.org, just gobbled by VA Linux, also porking out in Orwell's farmhouse? Torvalds wrote and now controls Linux, the open-source operating system, due this summer in Version 2.4. By day, he is a programmer at Transmeta. Transmeta just announced Crusoe, its low-power microprocessors for mobile computers. The architecture of Crusoe chips is based on VLIW (very long instruction words). It has "code morphing" to convert and cache software in speedy VLIW codes. And it comes with Mobile Linux, with Linux extensions for power management. According to Transmeta, Crusoe is two-thirds software and one-third hardware. So what I want to know is, if open-source software is so cool, and if Torvalds "gets it," why isn't Crusoe open source? For a start, why aren't the Crusoe chip's mask sources published for modification and manufacture by anyone? And yes, Mobile Linux is open source, but not the "code morphing" software Torvalds helped write. Transmeta has taken the phrase Code Morphing as its proprietary trademark. And what the code does, according to Transmeta, has been ... patented. 
Worse, Crusoe is touted for running Intel X86 software, and in particular, Microsoft Windows. Doesn't the open-source community say Windows is beneath contempt? Torvalds showed up at LinuxWorld Expo touting open source, of course, but then went on to revise two of its bedrock principles. Torvalds talked at LinuxWorld about fragmentation -- the emergence of too many Linux versions. Being old enough to have watched Unix fragment during the 1980s, I worry. But instead of holding to the party line that Linux will not fragment, Torvalds now says there is bad fragmentation and good. One can assume, because he's in charge of both, Transmeta's Mobile Linux will fragment Linux 2.4, but in a good way. Then Torvalds talked about commercial companies, which aren't so bad after all: Take for example Transmeta. His audience, packed with employees, friends, and family of newly public Linux companies, did not boo him back out into the barnyard. Where is the outrage? So just to keep Torvalds honest, I'm thinking that Crusoe chips, which are mostly software, should be open source and basically free. Chips have to be manufactured -- with white coats, ovens, and stuff -- so maybe it should be OK to sell open-source Crusoe for the cost of its silicon, trace metals, media, and manuals...

Since early 1996, shameless opportunists had been springing up across the country and across the Internet, ready to take advantage of America's newfound spirit of "webalization". While making money by fooling others is reprehensible, we can assume that many of these snake-oil salesmen, like Bob Young and Larry Augustin, were just gifted Ponzi-scheme manipulators who had been waiting for the opportunity to strike gold from the fools, and for whom the Internet boom was the last and only opportunity to become rich. They were not the real sharks. The real sharks here were the investment banks and venture funds. And this was the first period when regulations adopted during the New Deal were substantially weakened, and the agencies formally responsible for controlling the behavior of the financial oligarchy were emasculated. The Fed was in the hands of the shameless opportunist Greenspan, who, under the pretext of free markets and deregulation, was selling the country to the financial oligarchy. For the latter, the dot-com boom was a perfect opportunity to redistribute the country's wealth in their own favor. "Greed is good" was the slogan of the day.

This wealth redistribution mechanism worked via numerous venture and hedge funds that were created to attract money and extract rent for the casino owners, which were the investment banks. With skillful propaganda, money just flowed like water into all sorts of "Internet funds" as well as into small and mostly unprofitable Internet start-ups. In 2011, in their NYT article, Evelyn Rusli and Verne Kopytoff gave the following assessment of this giant Ponzi scheme (Is It a New Tech Bubble? Let's See if It Pops). Note that Goldman Sachs, JPMorgan Chase and Morgan Stanley were active players in creating and milking this Ponzi:

In 1998, Goldman Sachs Capital Partners, the bank's private equity arm, began a new, $2.8 billion fund largely geared toward Internet stocks. Before that fund, the group had made fewer than three dozen investments in the technology and communications sectors from 1992 to mid-1998, according to Goldman Sachs documents about the fund. But between 1999 and 2000, the new fund made 56 technology-related investments, of about $27 million on average. In aggregate, the fund made $1.7 billion in technology investments -- and lost about 40 percent of that after the bubble burst. (The group, which manages the money of pensions, sovereign wealth funds and other prominent clients, declined the opportunity to invest in Facebook early this year.) Philip A. Cooper, who in 1999 was head of a separate Goldman Sachs group that managed fund of funds and other investments, recalled that investors were clamoring, "We want more tech, we want more." Bowing to pressure, he created a $900 million technology-centric fund in 1999, and within eight weeks he had nearly $2 billion in orders. Despite the frenzy, he kept the cap at $900 million. "There was a lot of demand, but we couldn't see any way we could prudently put that much capital to work," said Mr. Cooper, who has since left Goldman. Other Wall Street firms, including JPMorgan Chase and Morgan Stanley, also made a number of small to midsize investments during the period. In 1999, for instance, Morgan Stanley joined Goldman Sachs and others in a $280 million investment in CarsDirect.com, which scrapped its initial plans to go public when the market deteriorated. "We thought we were going to double our money in just a couple of weeks," said Howard Lindzon, a hedge fund manager of Lindzon Capital Partners and former CarsDirect.com investor. "No one did any due diligence." Mr. Lindzon lost more than $200,000 on his investment. 
Also in 1999, Chase Capital Partners (which would later become part of JPMorgan Chase) invested in Kozmo.com -- an online delivery service that raised hundreds of millions in venture funding. JPMorgan Chase, which just recently raised $1.2 billion for a new technology fund, at the time called Kozmo.com "an essential resource to consumers." At its height, the company's sprawling network of orange bike messengers employed more than a thousand people. Less than two years later, it ceased operations. An online grocer, Webvan, was one of the most highly anticipated I.P.O.'s of the dot-com era. The business had raised nearly $1 billion in start-up capital from institutions like Softbank of Japan, Sequoia Capital and Goldman Sachs. Goldman, its lead underwriter, invested about $100 million. On its first day, investors cheered as Webvan's market value soared, rising 65 percent to about $8 billion at the close. Less than two years later, Webvan was bankrupt. About the same time, Internet-centric mutual funds burst onto the scene. From just a handful in early 1999, there were more than 40 by the following year. One fund, the Merrill Lynch Internet Strategies fund, made its debut in late March 2000 -- near the market's peak -- with $1.1 billion in assets. About one year later, the fund, with returns down about 70 percent, was closed and folded into another fund. "We all piled into things that were considered hot and sexy," said Paul Meeks, who was the fund's portfolio manager. Mr. Meeks started six tech funds for Merrill Lynch from 1998 to 2000.

This was the period of explosive commercialization of Linux startups, and Torvalds can probably be considered the towering figure, the banner, of all the Linux "get money fast" IPO schemes. As we mentioned before, Torvalds never assigned his rights to the FSF, a practice that Stallman usually enforced rigorously. Therefore in some sense he was from the beginning a dissident in the camp of fanatic pro-GPL zealots, the leader of his own cult. He definitely did not like being a follower of Stallman or anybody else and wanted to play his own game in order to become rich and famous.

And he probably was right about the "extreme other side" that he wanted to experience, though in a quite different sense of the phrase. One thing that he experienced very soon was that he was no longer a free software developer. Actually he became more of a pawn in the hands of executives of major investment banks and venture firms, as well as suits from Intel, IBM, and Oracle. And those people were able to explain to Linus what was good and what was bad for his beloved OS with pretty convincing arguments like stock options (BTW, Intel and IBM were major early investors in Red Hat and VA Linux), speaking arrangements and other valuable perks.

And there were, of course, pretty funny things waiting for him as Linux became a kind of "theater of the absurd". For example, when Oracle's Ellison spent an hour in front of 1,000 programmers in San Francisco talking about the merits of free software. That's right: a man who became as rich as Croesus by selling his database software for thousands of dollars per CPU told this standing-room-only audience that their practice of sharing code for Linux (and charging nothing for it) was a good thing.

Moreover, Linus Torvalds' beloved Linux (and he himself) would soon be associated with the worst charlatans of the Internet bubble. The sad fact is that Linus did a lot of PR work helping to ensure that the record for an IPO's opening-day gain belongs to VA Linux. The latter shot up 700%, to $239.25 a share, when it hit the market on Dec. 9, 1999. Three years later, renamed VA Software, it traded for about $1. Eliot Spitzer, the New York attorney general, later exposed the way Wall Street analysts promoted lousy "Linux" companies to win additional investment banking business. And Linus Torvalds' participation in this game of deception is an undeniable fact: he willingly adopted the role of cheerleader.

First of all, the "corporate sponsors" made it clear that they needed Linux on servers (IBM wanted to fight Sun and Microsoft, but probably more Sun than Microsoft; the same was true for Intel). Therefore, despite Linus' previous convictions, Linux on the desktop became task No. 2, and from then on Linux would be developed mainly as a server OS. Talk about Linux on the desktop would continue for the next several years, and interesting applications for Linux would emerge, but there is not much money on the desktop, and it is money that (in a subtle way, via Linux startups financed by Intel, IBM and other large companies) determined the actual direction of Linux development. To adapt the old cliché of the Clinton campaign: "it's the money, stupid". Volunteers were welcome, but they did not matter.

Second, they made him a poster boy both for the commercial promotion of Linux and for the anti-Microsoft campaign. Much later, in an interview with BBC World, Linus said that he was pretty happy to play this role:

"In a way it is fun. I'm pleased to be a poster boy. It gives me some self-importance," he said. He would not, however, want to become personally involved in the dispute with Microsoft. "I've tried to stay out of the Microsoft debate. If you start doing things because you hate others and want to screw them over the end result is bad," he said.

If the period when Linus controlled the direction of Linux development ever existed, it definitely came to an end. Not only did he become "a developer for hire" who barely understood the code of some subsystems he incorporated into the kernel (see more about it in the discussion of v.2.4 problems), he became "a hired gun" in an ultra-secret and ultra-closed organization. Yes, he could still make important decisions, and yes, he could still optimize the kernel the way he liked (sometimes cutting chunks of code that he considered "raw" but that were necessary for the future development of a particular subsystem). But what should be included in the kernel, and when, was now a different story. In a sense he became a prisoner of his own project.

The selection of features from which he could choose was no longer controlled by him. And the first such feature, one that he had previously objected to but that now became almost priority No. 1, was SMP. Another thing that he probably could not predict at the time was that he and his beloved OS would become the core of several crazy Linux IPOs that enriched a few members of the Linux community (including Linus Torvalds himself) but were a huge rip-off for many naive open source supporters. As we will see, by 2001 he looked more and more like a guest at his former girlfriend's wedding.

As for the Transmeta experience, he can probably attest that the company managed to transform a former "free developer" into some mixture of developer, marketer and ceremonial figure suitable for keynote addresses and product launches. And Transmeta was probably interested in Linus as a PR figure much more than in his technical abilities as a kernel developer.

Formally, Linus became just a staff engineer at Transmeta, leading the development of Linux for Transmeta's (at the time) supersecret Intel-compatible chip. But a far more important role he was assigned was that of chief PR person for Transmeta, and here his journalistic upbringing was a huge asset for a startup that had nothing to do with Linux per se and from a technical standpoint was more in the camp of VM developers than monolithic kernel developers. Transmeta probably gave him an offer that he could not refuse, as at this time he seems to have rejected several other excellent offers, including one from Steve Jobs:

A lot of folks in Silicon Valley are so drunk on their own bath water that they simply don't get Linus. Take Steve Jobs. After Linus moved to the States in 1997, the acting Apple Computer CEO got in touch with him. Jobs wanted to persuade Linus to get involved in making the MacOS an open source code project. "He tried to get the Linux movement going more into the Apple area. I think he was surprised that his arguments, which were the Apple market share arguments--which would have made an impression on people who did this for commercial reasons--had absolutely no impact on me,'' Linus says.

BTW, in his autobiography he mentions this episode in a way that casts some light on his supersized ego: he really would do whatever it takes to avoid being in the shadow of Steve Jobs or any other prominent developer:

According to Torvalds, Jobs assumed that he would be interested in joining Apple's mission to capture more of the personal computer market from Microsoft, rather than continue concentrating on Linux. "I don't think Jobs realized that Linux would potentially have more users than Apple, although it's a very different user base."

At Transmeta he was the one and only Linus Torvalds. The start-up was headed by Dave Ditzel, former chief scientist of the chip development project at Sun that produced the SPARC processors, one of the most successful RISC CPUs. But Ditzel was known only in the very narrow circles of CPU developers. Microsoft cofounder Paul Allen was the major investor in Transmeta -- so here Linux symbolically returned to its MS-DOS roots and took an important step toward becoming the Microsoft of Unix :-)

Linus' starting salary at Transmeta, as well as the number of shares he would get in the eventual IPO, were closely guarded secrets. Linus claims that he negotiated enough time to continue supervising the development of the Linux kernel, but the reality was that he worked largely as a Transmeta PR person, drawing attention to the company. Just think about the number of interviews and speeches he gave during 1998 and 1999, before the launch of the first Transmeta chips. PR work requires a lot of time. Moreover, in any case, the size of the project had already outgrown his capacity to manage it.

In 1998, when I wrote the first draft of this chapter, Transmeta's move to hire Torvalds looked like a very clever (I would even call it innovative) marketing ploy that could be called "cloak-and-dagger marketing" -- officially, Torvalds worked on some "top secret" project there. Transmeta's windows were blacked out, and its Web site contained one sentence ("This Web page is not here yet"). All they needed was one celebrity to provoke heated speculation about just what Transmeta was doing and what this particular celebrity was doing there. In 2003, as the SCO lawsuit heated up and started threatening Transmeta, he was promptly shipped off to the OSDL (as Schiller aptly formulated this idea in Fiesco: "The Moor has done his work, the Moor can go"):

The non-profit Open Source Development Lab (OSDL) and chip-maker Transmeta Corp. jointly announced today that Linus Torvalds, the creator of Linux, will join OSDL as the first OSDL Fellow. Torvalds will join OSDL on leave from Transmeta Corporation, where he is currently a Transmeta Fellow.

Most of the talk before the Transmeta IPO was focused on the next-generation secret chip Transmeta was developing. The main ingredient of this hugely successful PR trick was of course Torvalds, who played the celebrity role, fueled huge interest in Transmeta in the "Linux media", and even managed to get substantial attention from the mainstream media. This PR trick would probably not have worked for Transmeta without him; that's why in 1998 I, like many other observers, (incorrectly) assumed that there was some gentleman's agreement that Torvalds would work for the company purely as a PR person, promoting Linux, and would otherwise be free to do whatever he wished. This proved to be partially wrong -- along with his PR duties, Linus actually did the Linux port to the Transmeta chip.

In any case, for Linus this was not only the official end of the volunteer period but also definitely the "end of fun". Welcome to the Hotel California. Now it was all about money.

While he got an unusually high salary at Transmeta, strategically this move was very risky from several points of view. First of all, leaving university grounds cut off the implicit but extremely important support from friends and academic staff that he had enjoyed at the University of Helsinki, making his job as the "chief configuration management specialist" of the kernel code much more difficult and forcing him into more bad, "intuitive" decisions that hurt Linux development. It also made him more conservative: without a university he was essentially on his own, and remember that in terms of academic training he never managed to accomplish much. Just an MS degree. No other research projects, no articles, no participation in an important prior project, as Multics was for the Unix creators -- nothing that might help him in the future to navigate the development of the kernel. Note that Ken Thompson and other core members of the Unix team had an excellent school in the Multics project before launching Unix. So deciding which technical path to choose became more difficult, and that dramatically increased his conservatism, the concentration on polishing the existing codebase and on including only features that he could not reject. That also made him very vulnerable to corporate pressure.

Generally speaking, leaving university grounds is a very dangerous step for any freeware/open source project leader, and I attribute much of Stallman's success to the fact that he managed to stay at MIT, even when MIT got tired of him and wanted him out ;-).

Gradually, the question of how much synergy remained between Linux kernel development and Linus' leadership became an open one. In this sense the SCO lawsuit really helped Linux, as it gave Linus a good occasion to divorce Transmeta and join an independent body for the development of the Linux kernel, one that could survive even if Linus himself were run over by a truck. Probably the fact that Linus was quickly shipped off to OSDL to protect Transmeta from excessive liability was the most positive development that came out of this lawsuit.

I could only say that I have the benefit of having been exposed to the Raymondism and other silly ideas ten years ago, and having the benefit of thinking them through before thinking got clouded by pundits and free IPO money. For most people who have discarded orthodox Raymondism, the idea seems so childish and shallow that there isn't much perceived value in convincing the people who are sold on it. allenjs@h... Yahoo Groups

In late 1997 the Linux gold rush started, and in early 1998 Linus found himself on the front pages of magazines such as Forbes. In one year he probably gave more interviews and speeches than in all his previous life. It's difficult to say how he reacted to this stream of media hype and exaggeration, but it definitely increased the troubling "cult of personality" problems in the movement that I mentioned before. It's not an easy job to be a media darling. Despite his usual "political correctness", the burden of fame was probably too much for him, and Linus managed to make some statements revealing his personality (see Linus 1998 for the more complete context of the statements; full interviews are also available online).

One of the main negative factors was that the usually cautious and realistic Linus was gradually affected by the "Linux über alles" movement headed by Eric Raymond. I discussed this movement (Raymondism) in more detail in my Bad Linux Advocacy FAQ:

...first he proclaimed that he had no competitors in the Unix market anymore (assuming that Linux worked better than Solaris 7 on 64-CPU systems). BTW, in late 1998 Linus abruptly changed his technical priorities and, despite the proclaimed desktop orientation, started working hard on adding SMP support to the Linux kernel. In other words, Linus adopted a "One Microsoft Way" technical policy -- much like Microsoft with NT, he was trying to make Linux a universal OS that dominates both desktop and server. It's really funny that, despite the fact that both Solaris and Free/OpenBSD were still superior in certain aspects of kernel design, he claimed just the opposite: "I'm no longer looking at the Unix market when I'm looking for competitors," he said, adding that when looking for new features to add to the free OS, he was looking more to Microsoft than to high-end Unixes like Solaris or Digital Unix. "I've been much more focused on the Windows NT and 98 target group as a market"

...in another interview he predicted the disappearance of Apple from the face of the Earth within a couple of years (while also misunderstanding the power of the Microsoft legal machine -- the danger of copyright infringement lawsuits -- which, whatever you think about it, is realistically a very powerful threat to the free software movement, as the SCO lawsuit later showed). In fact, Apple essentially killed Linux on the desktop with the advent of OS X (see below):

I actually think that within a few years, Apple will cease to exist simply because, my personal opinion is, it's too hard to compete against Microsoft in the commercial marketplace. And yet I feel that Linux can actually succeed because we aren't really competing against Microsoft. Linux doesn't have the same commercial pressures that Apple does have.

...in yet another interview he gave his final and categorical judgment on Java ("I think everybody hates Java as a desktop thing"), ignoring the fact that he never programmed in the language -- and after programming a kernel in C it's very difficult to switch to anything else anyway ;-) :

SW: What are your thoughts on Java? LT: I think everybody hates Java as a desktop thing. I see Java mentioned a lot lately, but all of the mentions within the last year have been of Java as a server language, not as a desktop language. If you go back a year and a half, everybody was talking about Java on the desktop. They aren't anymore. It's dead. And once you're dead on the desktop, my personal opinion is you're dead. If servers are everything you have, just forget it. Why do you think Sun, HP -- everybody -- is nervous about Microsoft? It's not because they make great servers. It's because they control the desktop. Once you control the desktop, you control the servers. It's no longer something that will revolutionize the industry. It could have revolutionized the industry if it was on the desktop, but I don't see that happening anymore. I hope I'm wrong. Really. I just don't think I am. SW: How did Sun blow it? LT: Too much noise, too much talk, too much discussion, not enough "show me."

...and in yet another he admits that he really likes "Unix as a philosophy" (not the fact that he just implemented a clone of the Unix kernel, which already had standards, books and even complete source code available) and, referring to Windows, ironically gave a very good characterization of Linux development itself: "nobody tried to design Windows -- it just grew in random directions without any kind of thought behind it."

boot: Linux is based on UNIX, right? Torvalds: Well it's based on UNIX in the sense that I was used to UNIX and I really liked it. UNIX has a philosophy, it has 25 years of history behind it, and most importantly, it has a clean core. It strives for something—some kind of beauty. And that's really what struck me as a programmer. Operating systems that normal home users are used to, such as DOS and Windows, didn't have any way of life. Nobody tried to design Windows—it just grew in random directions without any kind of thought behind it.

...in an early 1999 interview Linus for the first time applied the word "invented" to Linux -- a word completely inapplicable to a clone OS and very close to the Microsoft marketing style (as everybody knows, Microsoft "invented" Windows via a blatant rip-off, followed by intelligent enhancement and independent development, of the Mac GUI ;-). As always, he failed to mention the GNU project (moreover, this time he even forgot to mention that the adoption of the GPL license was the smartest thing he ever did -- probably at this time he really had some reservations; see the KDE jihad):

PC Week: Give us the short history of Linux's development. Torvalds: Basically, I invented it eight years ago, almost exactly eight years ago. It started small, not even an operating system. It was just a personal project. I just was doing something fun with my new machine. It kind of evolved through luck and happenstance into an OS, simply because there was very much a void where there wasn't much choice for someone like me. I couldn't afford some of the commercial OSes and I didn't want to run DOS or Windows -- I don't even know, did Windows really exist then? PC Week: You could have copyrighted Linux and made a fortune. Why did you make it an open source code operating system, and will that model work in the future as Linux acceptance grows? Torvalds: It started out as a personal belief that, yes, open source was needed. Then, when it got large enough, I encouraged people to license their own development, their own parts. Now there are multiple owners sharing all these licenses...

In his autobiography he managed to describe the Mach kernel and Apple's OS X in less than flattering terms:

"Frankly, I think it's a piece of crap," Torvalds says of Mach, the microkernel on which Apple's new operating system is based. "It contains all the design mistakes you can make, and manages to even make up a few of its own." Torvald's comments promise to upset not just Apple fanatics, but also some quarters of the free software movement. The Mach microkernel is also being used as the core of Hurd, a kernel project from the Free Software Foundation that will be an alternative to Linux as the heart of the GNU (Gnu's Not Unix) operating system, originally devised by free software advocate Richard Stallman. The criticism comes in a chapter where Torvalds tells that, on arrival in Silicon Valley in early 1997, Apple's charismatic chief executive Steve Jobs invited him to join Apple and help develop OS X. He says that Jobs was also keen for him to help attract open source developers to the project. The remarks will particularly sting Apple, because the company has made great play of the fact that the core of its new operating system is, like Linux, based on the Unix operating system and was developed on open source software.

Qualified criticism is a gift. If Linus says that a particular approach has its problems, great. The Mach 3 microkernel has a lot of problems, but there is no perfect approach to kernel design. For each and every Mach 3 problem one can find one in the monolithic kernel design advocated by Linus. And if you are a specialist who defends the monolithic kernel as the better alternative, you had better provide solid arguments, not the all-encompassing word "crap" in the best "Father Linus, the greatest kernel designer of all times and nations" style. Actually, even among monolithic kernels the Linux kernel is not the best in many respects. BSD was and still is a better system in the networking department. If you need a firewall, a router, maybe even a web server, it will probably run better on BSD. That makes me wonder about his motivations, and it lowers my respect for Linus. And it definitely reminds me of the old Tanenbaum vs. Torvalds debate right back at the beginning of '92:

While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won. The only real argument for monolithic systems was performance, and there is now enough evidence showing that microkernel systems can be just as fast as monolithic systems ... that it is now all over but the shoutin'.

Although microkernels did not win, they did not disappear either; the design has its merits and to a certain extent found its way into the best commercial Unixes like AIX and into embedded kernels like QNX. Maybe some sort of compromise is the optimal solution. The problem here is that Linus extended his (primitive) knowledge of the field circa 1992 to the situation in 2001. He formed his opinion of microkernels back when he was just a beginner kernel developer, a guy who could barely program in C.

But both the monolithic kernels of the early XXI century and the Mach kernels are quite different from the state of the art of 1992. And he should know that in the current Linux kernel a bad driver equals a kernel panic, not to mention the hassle for driver developers, who are forced to port their code against the endless kernel changes. My (limited) understanding is that QNX has been a quite successful microkernel architecture for years. Considering the critical real-time applications that QNX has to run, I am glad they got their microkernel right. Let's see... Nuclear reactors, nuclear subs, NASA uses it for some things, traffic control systems, etc. A very impressive list for such a "crappy" design.
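The driver-isolation argument can be shown with a deliberately simplified toy (my own illustration, not real kernel code from any system): in a monolithic design a faulty driver shares the kernel's address space, so its crash is the kernel's crash; in a microkernel-style design the "driver" is a separate process, so its death can be observed and contained.

```python
import multiprocessing

def faulty_driver():
    # Simulates a buggy driver: it crashes unconditionally.
    raise RuntimeError("driver bug")

def run_driver_isolated():
    """Microkernel-style isolation, as a toy: the 'driver' runs in
    its own process. Its crash only kills that process; the caller
    sees a nonzero exit code and keeps running (a real microkernel
    could restart the driver server)."""
    p = multiprocessing.Process(target=faulty_driver)
    p.start()
    p.join()
    return p.exitcode  # nonzero: the driver died, but we did not

if __name__ == "__main__":
    code = run_driver_isolated()
    print("driver exit code:", code, "- 'kernel' still alive")
```

Calling `faulty_driver()` directly in the same process, by contrast, would propagate the exception and take the whole program down with it.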

And as the last piece of my collection I will reproduce his famous letter "Because I'm a bastard, and proud of it!" from Kernel Traffic #87, where he stated his position on kernel debuggers in the following way:

I don't like debuggers. Never have, probably never will. I use gdb all the time, but I tend to use it not as a debugger, but as a disassembler on steroids that you can program. None of the arguments for a kernel debugger has touched me in the least. And trust me, over the years I've heard quite a lot of them. In the end, they tend to boil down to basically: it would be so much easier to do development, and we'd be able to add new things faster. And quite frankly, I don't care. I don't think kernel development should be "easy". I do not condone single-stepping through code to find the bug. I do not think that extra visibility into the system is necessarily a good thing. Apparently, if you follow the arguments, not having a kernel debugger leads to various maladies: you crash when something goes wrong, and you fsck and it takes forever and you get frustrated.

people have given up on Linux kernel programming because it's too hard and too time-consuming

it takes longer to create new features. And nobody has explained to me why these are _bad_ things. To me, it's not a bug, it's a feature. Not only is it documented, but it's _good_, so it obviously cannot be a bug. "Takes longer to create new features" - this one in particular is not a very strong argument for having a debugger. It's not as if lack of features or new code would be a problem for Linux, or, in fact, for the software industry as a whole. Quite the reverse. My biggest job is to say "no" to new features, not trying to find them. Oh. And sure, when things crash and you fsck and you didn't even get a clue about what went wrong, you get frustrated. Tough. There are two kinds of reactions to that: you start being careful, or you start whining about a kernel debugger. Quite frankly, I'd rather weed out the people who don't start being careful early rather than late. That sounds callous, and by God, it _is_ callous. But it's not the kind of "if you can't stand the heat, get out of the kitchen" kind of remark that some people take it for. No, it's something much deeper: I'd rather not work with people who aren't careful. It's darwinism in software development. It's a cold, callous argument that says that there are two kinds of people, and I'd rather not work with the second kind. Live with it. I'm a bastard. I have absolutely no clue why people can ever think otherwise. Yet they do. People think I'm a nice guy, and the fact is that I'm a scheming, conniving bastard who doesn't care for any hurt feelings or lost hours of work if it just results in what I consider to be a better system. And I'm not just saying that. I'm really not a very nice person. I can say "I don't care" with a straight face, and really mean it. I happen to believe that not having a kernel debugger forces people to think about their problem on a different level than with a debugger.
I think that without a debugger, you don't get into that mindset where you know how it behaves, and then you fix it from there. Without a debugger, you tend to think about problems another way. You want to understand things on a different _level_. It's partly "source vs binary", but it's more than that. It's not that you have to look at the sources (of course you have to - and any good debugger will make that _easy_). It's that you have to look at the level _above_ sources. At the meaning of things. Without a debugger, you basically have to go the next step: understand what the program does. Not just that particular line. And quite frankly, for most of the real problems (as opposed to the stupid bugs - of which there are many, as the latest crap with "truncate()" has shown us) a debugger doesn't much help. And the real problems are what I worry about. The rest is just details. It will get fixed eventually. I do realize that others disagree. And I'm not your Mom. You can use a kernel debugger if you want to, and I won't give you the cold shoulder because you have "sullied" yourself. But I'm not going to help you use one, and I would frankly prefer people not to use kernel debuggers that much. So I don't make it part of the standard distribution, and if the existing debuggers aren't very well known I won't shed a tear over it. Because I'm a bastard, and proud of it!

Linux and Innovation: Linus Torvalds and Newt Gingrich

See, you not only have to be a good coder

to create a system like Linux,

you have to be a sneaky bastard too ;-)

[Linus Torvalds in <4rikft$7g5@linux.cs.Helsinki.FI>]

The other interesting thing is that Raymondism equates Linux and open source with innovation, and Linus often tries to play this card too. In fact Linux can be considered a neo-conservative revolution (a counterrevolution) against Microsoft. There is nothing very special about the Linux kernel. It is absolutely, 100% true that Linus has complete control over present development. He has in the past, and will in the future, nix excellent ideas that do not conform to his rather limited views. It took 10 years for Linux to get a semi-decent VM, which is still weak compared to FreeBSD's. Linux kernel version 2.2 was worse than 2.0 at handling high demand. Until 2.2.18 or so, if a process used up all the virtual memory, the kernel would more or less randomly pick a process and kill it to free some memory. I think it at least checked that PID != 1, thank goodness. But this still means that Netscape or Midnight Commander going nuts and eating all the memory could leave your box in a "fuzzy" state, where the number and names of surviving applications and daemons cannot be predicted with any certainty.
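To make the point concrete, here is a toy simulation of the behavior described above (my own illustration of the pre-2.2.18 situation, not the actual kernel algorithm): when memory runs out, an essentially arbitrary victim is chosen, with PID 1 as the only protected process.

```python
import random

def pick_oom_victim(processes, rng=random):
    """Toy model of the early-2.2 out-of-memory behavior described
    in the text: pick an essentially arbitrary victim, never PID 1
    (init). `processes` is a list of (pid, rss_kb) tuples. This is
    an illustration, not the real kernel heuristic."""
    candidates = [(pid, rss) for pid, rss in processes if pid != 1]
    if not candidates:
        raise MemoryError("nothing left to kill")
    return rng.choice(candidates)[0]

# A hypothetical process table: init, an X server, Netscape, a shell.
procs = [(1, 400), (812, 52000), (977, 310000), (1044, 8000)]
victim = pick_oom_victim(procs)
print("killing pid", victim)  # any pid except 1
```

The unpredictability is exactly the "fuzzy state" complaint: which daemon survives an OOM episode depends on chance, not on which process actually went berserk.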

In no way can the Linux kernel be considered an advancement of the state of the art in operating system design; it is a reimplementation of the preexisting (and really innovative) ideas of Unix. Yes, a good reimplementation has its own value, and nobody can deny that Linux was an important part of the Unix Renaissance. But is it an innovative OS? Or, in a more narrow sense, is it an innovative kernel? That's a joke. Unix introduced at least seven major innovations in OS design: C as a system programming language, the hierarchical filesystem, pipes and a set of pipe-friendly utilities/filters, regular expressions, devices as files, the shell as the mother of all modern scripting languages, and the first built-in TCP/IP stack. Linux introduced none, zero to be exact (if we do not count the method of development praised so highly in CatB ;-). The really innovative parts of open source development -- the scripting languages such as Perl, PHP, Python, and Ruby, as well as the LAMP stack -- just happened to run on Linux (and not only on Linux; for example, Perl predates Linux: the initial version was developed by Larry Wall in 1987, Perl 3 was released in 1989, and Perl 4 in 1991).
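The pipe-and-filter idea, one of the Unix innovations listed above, is easy to demonstrate with a classic one-liner: small single-purpose filters composed through pipes, where the composition, not any individual tool, does the work.

```shell
# Count how often each word occurs: three standard filters chained
# through pipes. sort groups the duplicates, uniq -c counts them,
# and the final sort -rn orders by frequency.
printf 'apple\nbanana\napple\n' | sort | uniq -c | sort -rn
```

This style predates Linux by two decades; the Linux kernel inherited it wholesale from Unix.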

If one compares Linux with BeOS, Inferno, or even with OS/2 and Amiga, one can see that in major design decisions Linux is a very conservative, backward OS. As Rob Pike noted in his "Systems Software Research is Irrelevant" (http://plan9.bell-labs.com/cm/cs/who/rob/utah2000.pdf), Linux can be considered a sign that computer science research has become irrelevant; he claimed that the whole situation in OS design is generally bad and requires action.

In a sense it is more a powerful social movement with political overtones, very similar to Newt Gingrich's "Contract with America" (fight corruption and waste in government spending, tax reform, a balanced budget, etc.), than a technological advance. There are other things that make the analogy between Newt Gingrich and Linus Torvalds much less superficial than it looks at first sight.

For example, both understand that the principal task of the leader is to define a positive "vision" around which followers can cohere, define strategies to achieve this vision, and then delegate authority to others to carry out operations and tactics, while preserving the key functions of communication center and final decision maker. That's why Torvalds so easily delegates programming of various parts of the kernel to other people. As long as he is in the controlling position of leader and configuration manager of the whole kernel, with the final word on what will be included in a particular version, it does not matter much who produces a particular module or driver. Enjoy your ride, as long as you agree that I am the driver.

Prior to becoming the House speaker, Newt Gingrich had spent over a decade writing and speaking about, and organizing around, an eclectic neo-conservative model, which he argued should take the place of Great Society liberalism. In Gingrich's view, his conservative predecessor, Ronald Reagan, had ultimately come up short as a transformative leader because his horizon as a leader had been defined by opposition to liberalism rather than a positive vision of a new order in American politics. "The great failure of the modern Republican party, including much of the Reagan administration," Gingrich wrote in 1984, "has been in its effort to solve problems operationally or tactically within the framework of the welfare state . . . . Real change occurs at the levels of vision and strategy." That is somewhat similar to Torvalds' attitude toward Stallman. Gingrich reiterated this theme in a 1995 interview:

" You have to blow down the old order in order to create the new order. Reagan understood how to blow down the old order, but wasn't exactly sure what the new order would be."

When Gingrich became speaker in 1995, his overriding goal was to succeed where President Ronald Reagan had failed, by creating a "new political order" in the United States. That sounds pretty similar to the "world domination" rhetoric of Linus Torvalds. Both share extraordinary ambition, an executive-style understanding of political leadership, and an acute ability to work well with the press. Both repeatedly demonstrated a willingness to challenge conventional wisdom and take political risks to advance their goals (Linus's decision to bless commercial distributors was very unconventional, to say the least, for a "member of the GNU generation").

Being the chief configuration manager for a large and complex project is a very challenging, exhausting, and thankless job. I suspect that by now Linus really hates his job. But survival under relentless pressure for a decade attests that Linus Torvalds has a really rare combination of top programming talent and the traits typical of gifted managers and skilled politicians. That's a very rare combination. In this respect he is a truly outstanding person, outstanding without any reservations, because most people would quit such a job after the first several years.

His job requires not only talent and an understanding of internals (which is easier for Linus Torvalds than for anybody else, because he saw the whole development from the very beginning, so for him the complexity increased gradually from a pretty low level), but also the ability to work long, long hours almost without any vacations. And if you add Hollywood-level popularity and the corresponding time spent in PR events, this is not a very enviable position. And unlike Hollywood stars, he cannot resort to alcohol, drugs, or womanizing, or any combination of those three, to reduce this pressure ;-).

Work under constant stress and constant overload does not make life easy for anybody. The first problem is that due to overload some decisions now have to be made on intuition, as there is no time (and sometimes no technical background) to understand the consequences of the solutions proposed. The second and major problem is that managing a project of the current size of the Linux kernel is probably just too much for any single person, no matter how talented he or she is. The first signs of this overload were already evident in 1998, a year after he was welcomed to the "Hotel California".

In April 1998 the birth of Linus's second daughter caused great joy, and substantial disruption in kernel development, as all work stopped and many patches got lost. Some grumbling resulted as it became clear just how dependent the entire process was on Linus.

In October 1998 tensions again exploded on the Linux kernel mailing list after Linus dropped too many patches. Linus walked out in a huff and took a vacation. Things returned to normal, of course, but some people got talking. It became clear once again that the Linux kernel had grown too big for one person to serve even just as its configuration management specialist. Some ways of reducing the load on Linus were discussed, but nothing was really resolved.

Understandably, kernel development slowed down considerably, also due to the complexity of the current version of the kernel and the SMP work (and here it is really difficult for a student with just an MS to compete with IBM, HP, or Sun laboratories staffed with a lot of PhD holders ;-). That inevitable slowdown was first predicted in the now famous Halloween I memo, which provided one of the first objective (an objective attitude is our attitude toward people we do not like ;-) evaluations of Linux. For example, there was no other major open source implementation of SMP. And the commercial implementations were complex and, at the time, closed. So while there was some room for Linux to catch Solaris and NT in the SMP area, very little could be achieved by plainly following somebody else's taillights. Here I would like to stress the importance of the academic environment for OSS projects, an environment that was lost.

Not that I respect PhD holders for the diploma they have (a lot of PhD dissertations are weak to the point of being a scam and IMHO worth much less than a single decent OSS program), but I still think it is extremely beneficial for an OSS software developer to work for an academic institution, or at least be close to a university atmosphere. That was true for Dennis Ritchie and Ken Thompson, who did their groundbreaking work on Unix at Bell Labs (which is as close to an academic institution as one can get), and for the later work at Berkeley (and please remember that in some areas Berkeley is a more important contributor to Unix than Bell Labs). The same is true for Richard Stallman. Only Larry Wall looks like a good counterexample here (although he later joined the publisher O'Reilly as a free researcher, getting into an essentially semi-academic atmosphere), but one can always switch from Perl to Tcl or Python :-). And it is common knowledge that large bureaucracies can have niches that provide almost academic freedom for several years.

In any case, kernel development in 1998 slowed down quite visibly. There was no new kernel version in 1998 -- a situation that would have been unimaginable to the Linux crowd in 1992 or 1994 or even in 1996. Version 2.2 slipped to January 1999.

In late January 1999 kernel version 2.2 at last shipped -- somewhat buggy because of the pressure to release it. All of a sudden Torvalds declared the source code final, noting that "enough is enough" and that "Every program has bugs, and I'm sure there are still bugs in this. Get over it -- we've done our best." Calling any kernel stable means drawing a somewhat arbitrary line, but market pressure is market pressure, and Torvalds' ability to withstand it under the current level of market hype was extremely limited. Essentially, as in the past, Linus Torvalds had no choice but to give people what they wanted...

Version 2.2 was in some important areas a significant improvement over the 2.0.x kernel, but it was larger and needed time to stabilize. In retrospect, the improvements also help one understand what was wrong with the previous version. For example, among other advantages, FreeBSD had a big speed advantage over Linux 2.0.x when it came to NFS throughput, because kernel 2.0.x did not yet have kernel-space support for it (it was all user space, with redundant copying -- a lot of overhead, which was removed for Beowulf clustering). With Linux 2.2 and KNFS, I wonder if FreeBSD is still faster in this area.

Still, the kernel itself was far from meeting enterprise standards. As Mark Russinovich wrote in his article Linux and the Enterprise (April 1999):

Let me state clearly at the outset that I don't intend to bash Linux in this article, nor do I intend to proclaim NT's superiority to Linux. I base the information I present on a thorough study of Linux 2.2, including its source code. I hope that by revealing these problems, I will encourage Linux developers to focus on making Linux ready for the enterprise. I also want to dispel some of the hype that many Linux users have uncritically accepted, which has given them the false impression that Linux is ready for enterprise prime time.

What Does "Enterprise-Ready" Mean?



Before an OS can compete in the enterprise, it must deliver performance levels on network server applications that rival or exceed the levels that other OSs achieve. Examples of network server applications include Web servers, database servers, and mail servers. OS and hardware vendors typically use results from industry-standard benchmarks such as Transaction Processing Council (TPC)-C, TPC-D, and Standard Performance Evaluation Corporation (SPEC) SpecWeb to measure proprietary OSs or hardware against other vendors' products. Vendors spend millions of dollars on benchmark laboratories in which they tune, tweak, and otherwise push state-of-the-art hardware and software technology to the limit. Enterprise customers, in search of a computing infrastructure that provides maximum capacity, often turn to benchmark results for guidance, and OS and hardware vendors holding benchmark leads take great pride in being king of the hill at any given time. Thus, competing in the enterprise means removing every performance impediment possible. Overlooking even the smallest drag will create an opening that a competitor can drive through to obtain a valuable lead on a benchmark. What complicates the science of engineering an OS for the enterprise is that an OS might have subtle design or implementation problems that don't adversely affect performance in casual desktop or workgroup environments. Yet these problems can keep the OS from achieving competitive results in an enterprise-class benchmarking environment. A typical enterprise-application benchmarking environment includes dozens of powerful multiprocessor computers sending requests as fast as they can over gigabit Ethernet to an 8-way server with 4GB of memory and hundreds of gigabytes of disk space.

Efficient Request Processing



Network server applications typically communicate with clients via TCP or UDP. The server application has either a published or well-known port address on which it waits for incoming client requests. When the server establishes a connection with a client or receives a client request, the server must then process the request. When the server application is a Web server, the Web server has to parse the HTTP information in the request and send requested file data back to the client. A database server application must parse the client's database query and obtain the desired information from the database. For a network server application to scale, the application must use multiple kernel-mode threads to process client requests simultaneously on a multiprocessor's CPUs. The obvious way to make a server application take advantage of multiple threads is to program the application to create a large pool of threads when it initializes. A thread from the pool will process each incoming request or series of requests issued over the same client connection, so that each client request has a dedicated thread. This approach is easy to implement but suffers from several drawbacks. First, the server must know at the time it initializes what kind of client load it will be subject to, so that it can create the appropriate number of threads. Another drawback is that large numbers of threads (an enterprise environment can produce thousands of simultaneous client requests) can drain server resources significantly. Sometimes, resources might not be adequate to create all the threads the application wants to create. Furthermore, many threads actively processing requests force the server to divide CPU time among the threads. Managing the threads will consume precious processor time, and switching between competing threads introduces significant overhead. 
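(An aside from me, not Russinovich: the bounded worker-pool alternative he goes on to describe can be sketched in a few lines of Python. This is a generic illustration, not NT's or any real server's code: a small, fixed number of worker threads pull requests from a shared queue, so the thread count stays constant regardless of client load.)

```python
import queue
import threading

def serve_with_pool(requests, num_workers=4):
    """Process `requests` with a fixed pool of worker threads.

    Each worker loops, pulling work off a shared queue, so the
    number of threads is bounded no matter how many requests
    arrive -- the opposite of the thread-per-request model."""
    work = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            req = work.get()
            if req is None:          # sentinel: shut this worker down
                break
            with lock:               # protect the shared result list
                results.append(f"handled:{req}")

    workers = [threading.Thread(target=worker) for _ in range(num_workers)]
    for w in workers:
        w.start()
    for r in requests:
        work.put(r)
    for _ in workers:                # one shutdown sentinel per worker
        work.put(None)
    for w in workers:
        w.join()
    return results

print(len(serve_with_pool([f"req{i}" for i in range(100)])), "requests handled by 4 threads")
```

Here 100 "requests" are handled by just 4 threads; the scheduling and context-switch overhead Russinovich describes grows with the worker count, not with the client count.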
Because a one-thread-to-one-client-request model is inefficient in enterprise environments, server applications must be able to specify a small number of threads in order to divide among themselves the processing for a large number of client requests. Where this client-multiplexing capability is present, no one-to-one correspondence between a thread and a client request occurs. Neither does a one-to-one correspondence between a client request and a thread occur—one thread might share a client request's processing with several other threads. Several OS requirements are necessary for a client-multiplexing server design to be feasible. The first requirement is that a thread must be able to simultaneously wait for multiple events: the arrival of a new client request on a new client connection, and a new client request occurring on an existing client connection. For example, a Web server will keep multiple browser connections open and active while accepting new browser connections as multiple users access a Web site the server manages. Connections between a browser and the server can stay open for several seconds while large files transmit over a connection, or while the browser requests multiple files over the connection. The second requirement is that the threads must be able to issue asynchronous I/O requests. Asynchronous I/O is an OS-provided feature whereby a thread can initiate I/O and perform other work while the I/O is in progress—the thread can check the I/O result at a later time. For example, if a server thread wants to asynchronously read a file for a client request, the thread can start the read operation and wait for other client requests while the read is in progress. When the read completes, the system notifies a thread (not necessarily the thread that began the read operation) so that the thread can check the I/O's status (i.e., success or failure) and whether the I/O is complete. 
Without asynchronous I/O, a thread initiating an I/O operation must wait while the operation takes place. This synchronous I/O causes multiple-client-per-thread server designs to perform poorly. Because such server designs designate limited thread pools, taking threads out of commission to perform I/O can lead to a situation in which no threads are available to accept new client requests or connections. In such a case, a multiprocessor's CPUs might remain idle while client requests sit backlogged. Worse, the server might never have a chance to service client requests, because the client might stop waiting for the server. Figure 1 contrasts asynchronous and synchronous I/O.

Linux and Request Processing



Unfortunately, Linux 2.2 doesn't satisfy either client-multiplexing server-design requirement: Linux 2.2 cannot efficiently wait for multiple events, and it doesn't support asynchronous I/O. Let's look more closely at each of these concerns. Linux provides only one general API to server applications that want to wait on multiple requests—the select API. Select is a UNIX system call that has been present in every UNIX release since the OS's initial development. Select is one of the OS interface functions that has become part of the POSIX standard for UNIX API compatibility. One reason that the Linux select implementation is not an acceptable function for waiting on multiple events is that the system uses select to notify all threads that are waiting on the same event whenever the event occurs (e.g., the arrival of a request from a new client). Notifying multiple threads in this way degrades server performance: Only one thread can handle the new request or connection, and the other notified threads must return to a state of waiting. In addition, synchronization causes overhead as the threads agree among themselves which one will service the request. Other secondary overhead results when the OS divides CPU time among the threads it has needlessly notified. This kind of limitation forces a network server application to designate only one thread to wait for new incoming client requests. This thread can either process the new request itself, waking up another thread to take over the role of waiting for new requests, or the original thread can hand the request off to a waiting thread. Both alternatives add overhead, because every time a new client request arrives, the waiting thread receives notification and must then notify another thread. If Linux provided some additional application support, the OS could wake up only one thread. 
For example, an application could specify that even though multiple threads are waiting for a particular event to occur, the application wants only one of the threads to receive notification for each occurrence of the event. NT provides such support for its waiting functions (NT server applications do not typically use select, although NT implements the select call for compatibility with the POSIX standard) to allow multiple threads to efficiently wait for incoming client requests. Select suffers another serious problem: It doesn't scale. A Linux application can use select to wait for up to 1024 client connections or request endpoints. However, when an application receives notification of an event, the select call must determine which event occurred, before reporting the event to the application. Select uses a linear search to determine the first triggered event in the set the application is waiting for. In a linear search, select checks events sequentially until it arrives at the event responsible for the notification. Furthermore, the network server application must go through a similar search to determine which event select reports. As the number of events a thread waits for grows, so does the overhead of these two searches. The resulting CPU cost can significantly degrade a server's performance in an enterprise environment. NT incorporates a unique feature known as completion ports to avoid the overhead of searching. A completion port represents a group of events. To wait on multiple events, a server associates the events with a completion port and waits for the completion port event. No hard upper limit exists on the number of events a server can associate with a completion port, and the server application need not search for which event occurred—when the server receives notification of a completion port event, the server also receives information about which event occurred. 
Similarly, the kernel doesn't perform searches, because the kernel knows which events the system associates with specific completion ports. Completion ports simplify the design and implementation of highly scalable server applications, and most enterprise-class NT network server applications use completion ports.
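The select mechanics Russinovich criticizes are easy to see in miniature (my own sketch, using pipes instead of sockets; the readiness semantics are the same): the caller hands the kernel the full list of descriptors on every call, and after wakeup both the kernel and the application effectively scan that list to find the ready ones.

```python
import os
import select

# Three 'client connections', simulated with pipes.
pipes = [os.pipe() for _ in range(3)]
read_fds = [r for r, _ in pipes]

# A client 'request' arrives on the second connection only.
os.write(pipes[1][1], b"GET /\n")

# select() takes the WHOLE descriptor list on every call; the kernel
# checks each one, and the application then scans the returned list
# to find the ready descriptor. With thousands of descriptors both
# scans become a per-wakeup linear cost -- the problem that NT's
# completion ports (and, later, Linux's epoll) were built to avoid.
ready, _, _ = select.select(read_fds, [], [], 0)
print("only the second pipe is ready:", ready == [read_fds[1]])

for r, w in pipes:
    os.close(r)
    os.close(w)
```

A completion-port (or epoll) style interface inverts this: the kernel remembers the registered descriptors and hands back only the ready ones, so the per-wakeup cost does not grow with the number of idle connections.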

Even on the desktop, 2.2 still had a long way to go: it demonstrated sound stutter, jerkiness when moving windows, and slow window manager performance (which is connected with weak support for interactive applications in the kernel).

Another consequence of this overload was that polishing the existing codebase became a top priority. Linux needed to compete with commercial systems in benchmarks, and that requires polishing and tuning, polishing and tuning... This became pretty clear during the so-called Mindcraft fiasco that took place in early 1999. This was a Microsoft-sponsored (and Mindcraft-executed) test which showed that, despite the claims of Raymondism, Linux 2.2 still had problems in the application area where it is most widely used -- as a web server. Here are some important results from the test:

With 4 CPUs and 1 Gig of RAM, NT & IIS achieved 4,166 http requests per second.

With 4 CPUs and 1 Gig of RAM, Linux & Apache achieved 1,842 http requests per second.

With 1 CPU and 256 MB RAM, NT & IIS achieved 1,863 http requests per second.

With 1 CPU and 256 MB RAM, Linux & Apache achieved 1,314 http requests per second.
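Working out the scaling factors from these four numbers makes the SMP point concrete: going from 1 CPU to 4, NT's throughput grew far more than Linux's did, so the gap is less about raw single-CPU speed than about SMP scalability.

```python
# Mindcraft figures quoted above, in HTTP requests per second.
nt_1, nt_4 = 1863, 4166        # NT + IIS on 1 and 4 CPUs
linux_1, linux_4 = 1314, 1842  # Linux + Apache on 1 and 4 CPUs

nt_scaling = nt_4 / nt_1          # throughput growth from 1 to 4 CPUs
linux_scaling = linux_4 / linux_1

print(f"NT scaling 1->4 CPUs:    {nt_scaling:.2f}x")   # ~2.24x
print(f"Linux scaling 1->4 CPUs: {linux_scaling:.2f}x")  # ~1.40x
print(f"NT lead on 1 CPU:  {nt_1 / linux_1 - 1:.0%}")    # ~42%
print(f"NT lead on 4 CPUs: {nt_4 / linux_4 - 1:.0%}")    # ~126%
```

On one CPU NT leads by roughly 42%; on four CPUs the lead widens to roughly 126%, which is the SMP weakness discussed below.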

It was natural to expect that Eric Raymond would fiercely defend his beloved operating system and his CatB fairy tale against any enemies, including the truth. And the desire to see a rebuttal was instantly gratified (see Linux Today: Eric S. Raymond -- Latest FUD tactic may be backfiring; the readers' postings are also very telling, and say much more about Raymondism as a phenomenon than my writings):

Microsoft's latest FUD (Fear, Uncertainty and Doubt) tactic may be backfiring. A 21 April ITWeb story [1] reported results by a benchmarking shop called Mindcraft that supposedly showed NT to be faster than Linux at SMB and Web service. The story also claimed that technical support for tuning the Linux system had been impossible to find. Previous independent benchmarks (such as [2]) have found Linux and other Unixes to be dramatically faster and more efficient than NT, and independent observers (beginning with a celebrated InfoWorld article in 1998 [3]) have lauded the Linux community's responsiveness to support problems. Linux fans smelled a rat somewhere (uttering responses typified by [4]), and amidst the ensuing storm of protest some interesting facts came to light.

1. The benchmark had been paid for by Microsoft. The Mindcraft press release failed to mention this fact.

2. Mindcraft did in fact get a useful answer [5] to its request for help tuning the Linux system. But they did not answer the request for more information, neither did they follow the tuning suggestions given. Also, they forged the reply email address to conceal themselves -- the connection was made after the fact by a Usenetter who noticed that the unusual machine configuration described in the request exactly matched that of the test system in the Mindcraft results.

3. Red Hat, the Linux distributor Mindcraft says it asked for help, reports that it got one phone call from them on the installation-help line, which isn't supposed to answer post-installation questions about things like advanced server tuning. Evidently Mindcraft's efforts to get help tuning the system were feeble -- at best incompetent, at worst cynical gestures.

4. An entertainingly-written article [6] by the head of the development team for Samba (one of the key pieces of Linux software involved in the benchmark) described how Mindcraft could have done a better job of tuning. The article revealed that one of Mindcraft's Samba tweaks had the effect of slowing their Linux down quite drastically.

5. Another Usenet article [7] independently pointed out that Mindcraft had deliberately chosen a logging format that imposed a lot of overhead on Apache (the web server used for the Linux tests).

So far, so sordid -- a fairly standard tale of Microsoft paying to get exactly the FUD it wants from a nominally independent third party. But the story took a strange turn today (22 Apr) when Microsoft spokesperson Ian Hatton effectively admitted [8] that the test had been rigged! "A very highly-tuned NT server" Mr. Hatton said "was pitted against a very poorly tuned Linux server". He then attempted to spin the whole episode around by complaining that Microsoft and its PR company had received "malicious and obscene" email from Linux fans and slamming this supposed "unprofessionalism". One wonders if Hatton believes it would be "unprofessional" to address strong language to a burglar caught in the act of nipping the family silver. In any case, Microsoft's underhanded tactics seem (as with its clumsy "astroturf" campaign against the DOJ lawsuit) likely to come back to haunt it. The trade press had largely greeted the Mindcraft results with yawns and skepticism even before Hatton's admission. And it's hard to see how Microsoft will be able to credibly quote anti-Linux benchmarks in the future after this fiasco.

Despite numerous similar attempts by Raymondists (and some Linux developers, including Alan Cox) to present this finding as a one-sided report and to destroy Mindcraft's credibility by painting the test as a PR exercise performed by paid Microsoft puppets (attempts that most Linux developers regretted later), Linus was faced with the urgent necessity of raising his own game in TCP/IP stack and SMP improvement. He understood better than anybody else that even if Linux had been properly tuned, it still might not have matched NT on that hardware. Part of the test was run on a high-end SMP box, and at that time NT might well outperform Linux 2.2 on that sort of platform. SMP in the 2.2 kernel was not as good as in Windows NT, to say nothing of AIX or Solaris. And this put him in the extremely difficult position of making decisions about how to proceed without a real kernel development lab and without any development plan. That probably contributed to his already substantial overload and played a role in the one-year delay of the 2.4 kernel. Here is how he recollected the events later in his December 2000 Linux Magazine interview:

...MindCraft thing [MindCraft is an independent research laboratory that last year reported test results -- paid for by Microsoft -- indicating that Windows NT outperformed Linux in certain basic server tasks. --Ed].

LM: How hard was that to deal with?

LT: It was really personal for a few months. I took it fairly personal, especially the way they did it.

LM: What happened?

LT: Well, it was a panel discussion in Chicago and it was the first time I'd been on the floor at the same time as people from Microsoft. Five minutes before the panel started, the Microsoft guy handed out this paper that contained the results from the MindCraft study, and I didn't even have time to really see what it meant. So, when he actually took this up in the panel, it was hard for me to say anything.

LM: But in the end, Microsoft was right, don't you think?

LT: Microsoft was right. The point was that it actually gave us a much better baseline to compare what we were bad at. We'd probably been naive and thought that we were doing some things really well. Then having somebody do that comparison was very motivational. That was quite important. Everybody expected some kind of attack from Microsoft, so I think we'd been a bit arrogant in believing that there were so many benchmarks that we were so much better at than NT. It took a lot of people by surprise, including me. We really lost badly in that one. There was certainly that kind of naivete.

"My name is Linus, and I am your God"

- Linus Torvalds, Linux Expo, Durham NC, 1998

The keynote by the world's most idolized geek, Linus Torvalds, was already underway by the time we rushed into the massive auditorium.

Paul Ferris -- A Date With Fearless Leader

The well-oiled machine of Linux startup PR professionals, distributors and, maybe, in the pre-IPO period even high-caliber PR people from the largest investment banks created a very interesting public image of Linus Torvalds. The zeal to create a "positive, sellable image" of the Father of Linux might be studied as an example of a very successful PR campaign in the communication departments of Ivy League universities. Even Linus Torvalds' appearance on the first page of Forbes was probably not that accidental.

All in all it created a cult-of-personality effect similar to that surrounding a typical leader of a totalitarian party. Several journalists who tried to explore this "make money fast" possibility contributed to this trend too. And the results were pretty impressive, even for Eastern Europeans who suffered from such melodramas for most of their lives. As in the case of many religious cults and radical movements, Linus became especially attractive as a role model for teenagers. Here are typical examples:

Name: Kenneth Kowalsky Location: Victoria BC - CANADA Occupation: Student

Linus is my idol. I am a 15 year old student. Programming and computer science is my life. I have used Linux for the past 3 years and found the subject of Linux to be highly interesting - not because it is Unix or the fact that it is highly stable - because it has started a revolution. If you compare Linux to MS Windows NT, Linux comes out on top. The reason is the open source factor, while windows is only worked on by a 1000 programmers (at most) Linux is worked on by millions. This is because Linux is open source, which allows anyone to patch up or add to the current design. I mean anyone, including you who are reading this note could go to kernel.org and get the tarball and within the hour be knee deep in source code. This gives Linux an advantage, the kind of advantage that only can be achieved with open source. I don’t mean to say that every instance of Linux on the planet will kick butt with windows. When Linux is poorly configured it can crash more than windows, but when properly configured it has ten times the power that any windows copy could ever dream of. Linus is the person that I hope to be one day, don’t get me wrong. I don’t want to be the guy in charge of Linux, I don’t even really want to start a movement – but I do want one thing in common with Linus – I want to be able to give away something that will better the computing world. If you think about it, Linus gave away something that could have brought him millions – or even billions. Linux could have been the next Microsoft Windows. It takes an incredible person to do something like that – that is why I look up to Him.

Name: CVRadhakrishna Location: Trivandrum-India Occupation: Scientist

Linus is to the Computer Field as MK Gandhi was to public life. Mahatma Gandhi gave exemplary concepts like Satyagraha (passive resistance?) against oppression and Linus has given the Computer world Linux against monopolies, trademarks and patent dominance. Both gave freedom to the people, one at the political and social level and the other in the realm of Computers.

Name: Faisal Halim Location: Abu Dhabi, U.A.E. Occupation: Student

A MODEL worth EMULATING! Mr. Linus Torvalds has proved to students a model worth emulating. Not only does he develop software based on the principles of software engineering, he actually does not bear a fanatic attitude towards software licences. Here, at Islamia English School, we, the boys of A Level I, are asked to be models for the younger boys to emulate. I think Mr. Linus is a man we can look up to emulate, besides our teachers. He has upheld the principles of the hacker culture, been well mannered in the face of companies violating the norms of the computer industry, a premise where many people would just start to flame, he gives importance to his appearance, he is capable of making presentations, and he has broken the dark mist that has gathered on hackerdom by bringing up a family.

What is really sad is that the initial reaction of Linus (and Alan Cox) to the Mindcraft fiasco might be an early warning of a tendency common to many other open source projects: the initial developer, instead of being part of the solution, eventually becomes part of the problem. Identification with the project and the desire to keep control of it at some point start to hurt the project's future. Please remember that revolutions usually eat their own children, and big software projects are not so different from revolutions.

But as for Linus' "world domination" goal: the attempt to play (and win) both the desktop and the server game was far from an encouraging sign. This is a continuation of tremendous overload, pure and simple. That's why version 2.2 lacked a journaling file system, which means that if fsck can't fix a disk after a power failure or system crash, you are pretty much hosed and the most recent backup is your only chance, if any. That's not good for availability, but the project leader is just a mortal human...

But the question of succession and the organization of a Linux kernel lab has another dimension. If you are the leader of a big project you need to be open about succession, because the project will definitely outlive its initial leader. Here Torvalds' behavior looks really ambivalent and demonstrates his supersized ego. And that spells trouble for Linux as a project and especially for Linux companies like Red Hat or Caldera. As Russ Mitchell pointed out in his Business 2.0 paper:

It would be natural to expect Torvalds to respond to questions about succession issues. But he's not talking. Busy developing software at computer chip maker Transmeta in Santa Clara, Calif., even as he continues to manage the Linux project, Torvalds has been keeping a low profile of late. His handlers rebuffed several invitations from Business 2.0 to talk about the Linux organization, the question of succession, and his own plans for the future. Although the media tend to trumpet Linux as a global collective of hundreds or thousands of programmers headed by Torvalds, the truth is a bit more complicated. In fact, Torvalds and a tight cabal of top-notch programmers that numbers fewer than a dozen do most of the heavy lifting. Torvalds and his small team are responsible for developing new iterations of Linux operating system software and for fixing bugs and otherwise maintaining the current iteration. While the group is barely known within the software industry, Torvalds' successor would emerge from this group. More than two dozen Linux insiders were polled to name the one individual most likely to take the baton from Torvalds, and one name consistently turned up: Alan Cox. Cox is a bush-bearded, long-haired Brit who lives in England and works for Red Hat. Cox is one of the "few people [Torvalds] trusts to make important decisions about future directions," says Peter Wayner, author of Free For All, a recent book on the history of Linux. Cox, he says, is "responsible for making sure that most of the new ideas that people suggest for Linux are considered carefully and integrated correctly." When Torvalds took his first vacation in eight years last summer (a mere two weeks), Cox ran the operation -- and he did more than keep the wheels on the tracks: "While Linus was gone, Linux development became unblocked in certain ways," says Michael Tiemann, an open source pioneer who founded Cygnus Systems and now is chief technology officer at Red Hat.
Tiemann says Cox made a number of decisions about Linux code that "broke some logjams" that had been hampering Linux development. Cox and Torvalds could hardly look less alike. Torvalds is a cherub-cheeked boy next door. Cox is the wild and wooly hippie freak. Don't fall for stereotypes, though -- it turns out that Torvalds is the more loosey-goosey manager, according to Tiemann. "Linus is adamant that people do their own thing," he says. "His message [to the programmer] is that you should do what works for you. Cox says to them: 'This is how it works best together, this is how your contribution can become more of a building block.' Linus seems more of an anarchist. Alan seems more constructionist." ... When a corporation is headed by a charismatic leader who won't spell out a clear path of succession, investors usually get spooked. Are Linux customers worried? There is little evidence at the moment. But they may be unaware of how loose and uncertain is the structure that Torvalds sits upon. They may be unaware that Linux's benevolent dictator has chosen not to reveal his succession plans -- or even to say whether he has crafted any succession plan at all.

As I already mentioned in the "Raymondism" section, there were warning signs of the common "cult of personality" disease in Linux development starting from early 1998. Work on Linux standards was (and is) marginal and is not supported by Linus, as it undermines his power as the supreme technical guru. Documentation of internals was in really bad shape, and Linus politely explained that "he is bad at documentation" :-).

Starting with kernel 2.2-based distributions, some interoperability problems began to cause concern, and papers started to appear about problems in this area. The speed of development and the overload make any fixes to this situation problematic, though in late 2000 market conditions seemingly took care of the minor Linux distribution players without any lab. Still, there are questions to be answered about the maturity and rationality of the decision to go for the desktop, low-end servers and high-end computing at the same time, with a single person as the supreme coordinator of all development efforts. And this particular question is very difficult to answer because of the personality issues involved.

For example, it's not clear how long Linus will be able to contribute positively to kernel development or, more importantly, to coordinate productively the increasingly complex and demanding configuration management job on both ends -- desktop and high-end server. That to a certain extent deprives Alan Cox and several other major developers of opportunities to influence strategic decisions in the development.

Currently Linus is still the focal point of all kernel development efforts, but with three daughters he is definitely not able to work ten hours or more a day on this problem (ten hours a day, six days a week looks like a bare minimum for the leader of such a project).

Yes, I understand that he wants to achieve his financial goals, but in such a situation the extent to which he is still a positive force is unclear. Maybe it's time for him to move his responsibilities as kernel development coordinator to some independent body like the proposed Linux Kernel Development Lab, financed by all commercial Linux distributors. For example, Linus is now trying to coordinate symmetric multiprocessing support in the kernel without a proper laboratory to test proposed solutions. And actually without any supporting staff (unless there is some shadow support staff at Transmeta). He essentially controls very few areas of the kernel (probably less than 10%, including the memory management code and the scheduler, both at a level far beyond the competition and both a constant source of problems), so his main contribution now is management of the development. It remains to be seen whether he will be up to the task and will continue to play a constructive role in a movement that has largely outgrown the model of a loosely coupled mob of Internet-connected developers.

Here his Transmeta affiliation looks far from being the best solution. And I am not only talking about a possible conflict of interest in the style of "what is good for Transmeta is good for Linux". I believe that Larry Wall made a better decision when he went to work for O'Reilly. First, publishers are closer to academic institutions than most commercial organizations. Second, at a certain point you need a real organization to help you in your efforts. From that point of view, organizations like FreeBSD or even GNU, with several core developers able to contact each other physically, have an advantage over the current Linux infrastructure.

In any case, in some 1999 interviews Linus looks exhausted to the level of burnout, and it is evident that he hated having to produce another version of the kernel in 1999. In one interview he even suggested that it probably would be better if Linux returned to the status of his home hobby. This, of course, is now completely impossible, no matter how much he wants it or how exhausted he feels. And he was forced into another rat race to get kernel 2.4 out in 1999 -- a race that was lost.

The question of burnout is also important for the other volunteer kernel developers, as excessive load is the best way of killing any interest in participating in any volunteer project. I think this is one of the main dangers, as talented volunteer developers are the most valuable capital Linux has. With hired developers only (say, Red Hat people exclusively), Linux is not much different from closed commercial OSes like Solaris. It's just more difficult to sell it profitably because of the "once GNU, forever GNU" factor.

From another point of view, if a program has a couple hundred thousand lines, does it matter much whether it is open source or not? Open source at this level is just an intermediate representation, and other intellectual assets like internals documents, special tools for dealing with this level of complexity, etc. are significant barriers to entry. That actually explains Linus' unwillingness to spend any time documenting development. Netscape's failure with opening Communicator showed that large-scale projects are special beasts, and the complexity of a project at a mature stage is a powerful barrier to entry even for the most motivated and skilled developers. And please remember that before the mid-seventies almost all IBM mainframe software products were open sourced...

Since approximately 1997 Linus has probably been the best-known Finn on the planet. He became a media darling -- a very dangerous and rather time-consuming occupation, leaving not much time to devote to the kernel :-). And the media is pretty cruel, as it serves as a powerful amplifier of the "cult of personality" disease. He now has dozens of pages and hundreds of papers devoted to him, including this chapter ;-). Among the most active builders of the Linus cult I would like to mention Eric Raymond, who lionized Linus in his famous "The Cathedral and the Bazaar" paper (here Eric Raymond's motives are pretty clear -- he wanted to benefit from the cult; see my OSS page for details). The Linux movement instantly became the Open Source movement, and where Linux and Open Source are concerned, hyperbole from the digerati hype meisters proliferates nearly as quickly as the hyperlinks. Here is one quote from LinuxWorld Today that is as close to the style of Soviet media depicting party functionaries as one can ever get:

Throughout the entire keynote the audience listened almost in awe to this man with the good humor, the technical genius, and the incredible people skills that have been able to harness the brilliance of the bazaar, those hundreds, thousands of developers on the Internet, so that they all pull together to move Linux forward, release by release, to the point where it is today. An entirely unlikely object of affection of nearly the entire computing industry.

And he himself proved to be as susceptible to this disease as communist bureaucrats. In an early 1999 interview Linus for the first time used the word "invented" about Linux -- a word completely inapplicable to a clone OS and very close to the Microsoft marketing style (as everybody knows, Microsoft "invented Windows" via a blatant rip-off of the Mac GUI ;-). As always he failed to mention GNU (moreover, this time he even forgot to mention that the adoption of the GPL license was the smartest thing he ever did; probably by this time he already had some reservations -- see the KDE jihad):

PC Week: Give us the short history of Linux's development.

Torvalds: Basically, I invented it eight years ago, almost exactly eight years ago. It started small, not even an operating system. It was just a personal project. I just was doing something fun with my new machine. It kind of evolved through luck and happenstance into an OS, simply because there was very much a void where there wasn't much choice for someone like me. I couldn't afford some of the commercial OSes and I didn't want to run DOS or Windows -- I don't even know, did Windows really exist then?

PC Week: You could have copyrighted Linux and made a fortune. Why did you make it an open source code operating system, and will that model work in the future as Linux acceptance grows?

Torvalds: It started out as a personal belief that, yes, open source was needed. Then, when it got large enough, I encouraged people to license their own development, their own parts. Now there are multiple owners sharing all these licenses...

"Father" Linus Torvalds is now the holy cardinal of the church of Linux. You can find more relevant quotes in the Webliography to the chapter. Anyway, this "superhero" status of Linus Torvalds reminds me of communist empires with their First Secretaries determined to die "serving the country" :-). And, yes, the kernel can be a kind of mausoleum...

What I would like to stress is that Open Source is a very important avenue of software development that needs to be supported, but it is unrealistic to consider it a panacea -- software development is really hard, open sourced or not. So if one wants to participate, he or she should have no illusions about open source/free software. There is no royal road to accomplishing big software projects, and even with luck one needs to struggle and suffer a lot. That also means that the heroes of OSS development deserve their fame, despite all the in-fighting and problems that OSS faces. At the same time it's important to understand that none of them is, or ever was, an angel; they were driven by (always) complex and (often) contradictory motives. We also need to understand that usually one or two central figures catch all the fame that should belong to many more people, some of whom are never mentioned in the mainstream press. And that we do not need a communist-style cult of personality for OSS leaders, who IMHO are to a certain extent "accidental leaders" of the Linux cult.

The pressure was mounting against Linus' exclusive control of the kernel. For example, in a story "Is Linus Killing Linux?", published on Jan 26, 2001 at TechWeb, Paula Rooney questioned the status quo:

... some solution providers, vendors, and industry observers are beginning to question how long one man can steer the evolution of Linux, and whether Torvalds' sole oversight of the kernel, now at version 2.4, is slowing its corporate adoption. While he's not driven by profit motive, the engineer has significant power over the kernel: Linux is a registered trademark of Linus Torvalds himself. Windows, in contrast, is the trademark of Microsoft. They note Torvalds lacks formal accountability for Linux and, as a full-time engineer at chip maker Transmeta Corp. (stock: TMTA), has considerable professional obligations outside his open-source activities. What's more, industry titans such as IBM Corp. (stock: IBM); Compaq Computer Corp. (stock: CPQ); Intel Corp. (stock: INTC); Hewlett-Packard Corp. (stock: HWP); and Oracle Corp. (stock: ORCL) are pouring billions of dollars into developing Linux products and want to exert more influence on the direction of the kernel, based on customer feedback. "We need a full-time leader and a nonprofit organization that can be funded by IBM, Compaq, and Dell and the [Linux] distributors," said Hal Davison, owner and president of Davison Consulting, Sarasota, Fla. Some Linux solution providers view the constantly evolving process of the posting of Linux libraries, patches, and updates to the Internet as inefficient and cumbersome, Davison said. "VARs are reluctant because they don't see a clear channel. They don't see a Microsoft or strong corporate company saying, 'We're going to be here forever,'" he said. Torvalds opposes the notion of corporate interests controlling the destiny of the Linux kernel. However, experts say he'll face pressure from big OEMs and ISVs that are bankrolling the transformation of the technology into a lucrative industry. The Linux market stands to double this year to $4 billion, according to Deutsche Banc Alex. Brown, a Wall Street investment firm. 
OEMs are hopeful but leery about Torvalds' casual indifference to market needs and capitalist concerns. IBM's recent pledge to spend $1 billion to advance Linux commercially in 2001 comes with a no-strings-attached promise today, but observers say that won't last if Linux doesn't pick up steam in the form of revenue and profits. For example, at the LinuxWorld conference in New York, IBM plans to unveil new Linux initiatives and clients, including Shell Oil. "In the early stages of open source, it was more of a charitable affair and developers didn't attach a fee," said George Weiss, an analyst at Gartner. "But the vendors are in it for financial success, and they'll think twice about being charitable while answering to their stockholders." Publicly, blue-chip vendors recognize Torvalds as the lead Linux developer, but note that they aren't beholden to his final nod to carry out their product plans, as they are with Microsoft's Bill Gates. Still, insiders say Torvalds' casual e-mail sign-offs on the kernel carry tremendous weight in the commercial market and down the food chain from OEMs to ISVs, solution providers, and customers. For instance, when Torvalds declared Linux 2.4 finished several weeks ago, only Red Hat opted to ship an upgrade based on the "preproduction" Linux 2.4 kernel. Since then, Linux distributors have begun detailing their product deliverables based on the new kernel. "[Torvalds'] decisions are not ones you'd quickly throw out the window," said Bob Shimp, senior director of database marketing at Oracle, Redwood Shores, Calif., which contributed to Linux 2.4 development. "When he's ready to release the final version, that's when distributors package it up," Shimp said. "Having a little bit of control like that is a good thing. It all boils down to market forces." 
Despite Torvalds' technical reign over Linux, IBM and Compaq have quickly become the industry's de facto Linux leaders, and tensions over the kernel's direction will heighten as market forces intensify, experts say. "I don't believe open source works well for commercial companies because they can't control schedules," said Michael Cusumano, a professor at the Massachusetts Institute of Technology's Sloan School of Management who sits on the board of solution provider NetNumina Solutions. "Software companies try to have regular development cycles. That's how you build a rhythm for a company

"Stop whining about values and grab some cash"

I think a large part of the business models that Linux companies tried to adopt, and for which they eventually paid a huge price, is attributable to venture capitalists' greed. With an explosive combination of venture capital and investment banks, you simply throw some money at something fashionable in order to build something akin to a pyramid scheme, in which you hope to sell off shares to gullible investors during the IPO and get out before the pyramid collapses. And the collapse is partially caused by excessive valuations and by the excessive zeal with which new public companies tried to expand their markets to justify the unrealistic valuations they got during the IPO thanks to subtle manipulations by the underwriter banks. This can be considered a new genre of Greek tragedy, played out on the sunny California soil :-). The ancient gods were bloodthirsty...

Venture capitalists expect you to do your best to inflate the stock value in the post-IPO period, as this is the period in which they plan to get a (huge) return on their investment. And to please them you need to hire 250 or even 500 people even if you understand perfectly well that the business might be sustainable for fewer than 50. This is the reason why so many Linux companies made huge losses during their first year. Imagine a company with a limited market and small revenue which, like a sportsman on steroids, has millions of dollars injected into it to produce quick results no matter what. How do you spend the money without making a loss? Your revenue will not increase overnight, and it might be impossible to grow it beyond a very modest sum. But you spend and spend the money on hiring extra (and unnecessary) people, advertising, marketing, partnerships, acquisitions... Venture capitalists want you to make a quick buck so that they can get out with a profit. How the stock behaves after that point is not very interesting to them.

As Linux became fashionable there was a big temptation to make a quick buck on Linux, and that temptation started a wave of very questionable IPOs. A business is only able to survive if it makes a profit. But the IPO itself can be a (questionable) business model, although not a sustainable one. And the motto of the movement for some time changed to "Stop whining about values and grab some cash". Boosting the pre-IPO share price took precedence over creating real value and real software. Analysts shamelessly hyped the stock of companies with which their investment banks did business. It would be simply myopic not to see the Linux IPO story as part of the tech bubble of the late 1990s, with all its accompanying hype, greed and immorality. As we will see, the post-IPO scandals and lawsuits mark the end of an era, and later the Enron fiasco raised the question of the strength of the fundamentals of the US markets: transparency, accountability and trust.

Actually, a company is only able to direct its own course if it remains free from the excessive influence of the capital markets. And that excessive influence is essentially one of the most important factors that dooms many developing countries' economies, to say nothing of small startups. That means that in a healthy financial climate, financing of the business is provided via growth of revenue from sales, not from selling shares (for simplicity we will ignore tax-avoidance schemes in which shares sold at a discount constitute a major part of employee salaries). How much good can a company do for the open source software movement, and for its owners and employees, within those constraints?

The history of Linux startups in general, and Red Hat in particular, suggests some answers to this question. First of all, let's state frankly that from a business perspective each additional Linux distributor dilutes the value of the other distributors' franchises by diverting some potential customers. So there is not much love lost among the creators of the different commercial Linux distributions.

The first, largely unknown, event in the commercialization of Linux was the sale of the www.linux.com domain by its owner Fred van Kempen for an undisclosed sum (rumored to be up to five million dollars) to VA Research (of future VA Linux IPO fame). That was a nice return on investment which proved that Linux was able to quickly produce millionaires. And Fred van Kempen soon became just an interesting footnote in the history of the crazy Linux IPOs.

In 1998 Red Hat became the undisputed leader among Linux distributors. It got several important investments, including one from IBM (which also invested in other Linux distributors), one from Intel, and one from Novell (which was essentially a blow to Caldera).

The community developers are supposed to build Red Hat's product, while the certifications and vendor endorsements are held back for the high-priced "Red Hat Enterprise Linux" brand. With almost a hundred developers (including Alan Cox), it looked like Red Hat might implicitly control the direction of Linux kernel development by the mere share of brainpower it employs. In this sense, by the mere fact of the existence of Red Hat developers involved in solving the complex problems of running enterprise-style applications like Oracle on the Linux kernel, the role of Linus Torvalds was shrinking. Linus Torvalds cannot pay much attention to Oracle's problems. One sign of this is that he no longer controls the production versions of the kernel, and in some cases it's possible to outsmart Linus by putting into a production version features that he might object to.

But Linus Torvalds is much less democratic than he appears and still drives the configuration management of the kernel with an iron fist, and that pretty much limits Red Hat's possibilities, although, no question, Red Hat ships its own proprietary kernel in its distributions. In late 2000, after Linus sold all his Red Hat shares, he actually slammed RH over the version of the GNU compiler included in v. 7.0, but this was just an accident or, as somebody cynically suggested in Linux Today, "an implicit assumption of guilt for his Midas touch ;-)"

Red Hat's business model is by and large an outsourcing model (with "shared" free developers or "not so shared" part-time developers producing the code that Red Hat sells). As there are less expensive distributions, the revenue stream from selling the software is not reliable; Red Hat depends almost completely on its subscription-based revenue stream, so it needs to close its distribution as much as possible. Clones like Mandrake (which outsells Red Hat on the desktop) are (from a purely business perspective) annoying enough to justify some counteractions.

It's interesting to note that with the complexity of the latest versions of Linux, Red Hat managed to lock people into its distribution without breaking the GPL license; the key is in how they put it all together. If you compile pine on a Red Hat 5.2 machine, the binary won't run on Debian 2.1, although both distributions use glibc6. But if you compile it on the Debian machine, it will run on both platforms. Something about Red Hat's development environment prevents the Red Hat-compiled binary from running on Debian. This power over little things, like where files should be placed and which libc to use, is a considerable power independently of whether the product is GPLed or not ;-). It demonstrates that if a company develops a product on Red Hat, users may have trouble running that product on other platforms. At the same time, many companies feel that they must develop on Red Hat -- most of their customers will be using Red Hat.

Anyway, in late 1998 it looked like Red Hat was close to being perceived as the de facto owner of Linux. This perception created solid preconditions for the successful IPO pyramid scheme. After Red Hat filed for its IPO many expected that it would