Let's say that you're Intel, and you spent $5.5 billion in capital expenditures in 2007, much of it on the 45nm transition, and all of it for the purpose of beating rivals at delivering performance-per-watt increases across a range of market segments that spans the computing spectrum from servers to ultraportable devices. What, then, are you supposed to think about Web 2.0, the resurgence of the thin client model, and the popular "cloud computing" notion that users should be able to do almost all of their work and play with nothing but a simple Web browser (maybe running on an ARM-powered web tablet)? Judging by the comments of some of the Intel folks in this past week's Directions Symposium, the chipmaker thinks it stinks.

When we invited the engineers from Intel's Emerging Compute Models group to participate in the Symposium on Monday, I had no idea that they were going to start out by putting the browser-based thin client model in the middle of the floor, and then standing around it in a circle kicking the living daylights out of it. (Okay, that's a bit of an overstatement, but it wasn't pretty.) Check out the following excerpt from one post, for example:

I have also not lost sight of the fact that we (Intel) folk have several axes to grind around the thick versus thin client models and may err to the side of thick in many circumstances (duh). Still, in working in this alternative compute area for over 3 years, I wanted to address some myths. I leave my rear end exposed to the slings and arrows of outraged readers... Myth 2. Web 2.0 will cure the common cold

If you work for Google or Salesforce.com this is the mantra. In actuality both of these companies are moving from a strictly web-based client to a hybrid model that will provide better experience for the user. This gives the appearance that perhaps harnessing some level of compute, graphics and mobility MIGHT just be a good idea. Adobe and others are now building development environments (AIR) that will allow this to happen more easily. So, this pristine Web 2.0 model that was completely backend-based is becoming hybridized through the use of local compute resources. Imagine that! Using a PC to do something more than KVM!

Another Intel post asked about "Generation Y" workers' likely attitudes toward thin clients as they begin to enter the workforce. The question is whether a generation that grew up with the Internet will be happy with only the Internet, or if they'll want more flexibility (i.e., a "fatter" client). And in another, an Intel engineer ran the numbers (average power for thin vs fat clients, average usage and standby hours, number of thin clients/server, etc.) and suggested that, though the thin client plus server model currently enjoys a slight power savings advantage over a network of rich clients, this delta will shrink because of power-saving client technologies like Atom.
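For a sense of how that kind of calculation shakes out, here's a minimal back-of-the-envelope sketch in Python. Every number in it (client wattages, usage hours, consolidation ratio) is my own placeholder assumption for illustration, not a figure from the Intel post:

```python
# Back-of-the-envelope power comparison: thin clients + server vs. rich clients.
# Every figure below is a placeholder assumption for illustration only,
# not a number from the Intel post.

NUM_USERS = 100
ACTIVE_HOURS = 8          # hours/day a client is in use (assumed)
STANDBY_HOURS = 16        # hours/day a client idles (assumed)

# Thin-client model: low-power terminals plus a shared server.
THIN_ACTIVE_W = 15        # assumed draw of a thin terminal in use
THIN_STANDBY_W = 2
USERS_PER_SERVER = 25     # assumed consolidation ratio
SERVER_W = 400            # assumed server draw; servers run 24/7

# Rich-client model: full desktops, no shared back-end server.
RICH_ACTIVE_W = 60        # assumed desktop draw; Atom-class parts push this down
RICH_STANDBY_W = 4

def daily_wh(active_w, standby_w):
    """Watt-hours per client per day."""
    return active_w * ACTIVE_HOURS + standby_w * STANDBY_HOURS

thin_total = (NUM_USERS * daily_wh(THIN_ACTIVE_W, THIN_STANDBY_W)
              + (NUM_USERS / USERS_PER_SERVER) * SERVER_W * 24)
rich_total = NUM_USERS * daily_wh(RICH_ACTIVE_W, RICH_STANDBY_W)

print(f"thin + server: {thin_total / 1000:.1f} kWh/day")
print(f"rich clients:  {rich_total / 1000:.1f} kWh/day")
```

With these made-up inputs the thin-client model comes out only slightly ahead (about 53.6 kWh/day versus 54.4 for the example fleet), and it's easy to see how shaving the rich client's active draw with an Atom-class part erases the delta entirely.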

Even though we're changing the main Symposium topic on Monday from "emerging compute models" to vPro (with a focus on client manageability and security issues), I can't let this subject wind down without pointing out that there's actually a fascinating "big picture" context to this past week's discussion, one that merits a lot more attention than it has yet received.

Load balancing on the power grid

So what beef does Intel really have against the current Web 2.0, "datacenter plus thin client" model? I think the answer is a bit more nuanced than the simple "they spent $5.5 billion in capex last year and don't want to see the desktop and mobile processor portion of it go to waste" explanation that I started this post with. The fundamental challenge that Intel faces in a computing market that's dominated by massive datacenters is that power constraints will, for the foreseeable future, limit the number of integrated transistors that you can pack into (and therefore sell into) a single datacenter. Even at current transistor counts, most datacenters have large empty regions in them because you just can't get enough power out of the grid to run enough transistors (and, to be fair, hard disks, which are a bigger problem) to fill a modern datacenter.
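To see why the grid, and not the floor plan, is the binding constraint, consider a toy power-budget calculation. Here's a minimal sketch in Python; the utility feed, PUE, per-server draw, and floor capacity are all illustrative assumptions on my part, not figures from Intel:

```python
# Why floor space isn't the limit: a toy datacenter power-budget calculation.
# All numbers are illustrative assumptions, not data from the article.

GRID_FEED_MW = 10          # assumed utility feed to the facility
PUE = 2.0                  # assumed power usage effectiveness (cooling, losses)
SERVER_W = 500             # assumed draw per fully loaded server
FLOOR_CAPACITY = 40_000    # assumed number of servers the floor could hold

it_power_w = GRID_FEED_MW * 1_000_000 / PUE
powerable_servers = int(it_power_w / SERVER_W)

print(f"servers the grid feed can power: {powerable_servers}")
print(f"servers the floor can hold:      {FLOOR_CAPACITY}")
print(f"floor left empty: {1 - powerable_servers / FLOOR_CAPACITY:.0%}")
```

Under those assumptions the feed can power only a quarter of the servers the floor could physically hold, which is exactly the kind of empty-region picture described above.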

So datacenter floor space may be theoretically infinite, but the power grid acts as an external economic and physical constraint that places a hard limit on the growth of the market for transistors in any one local datacenter, and this depresses the global market for transistors that datacenters represent.

The solution, then, is to spread the load around by moving as many transistors as possible off of a single point on the power grid (the datacenter) and connecting them to all of the other points on the grid that still have plenty of spare capacity (homes, schools, offices).

This simple power-optimization and load-balancing problem is why Intel has to keep selling rich clients. The datacenter market alone just doesn't have the capacity to absorb enough 45nm and 32nm transistors to make the massive investment in fabricating them worthwhile, even if you factor in a concomitant proliferation of (low-transistor-count) thin clients. So to have a robust market for transistors at this level of integration, you have to keep selling more and more of them across a wider, more broadly distributed swath of the power grid.

App streaming: you can never be too rich, but you can be too thin

I've argued so far that Intel has a significant power-related demand-side market constraint in a datacenter-centric world, and that the company would like to continue selling fat clients (they use the more politically correct "rich clients"), but clearly this does not mean that Intel plans to miss out on the datacenter build-out. No, the company wants to have it both ways: lots of Intel inside datacenters, and lots of Intel rich clients connecting to those datacenters via Intel-made network hardware (both client-side hardware and infrastructure hardware like WiMAX).

But if you plan to sell rich clients and datacenter hardware, then you need a computing model that makes full use of both ends of the network pipe. Enter application and OS streaming.

The application and OS streaming models are relatively resource-intensive, and they're designed to be. These models, in which a full-sized OS and/or application is streamed, part by part, over the network as it's loaded, make full use of the transistors on both the server and client sides, and they give Intel the best of both worlds: the company gets to sell the back-end hardware and the client hardware, and all of that hardware runs the full, awesome bulk of the legacy x86 software stack... which means that ARM, PowerPC, and MIPS, the low-power client chips that are perfectly capable of running a browser and doing the Web 2.0 thing, are now out of the picture.
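To make the "streamed part by part as it's loaded" idea concrete, here's a toy sketch in Python of a demand-fetched application image with a local block cache. The block size, transport, and image layout are all my own assumptions for illustration; real streaming products differ in the details:

```python
# Toy sketch of the on-demand streaming model described above: the client
# fetches blocks of an application image only when execution first touches
# them, caching each block locally. An illustration of the general idea,
# not any specific Intel or vendor product.

BLOCK_SIZE = 64 * 1024  # bytes; arbitrary assumption

class StreamingImage:
    def __init__(self, fetch_block):
        # fetch_block(index) -> bytes stands in for whatever transport a
        # real deployment uses (HTTP range requests, a custom protocol, etc.).
        self._fetch_block = fetch_block
        self._cache = {}          # block index -> bytes
        self.blocks_fetched = 0

    def read(self, offset, length):
        """Read a byte range, pulling any missing blocks over the network."""
        data = bytearray()
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        for index in range(first, last + 1):
            if index not in self._cache:
                self._cache[index] = self._fetch_block(index)   # network hit
                self.blocks_fetched += 1
            data += self._cache[index]
        start = offset - first * BLOCK_SIZE
        return bytes(data[start:start + length])

# Simulated 1 MB application image served block-by-block.
image = bytes(range(256)) * 4096
app = StreamingImage(lambda i: image[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE])

app.read(0, 4096)        # "launch": only the first block crosses the wire
app.read(0, 4096)        # warm cache: no new fetches
print(f"blocks fetched so far: {app.blocks_fetched} of {len(image) // BLOCK_SIZE}")
```

Even the toy makes the point: the client pulls only the blocks it actually touches, but it still needs enough local compute and memory to execute the full stack once those blocks arrive, which is precisely why this model favors rich clients.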

Is Intel barking up the right tree?

As something of a thin client skeptic myself, I'm not the best person to engage Intel on this issue, but our readers and featured experts have joined the fray in that regard. Of course, I'll note that I'm also not convinced that OS and app streaming are the future of the enterprise, either. Like most enterprise computing trends, the forces that will either advance the streaming model or halt it are all related to economics, manageability, security, and old-fashioned bureaucratic factors like inertia (e.g., legacy applications) and rear-end-covering (e.g., "the IBM decision").

So I don't know what the future holds in the thin vs. fat client race, but I do know this: if you're not following the dialogues at the Directions Symposium, then you're missing out on some great insights from many of the folks (both Intel and non-Intel) who are making the world that enterprise customers will one day live in.