The word "physicalization" is ten months old. This January, Rackable Systems launched a strange line of servers which defied all the conventional wisdom of server design by disaggregating larger servers into many smaller ones based on consumer parts, and in the process lowering power and performance density. Ars, among others, expressed skepticism over the strange design decisions, but launches from other major vendors suggest that, for some market segments, the server space is taking a turn in a novel new direction. Ars has covered this trend before, but we now take a more detailed look at it, with a survey of available physicalization offerings, analysis of the reasons for their adoption, and some predictions about the future.

The standard server: a bit unbalanced

Let's start by detailing the status quo a little. The conventional server products currently offered for datacenter use by the major vendors are all much the same: a pair of x86 server-class processors from either Intel or AMD, an appropriate complement of memory, and a handful of hard disks. Essentially the same product can be bought from almost every vendor, typically in 1U but also in 2U and various blade configurations. Double-density products fitting two such servers per rack unit are available from HP and SuperMicro, among others, pushing the density threshold to 16 cores per rack unit and power density above 25kW per cabinet under full load. We'll call this product, and its forthcoming descendants, the standard server.
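Those figures follow from simple arithmetic. Here's a rough back-of-the-envelope check; the per-node power draw and the 42U cabinet height are our own illustrative assumptions, not vendor specifications.

    # Back-of-the-envelope check of the density figures above.
    # The wattage and cabinet height are illustrative assumptions, not vendor specs.
    cores_per_socket = 4          # quad-core Nehalem Xeon
    sockets_per_node = 2          # dual-socket board
    nodes_per_rack_unit = 2       # "twin" half-width nodes, two per 1U
    cores_per_rack_unit = cores_per_socket * sockets_per_node * nodes_per_rack_unit

    watts_per_node = 300          # assumed full-load draw of one dual-socket node
    rack_units_per_cabinet = 42   # assumed standard full-height cabinet
    cabinet_kw = watts_per_node * nodes_per_rack_unit * rack_units_per_cabinet / 1000.0

    print(cores_per_rack_unit)    # 16 cores per rack unit
    print(cabinet_kw)             # 25.2 -- just above 25kW in a fully loaded cabinet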

The standard server is the product of a number of longstanding trends: increasing density, increasing core count, increasing power density, and increasing performance per watt. Some of these trends may have been taken too far; the power density of current Nehalem solutions is high enough to consume the power budgets of many datacenters in a fraction of their space, unnecessarily concentrating their cooling needs in a small area and leaving the rest of the center empty. These days, silicon and power are more expensive than concrete and steel, and some offerings don't adequately reflect this.

Meanwhile, the standard server is a comparatively unbalanced machine: lots of computing power, memory, and network bandwidth concentrated into a few nodes with relatively little storage bandwidth or IOPS capacity. Web serving and many database-backed applications need relatively little processor power, which can leave expensive Nehalem Xeon processors idling en masse. Even for compute-intensive applications, a single OS instance is often no longer enough to effectively use all the resources of a modern Nehalem server.

For these reasons, among others, virtualization has rocked the server space over the last few years, with end users adopting virtualization techniques en masse to meet their various needs. In the Ars Server Room forum, virtualization skeptics are treated like creationist lunatics, so thoroughly has the relatively new technology proven adept, when properly used, at helping to resolve the resource problem above.

A hypothetical database company, stricken with storage servers that have underutilized processing power, might virtualize instances of compute-intensive tasks onto those servers, allowing them to fluidly fill a number of different roles at once. In this way, virtualization has made itself essential to the efficient use of the standard server, and this trend will only continue over time.

Along with the cost of relatively unbalanced performance for many workloads, and of the additional abstraction and load-balancing work needed to rectify it, the standard server also traps the buyer in a sandboxed marketplace of very high margins. Consider the fact that the Xeon X5570 currently sells for almost three times the cost of a practically identical desktop processor, the Bloomfield Core i7 940. This difference can contribute upwards of one third of the total cost of blade servers from large vendors, so anyone who can find a good way of using desktop and laptop components in the datacenter can make a great deal of money mining the margins.
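To put rough numbers on that margin, here is a minimal sketch; the processor prices are approximate 2009 list figures and the blade price is entirely hypothetical, chosen only to illustrate the proportions.

    # Illustrative margin arithmetic; the processor prices are rough approximations
    # and the blade price is a purely hypothetical assumption.
    xeon_x5570_price = 1400       # server-class quad-core Nehalem, approximate list price
    core_i7_940_price = 560       # desktop quad-core Nehalem, approximate list price
    blade_price = 5000            # hypothetical dual-socket blade, fully configured

    premium_per_node = 2 * (xeon_x5570_price - core_i7_940_price)  # ~$1,680 per blade
    premium_share = premium_per_node / blade_price                  # ~0.34 of total cost

    print(premium_per_node, round(premium_share, 2))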

Beyond processors, consumer hardware can offer other cost advantages, alleviating the need for server-class interconnects like Fibre Channel, 10GigE, and InfiniBand by harnessing the "free first gigabit" offered by onboard GigE. Using SATA disks instead of SAS, and other such substitutions, can lower costs further.

At times in the past, it would have been lunacy to try to use consumer hardware for heavy datacenter workloads, because every available FLOP of per-node performance was dearly valuable. The market has changed, though. More and more applications are adopting a clustered, "scale-out" model, as software and network improvements better allow smaller nodes to cooperate on larger tasks. The same forces that drove high-end x86 processors like the Xeon to compete with other architectures are now allowing other x86 processors to compete with the Xeon.

It's a good example of what has been called "IT consumerization." The server market is now wrapping up a transition, begun in the 1990s, from other architectures to x86, driven primarily by the economies offered by x86's prevalence in the wider market. In an extension of this process, the market may soon make another transition, this time to a closer-to-consumer segment within the x86 space, or at the least use the threat of such a transition to force Intel and AMD to take lower margins in the server space.

Physicalization: the theory

The transition to smaller servers built from closer-to-consumer components, with lower-power processors and single-socket designs, has several theoretical advantages. It can:

Lower costs by using consumer components

Rebalance the available performance in an I/O direction to be more suitable for loads like hosting and database apps

Reduce (but not eliminate) the need for virtualization, especially in the near term

Reverse, or at least halt, the trend of unnecessarily high-density servers exhausting power and cooling budgets in a small fraction of available rack space

Allow the datacenter to pack more independent servers into every rack unit, for easier marketing of "dedicated hardware" to Web hosting and other customers

The disadvantage is this: disaggregation is, if not a violation of Moore's law, an unorthodox use of it. As we've covered before, there are good, grounded-in-physics-and-economics reasons why the die-level integration of a standard server, and its multicore design, will be cheaper to produce and more power-efficient than a physicalized alternative. The standard server is standard for a reason: it offers roughly the best available performance per watt and economies of manufacture.

Adventures in physicalization, then, will rest on attempts to efficiently serve those workloads that, like Web hosting, are not suited to the standard server, while emphasizing the cost advantages and finding ways to reduce the hit in performance per watt imposed by the lower level of integration. Let's take a look at the entrants so far, and at one novel entrant that is, thus far, hypothetical.