Two and a half years ago, HP (now Hewlett Packard Enterprise, following the company split) revealed a new, revolutionary computer architecture it dubbed “The Machine.” This new computing platform would combine cutting-edge and still-unproven technologies like memristors, silicon photonics, and truly massive amounts of addressable memory. HPE was forced to dial back some of its ambition when it proved too difficult to bring the entire project to market all at once, but it refused to give up on the idea of what it calls “Memory-Driven Computing.”

Today, HPE is announcing that it has demonstrated the major components of this new type of system, albeit in prototype form. The Machine as currently constituted consists of:

Compute nodes accessing a shared pool of Fabric-Attached Memory

An optimized Linux-based operating system (OS) running on a customized System on a Chip (SOC)

Photonics/optical communication links, including the new X1 photonics module, now online and operational

New software programming tools designed to take advantage of abundant persistent memory
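The last item, programming tools for abundant persistent memory, can be sketched in miniature. The snippet below is an analogy only: it simulates persistence with an ordinary memory-mapped file, where a real system would map byte-addressable NVM directly (for example via DAX mappings and libraries such as PMDK). The function names and file path are illustrative, not HPE's actual API.

```python
import mmap
import os
import struct
import tempfile

# Sketch only: an ordinary file stands in for byte-addressable NVM.
# On real persistent-memory hardware, loads and stores would hit the
# media directly instead of going through the page cache.

def persist_value(path: str, value: int) -> None:
    # Size the backing file so it can be mapped.
    with open(path, "wb") as f:
        f.write(b"\x00" * mmap.PAGESIZE)
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), mmap.PAGESIZE) as mm:
            mm[:8] = struct.pack("<Q", value)  # store directly into mapped memory
            mm.flush()  # analogous to flushing CPU caches out to the NVM media

def read_value(path: str) -> int:
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), mmap.PAGESIZE, access=mmap.ACCESS_READ) as mm:
            return struct.unpack("<Q", mm[:8])[0]

path = os.path.join(tempfile.gettempdir(), "fam_demo.bin")
persist_value(path, 42)
print(read_value(path))  # 42 -- the value survives unmapping, as data on NVM survives power loss
```

The point of the model is that data structures live in the memory itself, with no separate serialize-to-storage step.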

HPE has previously shown off some of these components, like its X1 silicon photonics module. The X1 module is capable of transferring data at up to 1.2Tbps (150GB/s of bandwidth) over a 30-50 meter distance. HPE has also demonstrated silicon photonics technology that can move data up to 50 kilometers (31 miles) at 200Gbps. HPE’s major goal with The Machine is to create a system in which non-volatile memory (NVM) serves as a true DRAM replacement, offering equivalent or better latency and drastically reduced power consumption, tied together with low-latency optical interconnects.
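The 1.2Tbps figure and the 150GB/s figure are the same number in different units, which is easy to verify: divide bits per second by 8 to get bytes per second, then scale to decimal gigabytes as networking specs do.

```python
# Check the article's conversion from line rate (bits) to throughput (bytes).
bits_per_second = 1.2e12                         # 1.2 terabits per second
bytes_per_second = bits_per_second / 8           # 8 bits per byte
gigabytes_per_second = bytes_per_second / 1e9    # decimal GB, per networking convention
print(gigabytes_per_second)  # 150.0
```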

Customers will still have the option to deploy The Machine as a conventional system, but HPE’s plan is to offer huge pools of NVM that can be shared across many SoCs. While the diagrams below only refer to CPUs, there’s no reason this model couldn’t be extended to other types of accelerators — vector processors like Intel’s Xeon Phi or GPUs from AMD and Nvidia could at least theoretically be paired with HPE’s new architecture. The following slideshow steps through some of HPE’s design elements, and the benefits it expects to offer with The Machine compared to traditional systems.

When HPE announced in mid-2015 that it would rebuild The Machine around conventional technology, the news seemed to imply that the project’s groundbreaking potential had been largely buried beneath financial realities and the slow pace of innovation that characterizes modern semiconductor development. Today, I have to acknowledge that this dismissal was premature. HPE may not be planning to commercialize memristor technology in the near term, but The Machine is more than a conventional server with a huge amount of RAM, and the company’s work on low-latency non-volatile memory and optical interconnects could have significant implications for HPC and Big Data problems for years to come. The Machine won’t debut as a single system with all-new technologies, but should transition to new memory standards as they become available. This might make it a touch less exciting at launch, but should lay the groundwork for a long-term virtuous cycle of improved performance and reduced power consumption, up to and including (eventually) exascale-class deployments.