John Sontag has seen the future—or at least Hewlett-Packard’s version of it. Sontag, vice president and director of Systems Research at HP Labs, has been in charge of the team developing “The Machine,” an experimental piece of computing hardware that HP executives hope will be the template upon which the future of networked computing is built. In an interview with Ars, Sontag explained how the core technologies of The Machine—memristor-based memory and low-cost silicon-to-optic interfaces—will change the shape of computing.

The Machine is a hyper-dense collection of computing hardware that could be used in anything from a data center to a mobile device. It has terabytes of storage and a much smaller power draw than today’s computing devices—all because of memristor-based memory and optical interconnects.

Memristor technology has been around for decades, at least in theory. The concept of the memristor (a portmanteau of “memory” and “resistor”) was proposed in 1971 by University of California, Berkeley professor Leon Chua as a theoretical fourth kind of passive electronic component.

Electronics textbooks talk about only three passive components in circuits: resistors, inductors, and capacitors. The memristor acts like a resistor in that it impedes the current passing through an electronic circuit. But the amount of resistance it presents depends on the current that has previously passed through it: how much was applied and in which direction.

This means that the resistance of the memristor can be manipulated, or “written” to, like memory—flipping it from a binary 0 to 1 or back again, with the application of different amounts of current. The memristor stores that information as an electrical resistance even when no electricity is running across the circuit.
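The behavior described above can be captured in a toy model. This is an illustrative sketch only, not HP's device physics: the resistance values, the state-update rule, and the read threshold are all invented for the example. It shows the two properties the article describes: the state moves with the direction of applied current, and it persists with no power applied.

```python
class ToyMemristor:
    """Idealized memristor: resistance drifts between a low bound
    (R_ON) and a high bound (R_OFF) depending on the net charge that
    has flowed through it, and the state persists without power."""

    R_ON, R_OFF = 100.0, 16_000.0  # ohms; arbitrary illustrative values

    def __init__(self):
        self.w = 0.0  # internal state in [0, 1]; 0 = high resistance

    def apply_current(self, amps, seconds):
        # Net charge moves the state; the direction of current matters,
        # so a reverse current undoes a forward write.
        self.w = min(1.0, max(0.0, self.w + amps * seconds * 50.0))

    @property
    def resistance(self):
        return self.R_OFF + (self.R_ON - self.R_OFF) * self.w

    def read_bit(self):
        # Reading compares resistance against a midpoint threshold;
        # low resistance encodes 1, high resistance encodes 0.
        return 1 if self.resistance < (self.R_ON + self.R_OFF) / 2 else 0
```

Driving current forward "writes" a 1; driving it in reverse writes a 0; and because `w` is just stored state rather than something that must be refreshed, the bit survives between operations.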

RAM that never forgets

That was the theory, at least. It wasn’t until the last few years that memristors became a real thing.

In 2008, HP Labs senior fellow R. Stanley Williams developed the first functional memristor—a bi-level titanium dioxide film. Because of their passive nature, memristors can be used as a form of nonvolatile memory—they can continue to store data after power is removed, like a magnetic disk or Flash memory, but they can be addressed like RAM.

There is some controversy over whether what Williams developed is actually a memristor because the concept of a memristor itself is seen by some as a violation of the laws of non-equilibrium thermodynamics. But whatever the device actually is, it functions in a way that is completely different from traditional memory.


“The simplest way to think about it is this—take a DRAM DIMM out, and put a memristor DIMM in,” said Sontag. “You now have another pool of memory that’s denser and nonvolatile. It’s a new class of memory—the consequence for operating systems is that moving stuff around from I/O devices [to and from disk] becomes unnecessary.”

There’s another revolutionary aspect to memristor memory—the number of bits that can be fit into memristor-based memory is much larger than the capacity of dynamic RAM memory elements of the same size. Memristor memory is “between 64 and 128 times denser than DRAM,” Sontag said, “which makes it even denser than disk drives.” And because of that, memristors are a natural fit for systems-on-a-chip or other embedded storage. “We might just bury that memory within a processor socket and have something that sometimes looks like a memory controller and sometimes does processing,” Sontag said.

There’s just one small problem with swapping out all the RAM in today’s computers, which loses whatever it is storing when a system is shut down or reset: nonvolatile, passive memristor RAM requires a rethinking of how operating systems and software use memory.

Some enterprise systems already use nonvolatile RAM based on battery-powered DRAM to help prevent data loss in the event of a power outage. But memristor RAM is an entirely different thing—it could theoretically allow for computers to start processing again in the exact same state they were in before they were disconnected from power. That would make “instant on” devices much more power-efficient, but it would also completely change how operating systems deal with system resets and powering down—they would have to figure out what needed to be kept in memory and what needed to be cleared before restarting. Errors could result if certain areas in memory weren't cleared.

Sontag said that programming languages will also have to change. “In the very long term,” Sontag said, “we have to change the memory semantics of programming languages to make it possible to say what is stored in nonvolatile memory and what isn’t. We need to come to an agreement with the industry about the semantics for nonvolatile RAM.”

As part of an effort to create that agreement, HP is turning to the open source community. “We have a number of approaches for what stays in nonvolatile memory that we’ll sort through in the next year and then take it to the Linux community,” said Sontag.

The memory cloud

The first target of opportunity for memristor RAM, however, won’t require a total overhaul of computing—large in-memory data stores that could take the place of solid-state disk storage and existing in-memory data stores.

“We already have in-memory disks and filesystems that we emulate,” Sontag said. “They use DRAM and just happen to be volatile. You can point an in-memory virtual disk at memristor, and now it is nonvolatile—it preserves the semantics of all the software running it. Software still sees it as a block device—just much quicker response time. Then we’ll work our way into block stores and databases, optimized for this space—flatten their memory hierarchy, and manage how they allocate store in nonvolatile memory.”
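The key point in that quote is that the block-device interface doesn't change, only what backs it. The minimal sketch below (invented for this article, not HP's implementation) shows that interface over a flat memory buffer: today the buffer would be volatile DRAM, but backed by memristor memory the same read/write-by-block semantics would simply become persistent, with no change visible to the software above it.

```python
class MemBlockStore:
    """A minimal block-device-style interface over a flat in-memory
    buffer. Filesystems and databases address it by logical block
    number, exactly as they would a disk."""

    BLOCK = 512  # bytes per block, matching a traditional disk sector

    def __init__(self, blocks):
        self.buf = bytearray(blocks * self.BLOCK)

    def write_block(self, lba, data):
        assert len(data) == self.BLOCK
        off = lba * self.BLOCK
        self.buf[off:off + self.BLOCK] = data

    def read_block(self, lba):
        off = lba * self.BLOCK
        return bytes(self.buf[off:off + self.BLOCK])
```

Swapping the `bytearray` for a region of nonvolatile memory changes nothing about this interface, which is why Sontag describes in-memory virtual disks as the easiest first target.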

That could lead to changes in how cloud giants like Google and Facebook develop their software—making it a lot less complicated. For example, memristor memory could help remove some of the hassle of dealing with large structured and unstructured databases—particularly the parts that revolve around keeping just the right information in cache. “The challenge with today’s technology is that caches are good if they have a good hit rate,” Sontag said, “but as soon as your workflow starts to change, the cache becomes more and more inefficient.”

Sontag envisions memristor memory eventually finding its way throughout the whole cloud, moving data closer to the user: first into edge caches similar to those used by services like Cloudflare, and then down to the endpoints themselves.

“The kinds of things we’re doing with nonvolatile memory match up with the mantra of fast startup and low power,” Sontag said. “In the past, mobile applications have been built around a scarcity of memory, power, and computing. Now, networking is the main constraint, and we’re looking at how to approach the economics of machines that are driven more by the mobility than the power envelope.” System-on-a-chip devices could have terabytes of nonvolatile memory built into them, vastly increasing the local storage of mobile devices and allowing network routers and other devices to act as data caches. “We believe we’re moving toward a world where we need to have all the data relevant to what you’re doing at your fingertips so you can use it at the speed of your decision making,” Sontag said.

That world will require a rethinking of the boundaries between “memory” and “storage,” where they reside, and how they’re accessed. The silicon-to-optical interfaces developed as part of the Machine effort could lead to the further deconstruction of the computer foreseen by Facebook’s efforts and the Open Compute Project. But HP’s vision is one of a “distributed mesh computing” world, with Machine devices sharing their resources in a much denser data center—compressing what now fills a small data center into one or two racks.

The memristor future isn’t that far off—and some parts of it will arrive very soon. HP will start delivering memristor-based RAM DIMMs in 2016, and the Machine itself is expected to be available as a product by 2019. But within the next year, HP will release an open-source Machine OS software developer’s kit and start producing prototypes for collaboration with software vendors.