Persistent memory holds a lot of promise: what's not to like about vast amounts of directly-attached memory that remembers its contents over a power cycle? For some years we have been told that large persistent-memory arrays are coming; now it seems that they are about to arrive. According to Matthew Wilcox, who spoke on the topic at linux.conf.au 2016, Intel wants persistent memory to become a regular platform feature. It will start by shipping server platforms supporting 6TB arrays in 2017. Matthew's question was: how will we make use of that persistent memory?

There are a lot of ideas out there, he said, many of which have been promoted by academics — most of whom are not greatly concerned about practicality. Start, for example, with the idea of total system persistence, where the entire system can be turned off at any time and, when powered back on, will simply pick up where it left off. The problem here is that the CPU caches are not persistent, and there is no easy way to know when all writes to main memory have completed. Whole-system persistence is a "delightful" idea, but it is not something that can be done with today's hardware.

Stepping back a bit, one can try for application persistence — using persistent memory to make cheap application snapshots. Unfortunately, the cache problem exists here as well.

Perhaps what needs to be done is to completely redesign the operating system, creating a new system designed around persistent memory from the beginning. The "new" ideas that proponents of this idea bring up tend to have a familiar ring to them: microkernels, nanokernels, unikernels, etc. Matthew suggested that some people see new technology as an opportunity to push the same ideas they have been promoting for years. He wishes them the best, but is looking for something that will work today.

Some developers at Intel created a new filesystem for persistent memory called pmfs. They are, he said, smart people, but they are not Linux kernel developers. As a result, the work they did is not suitable for production settings.

What can be done today is to package up a persistent-memory array and export it as a fast block device. We have support for that in the kernel now, but, he said, it doesn't feel like a general-purpose solution. To get to that more general solution, he set out to make some small modifications to existing filesystems so that they could make use of persistent memory; he allowed that it was perhaps good that Dave Chinner was unable to attend the conference, since Dave might just quibble with Matthew's notion of "small". In any case, Matthew started with the existing execute-in-place support, rewrote it, and ended up with the subsystem now known as DAX.

Beyond filesystems

There is more to proper persistent-memory support than getting the filesystems to work, though. Down at the CPU level, Intel's designers have created a new set of instructions for use with persistent memory. The intent is to allow developers to initiate the flushing of specific cache lines; it is not possible to know when a write completes, but the operation can at least be started when needed. The CLFLUSH instruction has existed for years; its job is to remove a line of data from the cache. It is not optimal for persistent-memory applications, though, because it serializes the instruction stream, hurting performance.

That shortcoming will be addressed with the CLFLUSHOPT instruction, which does not perform that serialization. In the future, there will also be CLWB, which starts writing out the cache line but does not remove the data from the cache. Finally, the PCOMMIT instruction will ensure that all data written prior to the last store fence is persistent. It may not be written to a specific persistent-memory array, but it will be in persistent storage somewhere.

There is still the question of how application programmers should make use of persistent memory. One option would be to create a special-purpose programming language that natively understands persistence, but that, he said, is not particularly interesting. A bit more practical might be to modify the runtime virtual machines for languages like Python to get them to issue the new instructions when needed. That would avoid the need for code changes in general, but is still not entirely interesting to "dinosaurs" like him, who want to program in languages like C.

For such people, there is a whole set of new libraries available. At the lowest level, libpmem simply provides easy access to the new instructions, but adds little otherwise. The libvmmalloc library supplies a replacement for malloc() that can allocate from persistent memory. It can also be used by non-persistent applications; indeed, applications can be linked against this library and be unaware that they are using persistent memory at all. For developers who are willing to code specific persistent-memory awareness into their applications, there is libvmem. It provides better performance on persistent memory, but is still mainly intended for memory that is used as if it were volatile.

Developers wanting utilities to help with the storage of persistent memory can use libpmemblk, which provides access to atomic blocks of memory. Log-oriented applications, which append data to an existing structure, can use libpmemlog to manage logs easily in persistent memory. But the interesting one, he said, is libpmemobj, which provides a transactional object store on top of persistent memory. It provides locking that persists across system reboots, type safety, and data structures like doubly linked lists and a key-value store. It can handle replication of data across files. And, for those who are so inclined, C++ support has been added in recent months.

Quite a bit of functionality has been built on these libraries, he said. There is, for example, a MySQL storage engine that uses them. Interested developers can go to pmem.io, which hosts a blog, pointers to the source, and other information. There is also intel.com/nvm, which has mostly marketing material about the upcoming persistent-memory hardware. It appears that, after years of hype, the hardware will soon be available, and the software support will be there for it as well.

The video of this talk is available for those wanting more information.

[Your editor thanks LCA for assisting with his travel expenses.]


Scratching an itch is a recurring theme in presentations at linux.conf.au. As the open-hardware movement gains strength, more and more of these itches relate to the physical world, not just the digital. David Tulloh used his presentation [WebM] on the “Linux Driven Microwave” to discuss how annoying microwave ovens can be and to describe his project to build something less irritating.

Tulloh's story began when he obtained a microwave oven, admittedly an inexpensive one, with a user interface even worse than the norm. Setting the time required pressing buttons so hard that the microwave tended to get pushed away — a fact that was elegantly balanced by the door handle requiring a sufficiently hard tug to return the oven to its original position. While this is clearly an extreme case, Tulloh lamented that microwave ovens really hadn't improved noticeably in recent decades. They may have gotten a little cheaper and gained a few features that few people could use without poring over the instruction manual — the implied contrast to smartphones, which are widely used with little instruction, was clear.

This microwave oven was not a lost cause — it gave its life to the greater good and became the prototype for an idea that Tulloh hopes to turn into a crowd-funded project if he can find the right match between features and demand: a Linux-driven microwave oven.

Adding novelty

Adding a smartphone-like touchscreen and a network connection and encouraging a community to build innovative apps such as recipe sharing are fairly obvious ideas once you think to put “Linux” and “microwave oven” together, but Tulloh's vision and prototype lead well beyond that. Two novel features that have been fitted are a thermal camera and a scale for measuring weight.

The thermal camera provides an eight-by-eight-pixel image of the contents of the oven with a precision of about two degrees. This is enough to detect if a glass of milk is about to boil over, or if the steak being thawed is in danger of getting cooked. In either case, the power can be reduced or removed. If appropriate, an alert can be sounded. This would not be the first microwave to be temperature sensitive — GE sold microwave ovens with temperature probes decades ago — but an always-present sensor is much more useful than a manually inserted probe, especially when there is an accessible API behind it.

The second innovation is a built-in scale to weigh the food (and container) being cooked. Many recipes give cooking-time guidance based on weight, and some microwave ovens allow the weight to be entered manually so that the oven can do the calculation. With a built-in scale, that can become automatic. Placing a scale reliably under the rotating plate typical of many microwave ovens would be a mechanical challenge that Tulloh did not think worth confronting. Instead, his design is based on the “flat-plate” or “flat-bed” style of oven — placing a sensor at each of the four corners is mechanically straightforward and gives good results.

Once you have these extra sensors — weight and temperature — connected to a suitable logic engine, more interesting possibilities can be explored. A cup of cold milk from the fridge will have a particular weight and temperature profile, within a modest margin of error. Tulloh suggested that this situation could be detected and some relevant options, such as “Boil” or “Warm”, could be offered for easy selection (a mock-up of the interface is at right; a clickable version is here). Simple machine learning could extend this to create a personalized experience. It would be easy to collect a history of starting profiles and cooking choices; when those patterns are detected, the most likely cooking choices could be made the easiest to select.

Overcoming staleness

Beyond just new functionality, Tulloh wants to improve the functionality that already exists. Door handles as stiff as on Tulloh's cheap microwave may not be common, but few microwave oven doors seem designed to make life easy for people with physical handicaps. There are regulatory restrictions, particularly in the US, that require the oven to function only if there is positive confirmation that the door is actually shut. This confirmation must be resilient against simple fraud, so poking a stick in the hole must not trick the oven into working with the door open. In fact, there must be two independent confirmations and, if they disagree, a fuse must be blown so that a service call is required. Tulloh believes that a magnetic latch would provide much greater flexibility (including easy software control) and that magnetic keying similar to that used in a magnetic keyed lock would allow the magnetic latch to pass certification.

Another pain point with microwave ovens is the annoying sounds they make. Tulloh has discarded the beeper and hooked up a speaker to the Banana Pi that is controlling his prototype. This allows for more pleasant and configurable alerts as well as for advice and guidance through a text-to-speech system. Adding a microphone for voice control is an obvious next step.

Many microwave ovens can do more than just set a time and a power level — they provide a range of power profiles for cooking, warming, defrosting, and so on. Adding precise temperature sensing will allow the community to extend this range substantially. Andrew Tridgell asked from the audience whether tempering chocolate — a process that requires very precise temperature control — would be possible. Tulloh had no experience with the process and couldn't make promises, but thought it was certainly worth looking into. Even if that doesn't work out, it shows clear potential for value to be gained from community input.

Availability

Tulloh would very much like to get these Linux-enabled microwave ovens out into the world to create a community and see where it goes. Buying existing ovens and replacing the electronics is not seen as a viable option. The result would be ugly and, given that a small-run smart microwave will inevitably cost more, potential buyers are going to want something that doesn't look completely out of place in their kitchen.

Many components are available off-the-shelf (magnetron, processor board, thermal sensor) and others, such as a USB interface for the thermal sensor, are easily built. Prototype software is, of course, already available on GitHub. The case and door are more of a challenge and would need to be made to order. Tulloh wants to turn this adversity into an opportunity by providing the option for left-handed microwave ovens and a variety of colors.

A quick survey of the audience suggested that few people would hastily commit to his target price of $AU1000 for a new, improved, open oven. Whether a bit more time for reflection and a wider audience might tip the balance is hard to know. The idea is intriguing, so it seems worth watching Tulloh's blog for updates.
