"What The Hardware Does" is not What Your Program Does: Uninitialized Memory

This post is about uninitialized memory, but also about the semantics of highly optimized “low-level” languages in general. I will try to convince you that reasoning by “what the hardware does” is inherently flawed when talking about languages such as Rust, C or C++. These are not low-level languages. I have made this point before in the context of pointers; this time it is going to be about uninitialized memory.

The trigger for this post is the deprecation of mem::uninitialized() with Rust 1.36, but the post is just as relevant for C/C++ as it is for Rust.

The pitfalls of uninitialized memory

When you allocate some memory (whether on the stack or the heap) and do not initialize it, what are its contents? We call this “uninitialized memory”, but what exactly does that mean, and what happens when it gets read? For many languages, this question is inconsequential: in Java, Haskell, OCaml, and generally in all safe languages, uninitialized memory cannot be read; the type system prevents it. The same is true in safe Rust, actually. However, in unsafe Rust, as well as in inherently unsafe languages such as C and C++, not having to initialize memory can be an important optimization, so this is a very relevant question.

The C and C++ specifications (without going into all the detail here) say that uninitialized memory is “indeterminate”, but the details of what exactly that means are unclear. Many people will tell you that “uninitialized memory contains a random bit pattern”. This is wrong. They might also talk about the system allocator or how the OS kernel allocates pages for the program to use. That is just irrelevant information.

Here is an example to demonstrate why “random bit pattern” cannot describe uninitialized memory:

```rust
use std::mem;

fn always_returns_true(x: u8) -> bool {
    x < 150 || x > 120
}

fn main() {
    let x: u8 = unsafe { mem::uninitialized() };
    assert!(always_returns_true(x));
}
```

always_returns_true is a function that, clearly, will return true for any possible 8-bit unsigned integer. After all, every possible value for x will be less than 150 or bigger than 120. A quick loop confirms this. However, if you run the example, you can see the assertion fail.
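The “quick loop” mentioned above is easy to write down; it checks every one of the 256 possible values:

```rust
fn always_returns_true(x: u8) -> bool {
    x < 150 || x > 120
}

fn main() {
    // Exhaustively check the claim for every possible u8 value.
    for x in 0u8..=255 {
        assert!(always_returns_true(x));
    }
    println!("holds for all 256 values");
}
```

And yet the compiled example fails that very assertion, because the uninitialized x is not any of these 256 values.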

What is uninitialized memory?

How is this possible? The answer is that, in the “abstract machine” that is used to specify the behavior of our program, every byte in memory cannot just have a value in 0..256 (this is Rust syntax for a left-inclusive right-exclusive range), it can also be “uninitialized”. Memory remembers if you initialized it. The x that is passed to always_returns_true is not the 8-bit representation of some number, it is an uninitialized byte. Performing operations such as comparison on uninitialized bytes is undefined behavior. As a consequence, our program has undefined behavior, so we should not be surprised that it acts “weirdly”.

Of course, there is a reason for this undefined behavior; there is a reason the “abstract machine” is defined the way it is. Compilers don’t just want to annoy programmers. Ruling out operations such as comparison on uninitialized data is useful, because it means the compiler does not have to “remember” which exact bit pattern an uninitialized variable has! A well-behaved (UB-free) program cannot observe that bit pattern anyway. So each time an uninitialized variable gets used, we can just use any machine register—and for different uses, those can be different registers! In the case of our example, the program actually compares such an “unobservable” bit pattern with a constant, so the compiler constant-folds the result to whatever it pleases. Because the value is allowed to be “unstable”, the compiler does not have to make a “consistent choice” for the two comparisons, which would make such optimizations much less applicable. So, one time we “look” at x the compiler can pretend it is at least 150, and then when we look at it again it is at most 120, even though x did not change. That explains why our compiled example program behaves the way it does. This LLVM document gives some more motivation for “unstable” uninitialized memory.

When thinking about Rust (or C, or C++), you have to think in terms of an “abstract machine”, not the real hardware you are using. Imagine that every byte in memory is either initialized to some value in 0..256, or uninitialized. You can think of memory as storing an Option<u8> at every location. When new memory gets allocated for a local variable (on the stack) or on the heap, there is actually nothing random happening, everything is completely deterministic: every single byte of this memory is marked as uninitialized. Every location stores a None. (In LLVM, this None corresponds to poison, which has the potential to replace undef entirely.)
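To make this concrete, here is a toy sketch of that model (the names are mine, this is not a real API): memory is a Vec<Option<u8>>, fresh allocations are deterministically all-None, and the model's stand-in for undefined behavior is a panic.

```rust
// Toy model of abstract-machine memory: every byte is an Option<u8>,
// where None plays the role of "uninitialized".
struct AbstractMemory {
    bytes: Vec<Option<u8>>,
}

impl AbstractMemory {
    // Fresh memory is deterministic: every byte starts out as None,
    // not as some random bit pattern.
    fn allocate(size: usize) -> AbstractMemory {
        AbstractMemory { bytes: vec![None; size] }
    }

    fn write(&mut self, addr: usize, val: u8) {
        self.bytes[addr] = Some(val);
    }

    // In the real abstract machine, reading an uninitialized byte for
    // its value is UB; this model makes that an explicit panic.
    fn read(&self, addr: usize) -> u8 {
        self.bytes[addr]
            .expect("undefined behavior: read of uninitialized byte")
    }
}

fn main() {
    let mut mem = AbstractMemory::allocate(4);
    mem.write(0, 42);
    assert_eq!(mem.read(0), 42);
    // mem.read(1) would panic: that byte was never initialized.
}
```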

When writing safe Rust, you do not have to worry about this, but this is the model that is good to have in your head when dealing with uninitialized memory in unsafe code. Alexis wrote a great post on which APIs to use for that in Rust; there is no need for me to repeat all that here. (In that post, Alexis says that every bit can be either 0, 1 or uninitialized, as opposed to every byte being initialized or not. Given that memory accesses happen at byte granularity, these two models are actually equivalent, at least in Rust which does not have C-style bitfields.)
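For reference, the pattern those APIs enable looks like this: reserve uninitialized storage with MaybeUninit, initialize it, and only then assert that it is initialized.

```rust
use std::mem::MaybeUninit;

fn main() {
    // Reserve storage without initializing it...
    let mut x = MaybeUninit::<u8>::uninit();
    // ...initialize it...
    x.write(150);
    // ...and only then call assume_init(). Calling assume_init()
    // *before* the write would be undefined behavior.
    let x: u8 = unsafe { x.assume_init() };
    assert_eq!(x, 150);
}
```

Unlike mem::uninitialized(), this never creates a value of type u8 out of uninitialized bytes; the uninitialized state lives inside the MaybeUninit wrapper until we explicitly claim otherwise.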

What can you do with uninitialized memory?

Now that we have a concrete way to talk about uninitialized memory, we can talk about which operations are allowed on values involving uninitialized bytes. My interpretation of the rules in C/C++, and my proposal for the rules in Rust, is that any operation working on the “value” of an integer (arithmetic and logical operations, comparisons, conditional jumps) is undefined behavior if any input is uninitialized. In particular, x + 0 is UB if x is not initialized. However, this still leaves many questions open, such as whether even just creating an uninitialized u8 is undefined behavior in Rust (which is the subject of active discussion), or what happens when some but not all bytes of the input are uninitialized. Over time, we will come to some kind of compromise here. The important part (for both Rust and C/C++) however is that we have this discussion with a clear mental model in our minds for what uninitialized memory is. I see Rust on a good path here; I hope the C/C++ committees will eventually follow suit.
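One common pattern that stays on the right side of these rules, whatever the final compromise ends up being, is to keep data wrapped in MaybeUninit until every element has been written, and only convert it afterwards. A sketch (the function name is mine):

```rust
use std::mem::MaybeUninit;

fn build_array() -> [u32; 4] {
    // Storage whose elements all start out uninitialized.
    let mut arr: [MaybeUninit<u32>; 4] = [MaybeUninit::uninit(); 4];
    for (i, slot) in arr.iter_mut().enumerate() {
        slot.write(i as u32 * 10);
    }
    // Only now, after every element has been written, do we assert
    // that the whole array is initialized.
    unsafe { std::mem::transmute::<[MaybeUninit<u32>; 4], [u32; 4]>(arr) }
}

fn main() {
    assert_eq!(build_array(), [0, 10, 20, 30]);
}
```

At no point does an operation act on the "value" of an uninitialized integer; all reads happen after the corresponding write.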

Ruling out any operation on uninitialized values also makes it impossible to implement this cute data structure. The is-member function there relies on the assumption that “observing” an uninitialized value (sparse[i]) twice gives the same result, which as we have seen above is not the case. This could be fixed by providing a “freeze” operation that, given any data, replaces the uninitialized bytes by some non-deterministically chosen initialized bytes. It is called “freeze” because its effect is that the value “stops changing each time you observe it”. is-member would freeze sparse[i] once and then know for sure that “looking at it” twice will give consistent results. Unfortunately, since C/C++ do not acknowledge that their memory model is what it is, we do not have crucial operations such as “freeze” officially supported in compilers. At least for LLVM, that might change though.
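For comparison, here is a sketch of that sparse-set structure with both arrays zero-initialized. This gives up exactly the trick in question (setup is now O(universe) instead of O(1)), but every read is of initialized memory, so the membership check is well-defined without needing “freeze”:

```rust
// Sparse-set membership structure, *without* the uninitialized-memory
// shortcut: both arrays are zero-initialized, so reading sparse[v] is
// always reading initialized data.
struct SparseSet {
    sparse: Vec<usize>, // sparse[v] = candidate index into `dense`
    dense: Vec<usize>,  // dense[0..n] = the members, in insertion order
    n: usize,           // current number of members
}

impl SparseSet {
    fn new(universe: usize) -> SparseSet {
        SparseSet { sparse: vec![0; universe], dense: vec![0; universe], n: 0 }
    }

    fn insert(&mut self, v: usize) {
        if !self.contains(v) {
            self.sparse[v] = self.n;
            self.dense[self.n] = v;
            self.n += 1;
        }
    }

    // Well-defined: sparse[v] is initialized (possibly to a stale or
    // zero value), and the two-way check filters out false positives.
    fn contains(&self, v: usize) -> bool {
        let i = self.sparse[v];
        i < self.n && self.dense[i] == v
    }
}

fn main() {
    let mut s = SparseSet::new(100);
    s.insert(5);
    assert!(s.contains(5));
    assert!(!s.contains(6));
}
```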

“What the hardware does” considered harmful

Maybe the most important lesson to take away from this post is that “what the hardware does” is most of the time irrelevant when discussing what a Rust/C/C++ program does, unless you already established that there is no undefined behavior. Sure, hardware (well, most hardware) does not have a notion of “uninitialized memory”. But the Rust program you wrote does not run on your hardware. It runs on the Rust abstract machine, and that machine (which only exists in our minds) does have a notion of “uninitialized memory”. The real, physical hardware that we end up running the compiled program on is a very efficient but imprecise implementation of this abstract machine, and all the rules that Rust has for undefined behavior work together to make sure that this imprecision is not visible for well-behaved (UB-free) programs. But for programs that do have UB, this “illusion” breaks down, and anything is possible.

Only UB-free programs can be made sense of by looking at their assembly, but whether a program has UB is impossible to tell on that level. For that, you need to think in terms of the abstract machine.

This does not just apply to uninitialized memory: for example, in x86 assembly, there is no difference between “relaxed” and “release”/“acquire”-style atomic memory accesses. But when writing Rust programs, even ones you only intend to compile to x86, “what the hardware does” just does not matter if your program has UB. The Rust abstract machine does make a distinction between “relaxed” and “release”/“acquire”, and your program will go wrong if you ignore that fact. After all, x86 does not have “uninitialized bytes” either, and still our example program above went wrong.
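To illustrate the atomics point with a standard message-passing sketch: the Release store pairs with the Acquire load, which is what makes the earlier Relaxed write to data visible. If both orderings were Relaxed, the Rust abstract machine would allow the assertion to fail, even though the generated x86 code would likely look the same.

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::sync::Arc;
use std::thread;

fn message_pass() -> u32 {
    let data = Arc::new(AtomicU32::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
    let producer = thread::spawn(move || {
        d.store(42, Ordering::Relaxed);
        // Release: all writes before this store become visible to an
        // Acquire load that observes it. With Relaxed here, the abstract
        // machine would permit the consumer to still read 0 from `data`.
        r.store(true, Ordering::Release);
    });

    // Acquire pairs with the Release store above.
    while !ready.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
    let v = data.load(Ordering::Relaxed);
    producer.join().unwrap();
    v
}

fn main() {
    assert_eq!(message_pass(), 42);
}
```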

Of course, to explain why the abstract machine is defined the way it is, we have to look at optimizations and hardware-level concerns. But without an abstract machine, it is very hard to ensure that all the optimizations a compiler performs are consistent—in fact, both LLVM and GCC suffer from miscompilations caused by combining optimizations that all seem fine in isolation, but together cause incorrect code generation. The abstract machine is needed as an ultimate arbiter that determines which optimizations can be safely combined with each other. I also think that when writing unsafe code, it is much easier to keep in your head a fixed abstract machine as opposed to a set of optimizations that might change any time, and might or might not be applied in any order.

Unfortunately, in my opinion not enough of the discussion around undefined behavior in Rust/C/C++ is focused on what, concretely, the “abstract machine” of these languages looks like. Instead, people often talk about hardware behavior and how that can be altered by a set of allowed optimizations. But the optimizations performed by compilers change as new tricks are discovered, and it is the abstract machine that determines whether these tricks are allowed. C/C++ have extensive standards that describe many cases of undefined behavior in great detail, but nowhere do they say that memory of the C/C++ abstract machine stores Option<u8> instead of the u8 one might naively expect. In Rust, I am very happy that we have Miri, which is meant to be (very close to) a direct implementation of the Rust abstract machine, and I am strongly advocating for us to write the Rust specification (once that gets going) in a style that makes this machine very explicit. I hope C/C++ will come around to do the same, and there is some great work in that direction, but only time will tell to what extent that can affect the standard itself.

If you want to do me a favor, please spread the word! I am trying hard to combat the myth of “what the hardware does” in Rust discussions whenever I see it, but I obviously don’t see all of them. So the next time you see such an argument being made and undefined behavior is involved, no matter whether it is about uninitialized memory or concurrency or out-of-bounds memory accesses or anything else, please help by steering the discussion towards “what the Rust abstract machine does”, and how we can design and adjust the Rust abstract machine in a way that makes it most useful for programmers and optimizing compilers alike.

As usual, if you have any comments, suggestions or questions, let me know in the forums.