Clarifying the C memory object model (n2012) Kayvan Memarian and Peter Sewell, University of Cambridge

We have been working for some time to clarify the memory object model of C: the behaviour of pointer operations, uninitialised values, padding, effective types, and so on.

This continues our previous work, with Batty et al., on the concurrency model of C++ and C. There, by developing a precise formal model for the behaviour of atomics, we identified problems in the C++0X draft standard, and we worked with the WG21 concurrency subgroup and with WG14 to propose fixes that were taken up in C++11 and C11. The final standard text for atomics is in close correspondence with our formal model, and this has enabled work on compiler testing, optimisation, and verification.

For the C memory object model, there are problems of several different kinds:

in some areas the ISO C11 text is unclear; and

in some areas there are differences between the properties of C that are relied on by the corpus of systems code out there, the properties that compiler implementers aim to provide, and the properties of C as it is specified in ISO C11.

Some of the latter seem to arise from the fact that the ISO standard has been written to accommodate a very wide range of hardware platforms and compiler implementations, many of which are now obsolete, while much current systems software depends on stronger properties that hold for "mainstream" current common implementations. Others are real differences between the properties assumed by systems code and those that compilers aim to provide.

To investigate these problems:

We conducted a survey among C experts to discover what they believe about the properties that systems software relies on in practice, and what current implementations provide. We had 323 responses, most claiming expertise in C systems programming and many claiming expertise in compiler internals and/or in the ISO standard. The results are summarised here (n2014): http://www.cl.cam.ac.uk/~pes20/cerberus/notes50-survey-discussion.html (and with all the comments: http://www.cl.cam.ac.uk/~pes20/cerberus/analysis-2016-02-05-anon.txt (n2015)). The main thing we see in the results is the level of disagreement, both among the responses and between them and ISO C11. ANSI C89 aimed to "develop a clear, consistent, and unambiguous Standard for the C programming language", but ISO C is clearly not currently providing that for these people. This situation is also problematic for those developing safer C dialects, e.g. with fat pointers or hardware capabilities. We draw upon the experimental data on the use of particular idioms in practice from Chisnall et al.

We accumulated a collection of 85 specific questions about C memory object model behaviour, supported by a collection of semantic test cases (short C programs that illustrate each question), and discussed from ISO and C-in-practice points of view.

We have data from running those test cases on a variety of implementations (different versions and flag combinations of GCC and Clang) and tools (various sanitisers and C semantics tools). If you think it would be interesting to run them on others, please mail us and we can send you a tarball of the tests, with an automated wrapper to log the results.

We are building a formal model, Cerberus, for a large part of C (aiming to cover most of Sections 5 and 6 of ISO C11 except floats). Cerberus aims to be executable as a test oracle for small programs, exhaustively calculating all their allowed behaviours. Where possible, Cerberus is in close correspondence with the ISO text. Doing this has prompted many of those 85 questions, as the model must take a position on each.

We summarise some of the most important questions at (n2012):

http://www.cl.cam.ac.uk/~pes20/cerberus/notes64-wg14.html

For each, we would very much like to get a clear understanding of what WG14 members think the ISO C11 view of each of these questions is, and how that relates to current practice. In many cases we have suggestions for possible clarifications or changes to reconcile the standard and current practice (both compiler behaviour and usage) that we would like to discuss. We have not attempted to draft specific proposals for changes to the standard text here, but we can do that too if there seems to be consensus on the desired intended semantics.

If you are prepared to go through each of the questions in detail (though beware that this may take some time), we have a google form to record your responses in a convenient way:

http://www.cl.cam.ac.uk/users/pes20/cerberus/survey3.html

We want to distribute this only to particular focussed groups (not as a mass survey), as otherwise analysing the results becomes prohibitive, so please do not link to the google form elsewhere.

To keep this note as brief as possible, we haven't included the semantic test-case programs for each question. They are available in our "notes30" (N2013), available from:

http://www.cl.cam.ac.uk/users/pes20/cerberus/notes30.pdf

which you should refer to while looking at this. We hope to attend some of the April 2016 London WG14 meeting to discuss these issues.

[See Question 2/15 of our survey and Section 3 (Q47) of our N2013]

In ISO C11 (following C99) trap representations are particular object representations that do not represent values of the object type, for which merely reading a trap representation (except by an lvalue of character type) is undefined behaviour. See 6.2.6.1p5, 6.2.6.2p2, DR338. An "indeterminate value" is either a trap representation or an unspecified value.

Trap representations complicate the language, and it is not clear whether they are significant in practice for current C implementations:

for integer types it appears not: 6.2.6.1p5 makes clear that trap representations are particular concrete bit patterns, and in the most common implementations (which are 2's-complement and use all the bits) there are no object representations that do not represent an abstract integer value.

there is discussion of the Itanium NaT flag, but that is not a memory-value-representable entity.

there is the possibility of pointers in segmented architectures in which reading a pointer value does some dynamic check. We do not know of actual examples of this.

there is the case of floating-point "Signalling NaNs". This seems the most plausible case, but in many environments they seem to be disabled, and in others it seems that users would want IEEE behaviour, not undefined behaviour.

Accordingly, we suggest either:

- if there are current implementations where trap representations are significant, that ISO C make the sets of trap-representation values for each type implementation-defined, thereby requiring implementations to document which representations are trap representations (and hence, in the common case that there are none, to document that); or,

- otherwise, to remove trap representations from the standard, and hence to coalesce the concepts of "indeterminate value" and "unspecified value" into one.

[See Question 2/15 of our survey, Sections 3.1 and 3.2 (Q48-59) of our N2013, DR338, DR451, and N1793]

For reading uninitialised values, there are many possible semantics, ranging from a fully concrete semantics in which such a read is guaranteed to give the actual contents of memory through to one in which it gives undefined behaviour. Our survey of C as used and implemented in practice gave very mixed numerical responses:

Is reading an uninitialised variable or struct member (with a current mainstream compiler):

- (a) undefined behaviour (meaning that the compiler is free to arbitrarily miscompile the program, with or without a warning): 139 (43%)

- (b) going to make the result of any expression involving that value unpredictable: 42 (13%)

- (c) going to give an arbitrary and unstable value (maybe with a different value if you read again): 21 (6%)

- (d) going to give an arbitrary but stable value (with the same value if you read again): 112 (35%)

However the comments were fairly clear on two points. First, this does arise in practice, e.g. when copying a partially initialised struct, (more rarely) when comparing against one, and in debugging. Second, it appears that some current mainstream compilers (including GCC and Clang) do optimise in ways that would make (d) unsound, while others (perhaps MSVC) may have more deterministic behaviour. None appeared to assume undefined behaviour in this situation.

This suggests that the most useful semantics (that permits current implementation behaviour without being needlessly weak for programmers) gives a symbolic unspecified value for reads of uninitialised memory, roughly (b) above.

In Sections 3.1 and 3.2 of our N2013 we collect 10 more specific questions (Q48-59), many of which are not directly addressed in ISO C11. We give concrete examples for each there, but for brevity here we just discuss briefly the current ISO position and (where we have one) our suggested choice.

Q48 Does reading an uninitialised object give rise to undefined behaviour?

In ISO C11, for types that (in the particular implementation in question) do not have any trap representations, this is undefined iff "the lvalue designates an object of automatic storage duration that could have been declared with the register storage class (never had its address taken)" (see 6.3.2.1p2 and DR338). This seems to have been added to cope with the Itanium NaT, and presumably has to be retained for such implementations. But for others it complicates and weakens the language for no purpose.

We suggest this be made an implementation-defined choice.

Q49 Can library calls with unspecified-value arguments be assumed to execute with an arbitrary choice of a concrete value (not necessarily giving rise to undefined behaviour)?

ISO C11 is unclear. The DR451 CR says "library functions will exhibit undefined behavior when used on indeterminate values" but here we are more specifically looking at unspecified values. We see no benefit from making this undefined behaviour, and we are not aware that compilers assume so (unless, conceivably, the Itanium NaT also requires this). It prevents (e.g.) debug printing of partially uninitialised structs.

We suggest "yes" (except for library functions which have an undefined behaviour for specific concrete values, which, similarly to Q54 below, should also have undefined behaviour if given unspecified values).

Q50 Can control-flow choices based on unspecified values be assumed to make an unspecified (arbitrary) choice (not giving rise to undefined behaviour)?

ISO C11 is unclear (it does not discuss this). We suggest "yes".

Q51 In the absence of any writes, is an unspecified value potentially unstable, i.e., can multiple usages of it give different values?

Q52 Do operations on unspecified values result in unspecified values?

As mentioned above, current mainstream compiler optimisations seem to require these to both be "yes". The DR451 CR is "yes" for the analogous questions for indeterminate values. We suggest "yes" for these (note this would make the N1793 Fig.4 printhexdigit not useful when applied to an uninitialised structure member).

Q53 Do bitwise operations on unspecified values result in unspecified values?

We suggest "yes" for this also, giving the simple semantics that all operations on unspecified values give unspecified values.

(Note that the LLVM documentation gives stronger guarantees for particular operations, as discussed in 3.2.4 of our notes30.pdf, but the utility of those is unclear to us, and they seem specific to LLVM.)

Q54 Must unspecified values be considered daemonically for identification of other possible undefined behaviours?

This seems forced by the above: if x has an unspecified value, then 1/x might in practice trap, and so should be considered as having undefined behaviour. We suggest "yes".

Q55 Can a structure containing an unspecified-value member be copied as a whole?

This seems to be relied on in practice, and consistent with the "symbolic unspecified value" semantics we have so far, so we suggest "yes". The copy will have an unspecified value for the same member.

Q56 Given multiple bitfields that may be in the same word, can one be a well-defined value while another is an unspecified value?

We suggest "yes".

Q57 Are the representation bytes of an unspecified value themselves also unspecified values? (not an arbitrary choice of concrete byte values)

The best answer to this is unclear from all points of view: ISO C11 doesn't address the question; we don't know whether existing compilers assume these are unspecified values, and we don't know whether existing code relies on them not being unspecified values.

For stylistic consistency one might take the answer to be "yes", but then (given the suggested answers above) a bytewise hash or checksum computation involving them would produce an unspecified value. In a more concrete semantics, it could produce different results in different invocations, even if the value is not mutated in the meantime.

We don't have sufficient grounds to suggest either answer at present.

Q58 If one writes some but not all of the representation bytes of an uninitialized value, do the other representation bytes still hold unspecified values?

This too is unclear. One could take the first such access as "freezing" the unspecified value and its representation bytes, but we don't know whether that would be sound with respect to current compiler behaviour. The simplest choice is "yes".

Q59 If one writes some but not all of the representation bytes of an uninitialized value, does a read of the whole value still give an unspecified value?

Again "yes" is the simplest choice, but one could argue instead that a read of the whole should give any nondeterministically chosen value consistent with the concretely written bytes.

[See Question 1/15 of our survey, and Section 3.3 (Q60-68) of our N2013]

The standard discusses two quite different kinds of padding: padding bits within the representation of integer types (6.2.6.2), and padding bytes in structures and unions. Here we consider just the latter, together with the space between the end of a union's current member and the end of the union object itself (the standard does not refer to this as padding (6.2.6.1p7), but it behaves in a similar way).

Padding bytes might be needed either for alignment or to ensure that there is spare space that the implementation is free to overwrite with a "wide" write, where the hardware does not provide efficient store instructions for the native width of the value to be written.

There are several options for the semantics of padding, including:

- (a) regarding padding bytes as holding unspecified values throughout the lifetime of the object, irrespective of any writes to them;

- (b) when a member of a struct or union is written, deeming the semantics as also having written symbolic unspecified values to all its padding bytes;

- (c) when a member is written, deeming the semantics as also having written symbolic unspecified values to adjacent padding;

- (d) when a member is written, deeming the semantics as also having written symbolic unspecified values to subsequent padding;

- (e) when a member is written, nondeterministically either deeming the semantics as having written zeros to the adjacent padding or leaving it alone; or

- (f) when a member is written, nondeterministically either deeming the semantics as having written zeros to the subsequent padding or leaving it alone.

The standard is unclear which semantics it chooses. On the one hand, we have 6.2.6.1p6: "When a value is stored in an object of structure or union type, including in a member object, the bytes of the object representation that correspond to any padding bytes take unspecified values.", suggesting option (b), and 6.7.9p10 says: "If an object that has static or thread storage duration is not initialized explicitly, then [...] any padding is initialized to zero bits", suggesting that padding can meaningfully hold concrete (non-unspecified) values, so not option (a). But then 7.24.4.1 The memcmp function implies that padding bytes within structures always hold unspecified values, which is option (a): Footnote 310 "The contents of `holes' used as padding for purposes of alignment within structure objects are indeterminate." (even in the standard there are no trap representations here so indeterminate values are unspecified values).

In practical usage this matters in several ways:

Sometimes programmers need a mechanism to ensure that no security-relevant information is accidentally leaked by the implementation via (unspecified-value) padding bytes, in cases where it is impractical to ensure that structures have no padding.

Some important code manipulates structs via their representation bytes, e.g. when marshalling them to the network. That must have defined behaviour, and this interacts with the semantics of unspecified values. In particular, if one takes library calls involving unspecified values to be undefined behaviour, as suggested in the DR451 CR, then (a) would rule out all such code. Moreover, it is desirable for such code to behave as deterministically as possible.

C11 supports compare-and-swap (CAS) operations on atomic structs, so these comparisons have to be supported with well-defined values - at the very least, in the absence of writes.

On the implementation side, we have not seen current implementations actually do wide writes for single member writes, though a few survey respondents say they have. Whether such implementations guarantee (or could reasonably be made to guarantee) that the extra bytes are zeroed is unknown to us. Rather more respondents believe that compilers will assume that padding contains unspecified values and will optimise away reads of it (effectively (a)), but we don't have a definite answer for that either. Multiple accesses of adjacent members, e.g. in a structure copy, might be aggregated into accesses that also read and write intervening padding.

In the platforms that we are familiar with, padding is determined by the ABI specification of type layout (together with compiler flags or pragmas that permit structs to be packed, but we ignore those here), and "Each member is assigned to the lowest available offset with the appropriate alignment" (AMD64 ABI, for example). That means one cannot know the padding following a member without knowing the subsequent member type, which means this interacts with the semantics for type punning between related struct types that share a prefix (undefined behaviour by ISO but apparently widely used nonetheless).

There is also an interaction with the concurrency semantics: it is legal for threads to write to adjacent members of a struct without any synchronisation (this does not comprise a data race), so if one chooses any of (b,c,e), one must take care to ensure that those notional writes to padding do not give rise to spurious data races in the semantics.

Our best suggestion at present is (d): allow padding to contain non-unspecified-value values, and, when a member is written, deem the semantics as also having written symbolic unspecified values to subsequent padding.

In Section 3.3 of our N2013 we collect 9 specific questions (Q60-68) which address some of these choices. We give concrete examples for each there, but for brevity just summarise here.

Q60 Can structure-copy copy padding?

Implementations have to be allowed to do a structure copy by copying all the bytes of the structure, which will copy padding, or by copying just the members, which will not. Options (a,b,c,d) permit this implicitly as a consequence of the member-write semantics; options (e,f) would need structure writes to be special-cased (and that might cause problems w.r.t. aggregation of member writes into structure writes).

Q61 After an explicit write of a padding byte, does that byte hold a well-defined value? (not an unspecified value)

Q62 After an explicit write of a padding byte followed by a write to the whole structure, does the padding byte hold a well-defined value? (not an unspecified value)

Q63 After an explicit write of a padding byte followed by a write to adjacent members of the structure, does the padding byte hold a well-defined value? (not an unspecified value)

Q64 After an explicit write of zero to a padding byte followed by a write to adjacent members of the structure, does the padding byte hold a well-defined zero value? (not an unspecified value)

Q65 After an explicit write of a padding byte followed by a write to a non-adjacent member of the structure, does the padding byte hold a well-defined value? (not an unspecified value)

Q66 After an explicit write of a padding byte followed by writes to adjacent members of the structure, but accessed via pointers to the members rather than via the structure, does the padding byte hold a well-defined value? (not an unspecified value)

These discriminate between most of the above options (we could, but have not yet, added tests that discriminate between the "adjacent" and "subsequent" variants).

Our (currently preferred) Option (d) gives (yes, no, no, no, yes, yes) for these.

Q67 Can one use a malloc'd region for a union that is just big enough to hold the subset of members that will be used?

This is a corner case that is apparently used in practice but which could cause problems for (b-f): in general one could not know how much memory to write notional unspecified values (or zeros) to, so the answer would have to be "no". One could intersect with the allocation footprint to allow it in some cases.

Q68 Can the user make a copy of a structure or union by copying just the representation bytes of its members and writing junk into the padding bytes?

This addresses the question of whether an implementation is allowed to use padding for its own purposes, to maintain metadata. We believe not, and hence that Q68 should be "yes".

[See Questions 3/15, 4/15, and 5/15 of our survey, Section 2.1-2.9 (Q1-20) of our N2013, and DR260]

C pointer values could traditionally be considered to be concrete numeric values (our survey indicates many programmers still consider them so). However, the DR260 Committee Response suggests otherwise, hinting at a notion of provenance carried by each pointer:

"Implementations are permitted to track the origins of a bit-pattern and treat those representing an indeterminate value as distinct from those representing a determined value. They may also treat pointers based on different origins as distinct even though they are bitwise identical."

Current compilers appear to follow this, using it to justify alias analysis based on provenance distinctions. However, DR260CR leaves many questions unclear. We enumerate those (with examples) in our notes30.pdf; here we suggest a specific proposal for a provenance-aware semantics and discuss how it addresses those questions.

The basic idea is to associate a provenance with every pointer value, essentially identifying the original allocation the pointer is derived from. This is for the "C abstract machine" as defined in the standard: compilers might rely on provenance, but one would not expect normal implementations to record or manipulate provenance at runtime (though dynamic or static analysis tools might).

Pointer values and integer values both carry a provenance, either the "empty" provenance, a single provenance ID, or the "wildcard" provenance.

On every allocation (of objects with static, thread, automatic, and allocated storage duration), we choose a fresh provenance ID (unique in the entire execution), and the resulting pointer value carries that single provenance ID.

At any access via a pointer value, its numeric address must be consistent with its provenance, with undefined behaviour otherwise. In particular: access via a pointer value with empty provenance is undefined behaviour (except where the numeric value is within an implementation-defined set of "device" memory addresses); access via a pointer value with a single provenance ID must be within the corresponding allocation; access via a pointer value with wildcard provenance must be within some currently live object.



Then there are many specific choices of how provenance is affected by arithmetic operations and suchlike. We first discuss the questions and then summarise our proposal.

Q1 Must the pointer used for a memory access have the right provenance, i.e. be derived from the pointer to the original allocation (with undefined behaviour otherwise)? (This lets compilers do provenance-based alias analysis)

Here DR260CR clearly says yes. Our experimental data shows cases where recent versions of GCC and ICC do assume non-aliasing of pointers with identical representation values but distinct provenance. This is incompatible with a concrete semantics of pointers (where they are fully characterised by their representation values). Tracking of provenance in the "abstract machine" is therefore clearly necessary to make these compilers sound with respect to the standard.

Q2 Can equality testing on pointers be affected by pointer provenance information?

This is also allowed according to DR260CR. We have observed GCC regarding two pointers with different provenance as nonequal (with ==) even though they have the same representation value. This happens in some circumstances but not others, so we suggest that whether pointer equality takes provenance into account or not should be made indeterminate in the standard (again to make the observed compiler behaviour sound with respect to the standard). Note that requiring equality to always take provenance into account would require implementations to track provenance at runtime.

The ISO C11 standard text is too strong here: 6.5.9p6 says "Two pointers compare equal if and only if both are [...] or one is a pointer to one past the end of one array object and the other is a pointer to the start of a different array object that happens to immediately follow the first array object in the address space", which requires such pointers to compare equal (reasonable pre-DR260CR, but not after it). We don't expect programmers to rely on that behaviour, and GCC does not satisfy it, so, to be consistent with DR260CR and with the indeterminate behaviour we suggest, the standard should permit them to compare either equal or non-equal.

Q4 Can one make a usable pointer via casts to unsigned long and back?

ISO C11 optionally allows implementations to provide the type intptr_t (along with an unsigned variant) with guaranteed round-trip properties for pointer/integer casts. However, it seems to be common practice (e.g. in Linux) to extend these properties to unsigned long, when that type is large enough to hold a pointer. We suggest that this be permitted iff that is the case.

Q5 Must provenance information be tracked via casts to integer types and integer arithmetic?

Given the type intptr_t, this asks whether one can return to a concrete view of pointers, by casts to intptr_t followed by integer arithmetic and casting back to a pointer type. Here again, we observe GCC behaving the same as with Q1, reasoning that pointers obtained in this way cannot alias even if they have the same numerical values. This observation is reinforced by the GCC documentation, which mentions an "original pointer" associated with integer values cast to pointer type, so the answer seems to be "yes". This leads to many more questions regarding the specifics of how provenance information affects the semantics of each integer operator. Some of these are discussed in the next subsection and the remainder are given a complete treatment in the summary of our memory model proposal at the end.

Q6 Can one use bit manipulation and integer casts to store information in unused bits of pointers?

The standard leaves conversions between integer and pointer types implementation-defined (6.3.2.3p{5,6}), but it is common practice to use unused pointer bits (either low-order bits from alignment requirements or high-order bits beyond the maximum address range). We suggest that the set of unused bits for pointer types of each alignment should be made implementation-defined, to make this practice legal.

Moreover, where the standard does give a guarantee, e.g. for round-trips through intptr_t (7.20.1.4p1), it says only that the result "will compare equal". In a provenance-aware semantics, that may not be enough to make the result usable to reference memory; the standard text should be strengthened here to guarantee that.

Q7 Can equality testing on integers that are derived from pointer values be affected by their provenance?

DR260CR does not address this. GCC did at one point do this, but it was regarded as a bug and fixed. We have observed it in Clang. We believe that integer equality testing should not be affected by provenance, i.e. "no".

Q8 Should intra-object pointer subtraction give provenance-free integer results?

DR260CR does not address this, but it is uncontroversially "yes": an intra-object pointer subtraction, say between the addresses of two elements of an array, should give a provenance-free integer offset that can then be used for indexing into this or other arrays.

Q9 Can one make a usable offset between two separately allocated objects by inter-object subtraction (using either pointer or integer arithmetic), to make a usable pointer to the second by adding the offset to the first?

This is asking about pointers that have multiple provenances, which is not addressed in DR260CR or current GCC or Clang compiler documentation. Our experiments and our survey responses both suggest that compilers do not in general support it, and we imagine it is uncommon in practice. However, there do seem to be specific important use cases, including the Linux and FreeBSD per-CPU variable implementations - though it is unclear whether these are between multiple allocations in the C sense. These might be dealt with by an attribute such as the GCC may_alias - though the documentation for that refers only to type-based alias analysis, not to "provenance-based" alias analysis. This needs further discussion, but we tentatively suggest "no".

(Given that, Q10 is not useful)

Q11 Is the XOR linked list idiom supported?

This is also a question about pointers with multiple provenances, which (in a provenance-aware semantics) are needed to make the idiom legal. While it may have been common practice when memory space was more limited, that seems no longer to be the case. We don't know whether current compiler alias analysis permits it or not. Our suggested semantics would not allow it.

Q12 For arithmetic over provenanced integer values, is the provenance of the result invariant under plus/minus associativity?

For our suggested semantics, the answer is "yes", which seems the most intuitive for programmers. Again the status of current compiler implementation needs to be checked.

Q13 Can one make a usable copy of a pointer by copying its representation bytes using the library memcpy?

The ISO C11 text does not explicitly address this. In a pre-provenance semantics, before DR260, it did not need to, but now (as it surely should be allowed) one needs to guarantee that the result has the appropriate provenance to make it usable.

One could allow it by special-casing memcpy() to preserve provenance, but the following questions suggest a less ad hoc approach.

Q14 Can one make a usable copy of a pointer by copying its representation bytes (unchanged) in user code?

ISO C11 and DR260CR again do not mention this explicitly (though the 6.5p6 effective type text weakly implies it is allowed). We believe it is widely relied on.

Our proposed semantics makes it legal by regarding each representation byte (as an integer value) as having the provenance of the original pointer, and the result pointer, being composed of representation bytes with that provenance, as having the same.

Q15 Can one make a usable copy of a pointer by copying its representation bytes by user code that indirectly computes the identity function on those bytes?

Whether this is supported by ISO C11 is unclear. Programs that explicitly swap out memory to disc and swap it back in would require this to work. Our proposal allows it to some extent.

Q16 Can one carry provenance through dataflow alone or also through control flow?

One might imagine forging pointers via control-flow, e.g. if testing equality of an unprovenanced integer value against a valid pointer permits the integer to be used as if it had the same provenance as the pointer. We don't expect that this is relied on in practice, and our proposed semantics does not permit it - we track provenance only through dataflow. This needs to be discussed with respect to current compiler analysis behaviour.

Q17 Is type punning between integer and pointer values allowed?

Q18 Does type punning between integer and pointer values preserve provenance?

The ISO standard says little about these questions, but our survey responses suggest that it is fairly common for implementations to satisfy them and for programmers to exploit them. Following the same choices as we make for provenance of representation bytes, our suggested model permits them.

Q19 Can one make a usable pointer via IO?

This is allowed in ISO C11 through the use of the %p conversion specifier for fprintf() and fscanf(). Our survey results are clear that such marshalling is used in practice. Given the following quote from the standard:

"If the input item is a value converted earlier during the same program execution, the pointer that results shall compare equal to that value"

We suggest that the pointers output during an execution be recorded along with their provenance, so that the provenance can be reinjected when those representation values are input later in the same execution.

Q20 Can one make a usable pointer from a concrete address (of device memory)?

ISO C11 makes this undefined behaviour, and this is consistent with an abstract view of pointers. However embedded programs and others dealing with memory-mapped devices do require this to work. Our suggestion is to introduce an implementation-defined set of addresses (which may depend on linking) for which the creation of such pointers be allowed.

The basic idea is to associate a provenance with every pointer value, essentially identifying the original allocation the pointer is derived from. This is for the "C abstract machine" as defined in the standard: compilers might rely on provenance, but one would not expect normal implementations to record or manipulate provenance at runtime (though dynamic or static analysis tools might).

Pointer values and integer values both carry a provenance, either the "empty" provenance, a single provenance ID, or the "wildcard" provenance.

On every allocation (of objects with static, thread, automatic, and allocated storage duration), we choose a fresh provenance ID (unique in the entire execution), and the resulting pointer value carries that single provenance ID.

At any access via a pointer value, its numeric address must be consistent with its provenance, with undefined behaviour otherwise. In particular: access via a pointer value with empty provenance is undefined behaviour (except where the numeric value is within an implementation-defined set of "device" memory addresses); access via a pointer value with a single provenance ID must be within the corresponding allocation; access via a pointer value with wildcard provenance must be within some currently live object.

NULL pointers constructed from integer constant expressions have the empty provenance.

Whether pointer equality comparison (with == or !=) takes the associated provenances into account or not is indeterminate.

Pointer relational comparison (with <, <=, >, and >=) is unaffected by the associated provenances.

All casts among pointer and integer types preserve provenance.

The result of subtraction of two pointer values is an integer value with empty provenance, irrespective of the operand provenances (in particular, irrespective of whether they point within the same object or not - but if not, the resulting offset is not usable for moving between the two objects).

The result of an addition or subtraction of a pointer value and an integer value has the provenance of the pointer value.

The result of operations on integer values is as follows:

The result of the address-of operator & has the provenance of the object associated with the lvalue, for non-function pointers, or the empty provenance for function pointers.

The provenance of the result of the unary * operator is whatever was stored.

The integer unary +, unary -, and ~ operators preserve the original provenance; logical negation ! gives a value with empty provenance.

The sizeof and _Alignof operators give values with empty provenance.

The multiplicative and additive operators, and the bitwise AND, bitwise exclusive OR, and bitwise inclusive OR operators, combine provenances as follows: if both operands have empty provenance, the result has that; if exactly one operand has non-empty provenance, the result has that; if both have the same single provenance, the result has that; if they have different single provenances, the result has empty provenance; if one has a single provenance and the other the wildcard provenance, the result has the wildcard provenance; if both have wildcard provenance, the result has that. Summarising:

           | empty     ID        ID'       wildcard
  ---------+---------------------------------------
  empty    | empty     ID        ID'       wildcard
  ID       | ID        ID        empty     wildcard
  ID'      | ID'       empty     ID'       wildcard
  wildcard | wildcard  wildcard  wildcard  wildcard

Note that this use of empty for the ID/ID' cases is liberal as far as alias analysis is concerned, but requires programmers to be conservative. One could instead make combinations of values with different single provenances have the wildcard provenance, making them legal to use for accessing memory.

Bitwise shift has the provenance of its first operand.

Relational, equality, logical AND, logical OR, and constant expressions give values with empty provenance.

The prefix increment and decrement operators follow the pointer or integer arithmetic rules above.

The conditional operator gives the provenance of the second or third operand as appropriate; simple assignment gives the provenance of the assigned expression; compound assignment follows the pointer or integer arithmetic rules above; the comma operator gives the provenance of the second operand.

The representation bytes of a pointer have the provenance of the pointer.

The provenance of a pointer constructed by writing individual representation bytes is determined as follows: if all bytes have the empty provenance, the result has the empty provenance; if one byte has a single provenance and the others all have either the same single provenance or the empty provenance, the result has that single provenance; if two bytes have distinct single provenances, the result has the empty provenance; if any byte has the wildcard provenance, the result has the wildcard provenance. As above, the distinct-single-provenance case could be changed to give the wildcard provenance for combinations of distinct single provenances.

Provenance is not propagated via control flow (e.g. by conditionals that check equality of a pointer value).

To permit pointers to be constructed via IO (e.g. via %p or by marshalling and unmarshalling their representation bytes, in the same runtime), there are several choices, and it would be useful to know what compiler alias analysis actually does here.

Pointers with the may_alias attribute have wildcard provenance.

[See Questions 12/15 and 13/15 of our survey, and Section 2.13 (Q28-30) of our N2013]

ISO C11 permits the construction of null pointers by casting from integer constant zero expressions, but not from other integer values that happen to be zero (6.3.2.3p3): "An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. If a null pointer constant is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function."

However, in practice it seems that code often does rely on being able to produce null pointers from other zero integer values, or from all-zero representation bytes, and a survey respondent suggests this is sound for all current GCC targets. The only exceptions we are aware of are now-obsolete segmented memory systems (IBM AS/400?) in which pointer representations included a non-zero segment selector, and perhaps some current embedded systems. Summarising, our notes30.pdf asks:

Q28 Can one make a null pointer by casting from a non-constant integer expression?

Q29 Can one assume that all null pointers have the same representation?

Q30 Can null pointers be assumed to have all-zero representation bytes?

In ISO C11 the answers are (no, no, no). All these could be reconciled with practice simply by making the set of null pointer representations an implementation-defined set, thus requiring it to be documented, and allowing Q28 iff that set contains just a single element with all-zeros representation.

Another common idiom in practice is to use the addresses of members of a NULL struct pointer to calculate their offsets, as in Q36 below. It's unclear whether or not that is well-defined in ISO C11, but there seems no reason to forbid it, at least where the implementation-defined set of null pointer representations is just the singleton zero.

Q36 Can one use the addresses of members of a NULL struct pointer in place of offsetof?

[See Question 7/15 of our survey, and Section 2.12 (Q25-27) of our N2013]

Here the ISO standard seems to be significantly more restrictive than common practice. In ISO C11 there is first a type constraint: 6.5.8p2 "both operands are pointers to qualified or unqualified versions of compatible object types."

Then 6.5.8p5 allows comparison of pointers only to the same object (or one-past) or to members of the same array, structure, or union: "When two pointers are compared, the result depends on the relative locations in the address space of the objects pointed to. If two pointers to object types both point to the same object, or both point one past the last element of the same array object, they compare equal. If the objects pointed to are members of the same aggregate object, pointers to structure members declared later compare greater than pointers to members declared earlier in the structure, and pointers to array elements with larger subscript values compare greater than pointers to elements of the same array with lower subscript values. All pointers to members of the same union object compare equal. If the expression P points to an element of an array object and the expression Q points to the last element of the same array object, the pointer expression Q+1 compares greater than P. In all other cases, the behavior is undefined."

(Similarly to 6.5.6p7 for pointer arithmetic, 6.5.8p4 treats all non-array element objects as arrays of size one for this: 6.5.8p4 "For the purposes of these operators, a pointer to an object that is not an element of an array behaves the same as a pointer to the first element of an array of length one with the type of the object as its element type.")

This rules out comparisons between pointers to two separately allocated objects, and comparisons between a pointer to a structure member and one to a sub-member of another member.

In practice, comparisons between separately allocated objects seem to be commonly relied on, e.g. for lock ordering, to build collections, and for functions like memmove. A survey respondent said it "is likely to work in practice" for GCC. The only case where it might not work that we are aware of is that of implementations on segmented architectures, where the obvious runtime comparison might ignore the segment ID. If that is still a real issue, we suggest making it an implementation-defined question whether inter-object relational comparison is permitted, to bring the ISO standard in line with implementation and usage practice.

For comparison between a pointer to a structure member and one to a sub-member of another member, of compatible object types, we see no reason to forbid it, and suggest the standard is changed accordingly.

For comparison between pointers to objects of incompatible types, the only cases we can imagine where it might not work are rather exotic implementations, though real code may normally cast to (void *).

Summarising, for the following three questions the current ISO C11 position is (no, no, no), while we suggest (implementation-defined, yes) for the first two and do not have a position on the last.

Q25 Can one do relational comparison (with <, >, <=, or >=) of two pointers to separately allocated objects (of compatible object types)?

Q26 Can one do relational comparison (with <, >, <=, or >=) of a pointer to a structure member and one to a sub-member of another member, of compatible object types?

Q27 Can one do relational comparison (with <, >, <=, or >=) of two pointers to structure members with incompatible types?

[See Section 2.2.4 (Q6) of our N2013]

It is common in practice to use unused low- or high-order bits in pointers to store additional information, e.g. via casts to an integer type and bitwise operations, but the status of this in the ISO C11 standard is unclear.

Q6 Can one use bit manipulation and integer casts to store information in unused bits of pointers?

We suggest that the set of unused bits of pointer types that can be used for such purposes be required to be implementation-defined (perhaps for each possible alignment, or in the limit for each such type).

[See Question 9/15 of our survey, and Section 2.14 (Q31-33) of our N2013]

The ISO standard permits only very limited pointer arithmetic, restricting the formation of pointer values. First, there is arithmetic within an array: 6.5.6 Additive operators (6.5.6p8,9) permits one to add a pointer and integer (or subtract an integer from a pointer) only within the start and one past the end of an array object, inclusive. 6.5.6p7 adds "For the purposes of these operators, a pointer to an object that is not an element of an array behaves the same as a pointer to the first element of an array of length one with the type of the object as its element type". Subtraction of two pointers is permitted only if both are in a similar range (and only if the result is representable in the result type).

Second, 6.3.2.3p7 says that one can do pointer arithmetic on character-type pointers to access representation bytes: "[...] When a pointer to an object is converted to a pointer to a character type, the result points to the lowest addressed byte of the object. Successive increments of the result, up to the size of the object, yield pointers to the remaining bytes of the object."

In practice the survey responses make clear that there are real differences here. On the one hand, much real code does transiently construct out-of-bounds pointer values by pointer arithmetic, bringing them back into bounds before using them for accesses; most respondents (73%) assume this works, and Clang's -fsanitize=undefined deliberately does not check for it. On the other hand, others reply that it will not in general work with current compilers, e.g.: "this is not safe; compilers may optimise based on pointers being within bounds".

Possible tricky cases include (1) hardware that does bounds checking, (2) platforms where a transient insufficiently-aligned value cannot be represented at the given pointer type, (3) pointer wrapping at values less than the obvious word size, and (4) pointer arithmetic overflow. How much these matter in current practice is unclear to us.

Summarising, the ISO C11 answers to the following are (no, no, no); real code relies widely at least on the first; and it is not always guaranteed to work in current implementations - but it is unclear when it does. The question (for compiler authors) is thus to articulate when it is guaranteed to work, preferably in a way that can be codified in the standard.

Q31 Can one construct out-of-bounds (by more than one) pointer values by pointer arithmetic?

Q32 Can one form pointer values by pointer addition that overflows?

Q33 Can one assume pointer addition wraps on overflow?

[See Question 8/15 of our survey, and Section 2.17 (Q43) of our N2013]

The ISO C11 text makes all pointers to an object indeterminate at the end of its lifetime: 6.2.4 Storage durations of objects says (6.2.4p2) "If an object is referred to outside of its lifetime, the behavior is undefined. The value of a pointer becomes indeterminate when the object it points to (or just past) reaches the end of its lifetime."

This makes accesses via that pointer undefined behaviour. In the absence of trap representations at pointer types, it also means that comparisons, representation-byte accesses, pointer arithmetic, and member offset calculations will not have useful results - depending on the choices elsewhere, they may give unspecified values - but will not give undefined behaviour (other authors differ on this last, regarding all those as giving undefined behaviour).

(This side-effect of lifetime end on all pointer values that point to the object is a very unusual aspect of ISO C compared with other programming language definitions.)

However, in practice most survey respondents (66%) believe this will work, and they give a number of use-cases in real code.

Then one also has to consider what happens to integer values derived from pointers (e.g. intptr_t values cast from pointers, or pointer representation bytes) when the lifetime of the original object ends.

Summarising: for ISO C11 the following is "no", but for C in practice it seems to be commonly "yes". It's unclear what current implementations do. We suggest that this be made an implementation-defined property, expecting most implementations to support such equality tests (and also access to representation bytes etc.). And we suggest that all integer values are left unchanged at lifetime end.

Q43 Can one inspect the value (e.g. by testing equality with ==) of a pointer to an object whose lifetime has ended (either at a free() or block exit)?

[See Question 11/15 of our survey, and Section 4 (Q73-81) of our N2013]

Q75 Can an unsigned character array with static or automatic storage duration be used (in the same way as a malloc'd region) to hold values of other types?

The survey makes clear that this is widely relied on, but the ISO standard disallows it (6.5p7), and we also see, for GCC "No, this is not safe (if it's visible to the compiler that the memory in question has unsigned char as its declared type)".

The question (for compiler authors) is thus to articulate when it is guaranteed to work, preferably in a way that can be codified in the standard.

There are several more questions in Section 4 (Q73-81) of our N2013 that we postpone for now.