What’s the difference between the following length and pointer pairs?

size_t len1 = 0; char *ptr1 = NULL;
size_t len2 = 0; char *ptr2 = malloc(0);
size_t len3 = 0; char *ptr3 = (char *)malloc(4096) + 4096;
size_t len4 = 0; char ptr4[0];
size_t len5 = 0; char ptr5[];

In many circumstances, all five result in identical behavior. Other times, the behavior can vary wildly. One obvious difference is whether the pointer can be passed to free, though I’m going to ignore that difference for the rest of this post.

The first case is interesting, but sufficiently different to postpone discussion until later.



malloc(0)

The behavior of malloc(0) is implementation-defined by the standard. The two legal possibilities are to return a null pointer or a unique pointer. In many implementations, the second option is accomplished by internally increasing the length to 1 (which is then likely rounded up to 16 or so). Legally, one cannot dereference this pointer, although in practice some bytes are allocated, which means doing so won’t crash.

Returning NULL introduces the possibility of an interesting programming error. Lots of code, including some I’ve written myself, assumes that a NULL return from malloc is an error.

if ((ptr = malloc(len)) == NULL)
	err(1, "out of memory");

If len is zero, this will incorrectly report an error and exit unless an additional && len != 0 check is added. One may also subscribe to the “don’t check malloc” school of thought.

OpenBSD malloc handles zero specially in a different way. Zero sized allocations return chunks from pages that have been mprotected with PROT_NONE. Any attempt to dereference this pointer will crash.

Note that the unique pointer requirement prohibits an implementation from cheating.

int thezero;

void *
malloc(size_t len)
{
	if (len == 0)
		return &thezero;
}

void
free(void *ptr)
{
	if (ptr == &thezero)
		return;
}

That’s not a conforming implementation because successive calls will return the same value.

So case two is approximately the same as either case one or case three, depending on implementation.



other cases

Assuming malloc doesn’t fail, cases three, four, and five are approximately equal in most circumstances. The primary difference would be using sizeof(ptr) / sizeof(ptr[0]) , such as in a loop construct. That either gives the wrong answer, the right answer, or no answer (compilation failure).

Case four may not be technically permitted by the standard, but compilers will likely eat it anyway.

The biggest difference between these cases and the first case is that they will pass a null check, which can have semantic meaning. It’s the difference between an empty array and a missing array. Similarly, although it takes one byte of storage, the empty string is not a null string.



null objects

Now we return to the discussion of case one and null objects. Consider this call.

memset(ptr, 0, 0);

Set 0 bytes of ptr to 0. Which of the five pointers above make this a valid call? Three, four, and five for certain. Two if it’s a unique pointer. But what if ptr is NULL?

The C standard has this to say in the section on “Use of library functions”.

If an argument to a function has an invalid value (such as a value outside the domain of the function, or a pointer outside the address space of the program, or a null pointer, [...]) [...], the behavior is undefined.

The section on “String function conventions” clarifies further.

Where an argument declared as size_t n specifies the length of the array for a function, n can have the value zero on a call to that function. Unless explicitly stated otherwise in the description of a particular function in this subclause, pointer arguments on such a call shall still have valid values.

Hmmm. Sounds like memsetting 0 bytes of NULL is undefined. None of memset, memcpy, or memmove are documented to accept null pointers. As a counter example, however, the description for snprintf does say “If n is zero, nothing is written, and s may be a null pointer.” The POSIX read function similarly states that zero length reads are not an error, but an implementation may in that case still check other parameters for errors such as invalid buffer pointers.

What does this mean in practice? The most straightforward way to implement functions like memset or memcpy handles the zero length case by simply not entering the loop and not doing anything. Usually undefined behavior in C rears its head when something happens, but in this case it’s already been defined that nothing happens with normal pointers. Checking for abnormal pointers would require more work.

Detecting non-null abnormal pointers is usually quite difficult. memcpy certainly doesn’t try, letting the program crash if the pointer is bad. The read system call also doesn’t check. Instead it defers checking to the copyout function which sets up a fault handler to detect errors. While a null check could be added, null pointers are no less invalid to these functions than 0x1 or 0xffffffff pointers for which no special handling exists.



abort

What this means in practice is lots of code which (deliberately or inadvertently) assumes a null pointer and a zero length is valid. In response to a discussion at the s2k15 hackathon, I decided to run an experiment. I modified memcpy to log an error and abort when either pointer was NULL and installed a new libc.

Feb 11 01:52:47 carbolite xsetroot: memcpy with NULL
Feb 11 01:53:18 carbolite last message repeated 15 times

Well, that didn’t take long. Wonder what it’s doing?

Feb 11 01:53:18 carbolite gdb: memcpy with NULL
Feb 11 01:53:19 carbolite gdb: memcpy with NULL

And, uh, ok. This is obviously going to get real exciting real fast. Revert.



consequences

I started down this road because the intersection of “undefined but should work” code and optimizing C compilers has not been pleasant. A smart C compiler may observe a call to memcpy, flag both pointers as valid, and then delete any null checks. Forwards and backwards.

int backup;

void
copyint(int *ptr)
{
	size_t len = sizeof(int);

	if (!ptr)
		len = 0;
	memcpy(&backup, ptr, len);
}

The above code will obviously misbehave if the null check is deleted by the compiler and a null pointer is passed.

I had some concerns about this because in the past similar dereference-then-check constructs have been optimized into security vulnerabilities. That’s bad news for a lot of software not ready for this level of standards compliance.

At first, I was unable to coax the compiler into deleting a null check after a memcpy “dereference”, but that’s no assurance it won’t happen. gcc 4.9 says it will optimize the check away. The OpenBSD package of gcc 4.9 (which contains various patches) won’t delete the check by default, even at -O3, but enabling -fdelete-null-pointer-checks results in the check being deleted. Not sure yet where clang stands. Initial testing says no, but that’s no guarantee. In theory, it can perform this optimization too.



related

Proposal for a Friendly Dialect of C

How much memory does malloc(0) allocate?

Optimization-unstable code

memcpy (and friends) with NULL pointers