The existing answers showing the code generated by a particular compiler for a particular target with a particular set of options do not fully answer the question -- unless the question was asked in that specific context ("Which is faster using gcc 4.7.2 for x86_64 with default options?", for example).

As far as the language definition is concerned, in the abstract machine `while (1)` evaluates the integer constant `1`, and `while (2)` evaluates the integer constant `2`; in both cases the result is compared for equality to zero. The language standard says absolutely nothing about the relative performance of the two constructs.

I can imagine that an extremely naive compiler might generate different machine code for the two forms, at least when compiled without requesting optimization.

On the other hand, C compilers absolutely must evaluate some constant expressions at compile time, when they appear in contexts that require a constant expression. For example, this:

```c
int n = 4;
switch (n) {
    case 2+2: break;
    case 4:   break;
}
```

requires a diagnostic, because the two case labels evaluate to the same value; a lazy compiler does not have the option of deferring the evaluation of `2+2` until execution time. Since a compiler has to be able to evaluate constant expressions at compile time, there's no good reason for it not to take advantage of that capability even when it's not required.

The C standard (N1570 6.8.5p4) says that

> An iteration statement causes a statement called the loop body to be executed repeatedly until the controlling expression compares equal to 0.

So the relevant constant expressions are `1 == 0` and `2 == 0`, both of which evaluate to the `int` value `0`. (These comparisons are implicit in the semantics of the `while` loop; they don't exist as actual C expressions.)

A perversely naive compiler could generate different code for the two constructs. For example, for the first it could generate an unconditional infinite loop (treating `1` as a special case), and for the second it could generate an explicit run-time comparison equivalent to `2 != 0`. But I've never encountered a C compiler that would actually behave that way, and I seriously doubt that such a compiler exists.

Most compilers (I'm tempted to say all production-quality compilers) have options to request additional optimizations. Under such an option, it's even less likely that any compiler would generate different code for the two forms.

If your compiler generates different code for the two constructs, first check whether the differing code sequences actually have different performance. If they do, try compiling again with an optimization option (if available). If they still differ, submit a bug report to the compiler vendor. It's not (necessarily) a bug in the sense of a failure to conform to the C standard, but it's almost certainly a problem that should be corrected.

Bottom line: `while (1)` and `while (2)` almost certainly have the same performance. They have exactly the same semantics, and there's no good reason for any compiler not to generate identical code.

And though it's perfectly legal for a compiler to generate faster code for `while (1)` than for `while (2)`, it's equally legal for it to generate faster code for one occurrence of `while (1)` than for another occurrence of `while (1)` in the same program.

(There's another question implicit in the one you asked: how do you deal with an interviewer who insists on an incorrect technical point? That would probably be a good question for the Workplace site.)