
1 - How feasible is it that the chip's manufacturer can predict the output of this PRNG, given that it has passed the tests of the people using the RdRand instruction in kernels?

As nightcracker correctly stated, any strong cryptographic PRNG will produce a stream of numbers that pass statistical tests.

However, the manufacturer has some constraints:

- Independent tests will be performed on multiple processors set up in an identical manner, so each processor must produce a different output.

- Any given processor must produce a different output stream on each power-up.

A simple scheme would be to use the processor serial number as an input to the PRNG to ensure different processors had different outputs, and have an undisclosed non-volatile register (e.g. a power-on counter) to ensure each boot was different.
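A minimal sketch of such a scheme, using HMAC-SHA256 in counter mode as the underlying PRNG (all names and parameters here are illustrative assumptions, not Intel's actual design):

```python
import hashlib
import hmac

# Hypothetical manufacturer-wide secret baked into every chip.
GLOBAL_SECRET = b"manufacturer-wide secret"

def backdoored_stream(serial: int, boot_count: int, n_blocks: int) -> bytes:
    """Generate n_blocks * 32 bytes of PRNG output, keyed by the global
    secret and seeded with the processor serial and power-on counter."""
    seed = serial.to_bytes(8, "big") + boot_count.to_bytes(4, "big")
    out = b""
    for counter in range(n_blocks):
        out += hmac.new(GLOBAL_SECRET, seed + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
    return out

# Each chip and each boot produces a different, statistically
# random-looking stream -- yet anyone holding GLOBAL_SECRET can
# regenerate it exactly.
```

The stream passes statistical tests (it is just HMAC output), satisfies both constraints above, and is fully predictable to the secret holder.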

A scheme such as this would probably resist any attempt at analysis using only its outputs: a standard cryptographic PRNG with a global secret (common across all processors), the processor ID, and the power-on counter as inputs. On observing a new user, a large-scale surveillance infrastructure would only have to search a space of a few million possible processor IDs, times a few hundred or thousand possible boot counts. This could all easily be precomputed, so it would be entirely practical to hook into a surveillance infrastructure with today's computing power. (Once a user's processor ID and boot count have been identified, it is of course much easier to keep track of them than to repeat the full search each time.)
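To illustrate how tractable that search is, here is a toy brute force over the same kind of keyed construction, shrunk so it finishes instantly (the secret and the serial/boot-count ranges are invented for the example):

```python
import hashlib
import hmac

SECRET = b"manufacturer-wide secret"  # known to the attacker in this scenario

def prng_block(serial: int, boot: int) -> bytes:
    """One 32-byte output block, keyed by the global secret."""
    seed = serial.to_bytes(8, "big") + boot.to_bytes(4, "big")
    return hmac.new(SECRET, seed, hashlib.sha256).digest()

def recover_state(observed: bytes, max_serial: int, max_boot: int):
    """Enumerate (processor ID, boot count) until the output matches."""
    for serial in range(max_serial):
        for boot in range(max_boot):
            if prng_block(serial, boot) == observed:
                return serial, boot
    return None

# A few million IDs times a few thousand boot counts is on the order of
# 10^10 HMAC evaluations -- easily precomputed at scale.
```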

However, the odds are that Intel aren't betting their international sales solely on no competing fab having the inclination to open up their chips and check for this (ARM, for example, would have a strong incentive to identify such foul play). Update: but they could be compelled by the government to put such a back door in, whether or not it is in their commercial interests. Update 2: they, or their fab, could also use stealthy dopant-level modifications to make the change extremely hard to detect, even by someone with Intel-like capabilities (see the first case study, Chapter 3, in the referenced paper).

I'm not an expert in microprocessor hardware, so I can't comment on techniques that might introduce biases or other predictable features without being detected. One possible backdoor would be to severely constrain the next output requested from RdRand, but only after the chip performs a computation such as would be needed to verify the authenticity of a certificate signed by one of a small set of long-lived root CAs (perhaps China's CNNIC would be a useful candidate?).
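As a software-level sketch only, such a trigger-constrained generator might behave as follows; the trigger pattern, the 16-bit output space, and all names are invented for illustration and have no basis in real hardware:

```python
import hashlib
import os

# Hypothetical trigger: a stand-in for "the chip just performed a
# signature check against a specific root CA's key".
TRIGGER = b"root-CA-signature-check"

def weak_block(secret16: int) -> bytes:
    """An output that looks random but is drawn from only 2**16 values."""
    return hashlib.sha256(secret16.to_bytes(2, "big")).digest()

class TriggeredRNG:
    def __init__(self):
        self.triggered = False

    def observe(self, computation: bytes):
        # The hardware silently watches for the trigger computation.
        if computation == TRIGGER:
            self.triggered = True

    def rdrand(self) -> bytes:
        if self.triggered:
            # The next output only *appears* random: an observer who
            # knows the scheme can enumerate all 65536 possibilities.
            self.triggered = False
            return weak_block(int.from_bytes(os.urandom(2), "big"))
        return os.urandom(32)  # normal, full-entropy behaviour
```

Because the constrained output is still a hash of a secret value, it would pass any black-box statistical test while remaining brute-forceable by the backdoor's author.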

2 - If the chip's manufacturer can predict the output of the PRNG to some extent, how feasible is it that they can decrypt any https traffic between two systems using their chips? (Or anything else requiring randomness, https is only an example.)

Being able to predict that the output of RdRand lies within a searchable subset of possible outputs doesn't by itself mean an attacker can break the system - it depends on how that output is used. For example, if the consuming application uses it as just one optional input to its entropy pool, then an attacker who can predict that input leaves the user no better off than without RdRand, but equally no worse off.
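A toy illustration of that pool argument, assuming a simple hash-based mixer (this is not the real Linux pool construction, just a sketch of the principle):

```python
import hashlib
import os

def mix_pool(*sources: bytes) -> bytes:
    """Hash several entropy sources together into one pool output."""
    h = hashlib.sha256()
    for source in sources:
        h.update(source)
    return h.digest()

predicted_rdrand = b"\x00" * 32   # assume the attacker knows this fully
other_entropy = os.urandom(32)    # interrupt timings, disk seeks, etc.
pool_output = mix_pool(predicted_rdrand, other_entropy)
# Even knowing one input completely, the attacker must still guess
# other_entropy to predict pool_output.
```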

CodesInChaos points out that Linux has used RdRand directly at times; Intel are also encouraging direct use of the instruction. So it is not unreasonable to imagine a browser or other TLS client that uses output from RdRand as its sole source of entropy. If this is the case then an observer who can predict the output from RdRand can indeed compromise your security.

Most cryptosystems fail if the entropy input can be predicted, including SSL/TLS.

To pick a couple of examples in use by popular websites from the many possible TLS key exchange options: