There is no actual evidence of any backdoor, and it would be nigh impossible to check anyway, save by "opening" the case and inspecting all the transistors one by one with an electron microscope -- there are billions of them, so this is not feasible. Backdoors of that magnitude tend to be revealed by disgruntled insiders, not by external analysis.

However, we can discuss the physical possibility. The articles you link to are a confused mess of poorly used terminology, resting on misunderstandings of roughly three concepts. These are:

microcode updates;

reflashable firmware;

hardware random number generators.

Microcode is instructions for a CPU-within-the-CPU. This relates to the traditional way CPUs were implemented: as a number of elementary blocks which, for each instruction, are activated or not; the microcode describes these "activation sequences". Microcode can be thought of as the software implementation of an emulator which interprets the opcodes in RAM that we think of as "machine language". However, in modern CPUs, most opcodes are "hardwired", and microcode is used only for complex operations such as fsin (which computes the sine of a floating-point operand on x86 CPUs). This answer contains a much more detailed description of what microcode is and what it can do.
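To make the "activation sequences" idea concrete, here is a toy sketch in Python. The opcode names and micro-op names are purely illustrative inventions, not real Intel/AMD microcode:

```python
# Toy model: each machine instruction expands into a sequence of
# micro-operations, the "activation sequence" for elementary blocks.
# A complex opcode like FSIN needs many steps; ADD needs few.

MICROCODE = {
    "ADD":  ["fetch_operands", "alu_add", "writeback"],
    "FSIN": ["fetch_operand", "range_reduce", "poly_eval", "round", "writeback"],
}

def execute(opcode, trace):
    """Run the micro-op sequence for one machine instruction."""
    for micro_op in MICROCODE[opcode]:
        trace.append(micro_op)  # stand-in for activating a hardware block
    return trace

trace = execute("FSIN", [])
```

The point of the model is only that changing the table changes the CPU's behaviour without touching the "hardwired" part -- which is exactly what a microcode update does.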

In the CPU, the microcode is stored in ROM but is copied at boot into a small static RAM element (still within the CPU), which is faster but also updatable. Thus, upon boot, the OS can upload new microcode into the CPU, and it will be used until the next poweroff. Operating systems do this routinely. The microcode format is very specific to each version of each CPU; it is not standard, it is not documented, and it is protected by unspecified cryptographic algorithms, so that a microcode update is, from the point of view of the OS, an opaque blob straight from Intel or AMD.
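As an aside, on Linux/x86 you can see which microcode revision is currently loaded: it is reported in /proc/cpuinfo. A minimal sketch of extracting it, run here on a hard-coded sample snippet so it does not depend on the platform:

```python
# On Linux/x86, /proc/cpuinfo contains a line "microcode : 0x...".
# Sample snippet standing in for the real file:
SAMPLE_CPUINFO = """\
processor : 0
vendor_id : GenuineIntel
microcode : 0xf0
cpu MHz   : 2400.000
"""

def microcode_revision(cpuinfo_text):
    """Return the microcode revision as an int, or None if absent."""
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "microcode":
            return int(value.strip(), 16)
    return None

rev = microcode_revision(SAMPLE_CPUINFO)  # 0xf0
```

On a real machine you would pass `open("/proc/cpuinfo").read()` instead of the sample. Note that this only tells you the revision number -- the blob's contents remain opaque.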

Microcode opens the possibility of a hidden backdoor. The CPU, by itself, has no network; it runs within the context of the current RAM and hardware, and the OS orchestrates all communications. If a backdoor is planted in the machine, it still has to be exploitable from the outside, otherwise it is useless. The story here hinges on the opaqueness of the blob: supposedly, the NSA, working with Intel, could craft some specific microcode update which, when uploaded into the CPU, makes it scan the RAM to explore the OS kernel structures and alter them, adding a more common software-based backdoor. This microcode update would then be handed over to Microsoft, to be included in the next Windows update. In that case, Microsoft would be the victim, their OS unwillingly pushing the backdoor code into the CPU.

Such activity is risky, because the runtime manipulation of OS code and structures could be detected. It suffices that some amateur somewhere plays with DMA to dump physical RAM into some other device and then uses a debugger to explore the live OS code. Spy agencies, as a rule, abhor risk. Thus, while theoretically possible, I deem the planting of a microcode-powered backdoor rather improbable: not that it cannot be done (it can), but it seems exceedingly hard to do properly. In my opinion, the NSA would resort to such games only if easier paths had been closed, and it turns out that they have not. See below.

Firmware updates are actually out of the scope of the question. This is not about the CPU or Intel; this is about pushing malicious code into the Flash chips of the various hardware elements which have such chips -- namely, many peripherals, but not the CPU itself. A demonstration called Rakshasa was made and gained some fame, although its implications are rarely well understood or explained. In particular, it does not take any NSA-like political power to build such a virus; quite the contrary: pushing code into reflashable firmware is open to anybody who can get their own malicious code to run on the machine with sufficient privileges. In that matter, what the NSA or Intel could do, some schmuck programmer in Moldavia could do, too.

(I apologize if you are a Moldavian developer and felt offended.)

The yummy part, though, is in the possibility of rigging the PRNG. All crypto-related activities, in particular key generation, ultimately come down to the production of pseudo-randomness which should be indistinguishable from pure randomness. Operating systems do that by gathering hardware events, such as the precise timing (down to the nanosecond) of the arrival of hardware interrupts. Hardware interrupts occur for every key stroke, mouse movement, network packet, and so on. There is an assumed "jitter" in the exact timing of these occurrences which (that's the crucial point) cannot be measured from outside the computer with enough precision, so it is "unknown data" (for the attacker) which can be used to power the PRNG.
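A heavily simplified sketch of this gathering process -- real kernels do it in interrupt handlers, not a user-space loop, but the principle is the same: fold hard-to-predict timestamps into a pool, then derive a seed from the pool with a hash function:

```python
import hashlib
import time

# Toy entropy pool: accumulate nanosecond timestamps of "events".
pool = hashlib.sha256()

def on_hardware_event():
    # Only the low-order bits of the timestamp carry the "jitter"
    # an outside attacker cannot observe; hashing everything is the
    # easy way to keep whatever unpredictability is there.
    pool.update(time.monotonic_ns().to_bytes(8, "little"))

for _ in range(1000):
    on_hardware_event()

seed = pool.digest()  # 32-byte seed for a deterministic PRNG
```

Note the caveat the next paragraph raises: if the "events" are reproducible (as on a freshly booted VM with fictitious hardware), the pool contents are guessable no matter how good the hash is.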

Gathering hardware events like that is challenging in some contexts, especially virtual machines in servers, fresh upon boot (since they only have fictitious hardware, they tend to be much more reproducible, leading to a guessable PRNG state). To make things better, CPU vendors now provide hardware RNGs which include some specific, electrically unstable circuitry (such as a reverse-biased Zener diode): such hardware is integrated in the CPU, accessible from VMs, and produces "true randomness" at an acceptable rate.

Internally, the raw events from the randomness-generating circuitry are first post-processed in a specific circuit which turns the physical measurements into a sequence of bits that can be handed to the OS (which will then use hash functions to obtain a seed for its PRNG). It is conceivable that this post-processing stage be modified so that:

the hardware RNG will look random, and be indistinguishable from true randomness;

unless you know some secret "key", also embedded in the circuit, which would allow an attacker to unravel the system and rebuild the random events;

and even if the circuit were completely reverse-engineered, it could still look like a perfectly honest mistake, not a deliberate rigging.
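For contrast, an honest post-processing stage can be quite simple. One classic conditioning technique is von Neumann debiasing, which turns biased (but independent) raw bits into unbiased output bits; a toy version:

```python
# Von Neumann debiasing: read raw bits in pairs; emit the first bit
# of each unequal pair (01 -> 0, 10 -> 1), discard equal pairs.
# If raw bits are independent with a fixed bias p, then P(01) = P(10),
# so the output is unbiased -- at the cost of throwing bits away.

def von_neumann(raw_bits):
    out = []
    for a, b in zip(raw_bits[::2], raw_bits[1::2]):
        if a != b:          # discard 00 and 11 pairs
            out.append(a)   # keep the first bit of 01 / 10
    return out
```

Real hardware conditioners are more elaborate (typically hash- or cipher-based), but the point stands: the stage between the noisy circuit and the OS is exactly where a rigging of the kind described above would hide.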

So this would give a "backdoor" which leaves no trace (no need for microcode updates pushed through OS updates), is not detectable from the outside, and even comes with plausible deniability. A practical implementation would be a kind of hash function which "unfortunately" includes a carry-propagation bug reducing the internal state to something like 50 bits or so, thus within range of exhaustive search. The reference implementation of bcrypt had such a bug, and nobody saw it for several years, despite the implementation being software and open source (!), and yet nobody suspected that the bcrypt author did it on purpose.
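A toy illustration of the principle (this is an invented example, not the actual bcrypt bug): a conditioner whose output looks perfectly random, while a single masking "mistake" silently collapses the internal state to 50 bits, making it exhaustively searchable by anyone who knows the flaw:

```python
import hashlib

STATE_BITS = 50  # the "unfortunate" effective state size

def buggy_conditioner(raw):
    """Condition 16 raw bytes into a 32-byte output.

    Intended: mix all 128 input bits into the state.
    "Bug": the mask drops everything above bit 49, so only 2**50
    distinct internal states exist -- within reach of brute force,
    yet the SHA-256 output still passes every statistical test.
    """
    state = int.from_bytes(raw, "big") & ((1 << STATE_BITS) - 1)
    return hashlib.sha256(state.to_bytes(7, "big")).digest()

# Two raw inputs differing only in the discarded high bits collide:
a = ((1 << 100) | 12345).to_bytes(16, "big")
b = ((7 << 100) | 12345).to_bytes(16, "big")
assert buggy_conditioner(a) == buggy_conditioner(b)
```

Statistical testing of the output cannot reveal this: SHA-256 of a 50-bit counter is indistinguishable from randomness unless you enumerate the 2**50 states, which only the party who planted (or found) the flaw would think to do.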

If I were the NSA or some other similar agency, and I wanted a backdoor in every computer on the planet, I would do it through the hardware RNG. This is the low-risk path.