For years there have been rumors that NVIDIA has a top-secret x86 processor project, and last November an NVIDIA exec all but confirmed that the company is looking at making an x86 chip at some point. That's why today's processor announcement from NVIDIA was both surprising and unsurprising.

No, NVIDIA didn't finally take the wraps off its x86 project—assuming that it hasn't been cancelled, that's still a secret. But the chipmaker did unveil Project Denver, a desktop-caliber ARM processor core that's aimed squarely at servers and workstations, and will run the ARM port of Windows 8. This is NVIDIA's first attempt at a real general-purpose microprocessor design that will compete directly with Intel's desktop and server parts.

The company has offered nothing in the way of architectural details, saying only that the project exists and that a team of crack CPU architects has been working on it in secret for some time. Indeed, NVIDIA CEO Jen-Hsun Huang's very brief but dramatic announcement of Denver raised more questions than it answered. However, I think I have a good idea of exactly what the first Denver-based chips will look like.

But before I try to put the pieces together, let me lay them all out on the table by walking back through the relevant section of the keynote.

Supercomputers, ARM, Windows 8

Jen-Hsun Huang set up the Denver announcement in a very peculiar way. After spending most of the event talking about mobile devices, he suddenly put up a slide about supercomputers. I immediately flagged this as a change of topic to Tesla, but alarm bells started going off. Tesla is not a topic for CES, the Consumer Electronics Show. In fact, supercomputing, a.k.a. high-performance computing (HPC), is not a topic for a CES presentation at all. This was the moment where I heard the record scratch and thought, "what's happening here?"

Then Huang started talking about ARM and the power of the ARM ecosystem, with that supercomputing slide up the whole time. The press conference had now gone from curious to downright bizarre. And just when it couldn't get any weirder, he put a Bloomberg quote about an ARM port of Windows up on the screen and all but confirmed the rumor by saying he was headed over to the Microsoft announcement shortly.

So he started out talking about supercomputers and servers, then he jumped to ARM, and then to Windows 8. I already had whiplash when he dropped the Project Denver bombshell.

After it sunk in that NVIDIA will produce a high-performance, desktop- and server-caliber, general-purpose microprocessor core, and that this core will power PCs running Windows, most of the picture clicked into place. As of today, "Wintel" is officially dead as a relevant idea. Sure, not much will change in the x86-based Windows PC market this year, but Wintel is now a buzzword with nothing more than historical significance, no longer worth using or thinking with.

As I said, most of the picture is now complete, but there are still some pieces of this puzzle left on the table.

Missing pieces and TV

What still nags me about Denver is that ISSCC, not CES, is the place where new high-performance processor architectures are announced. This is especially true when those processors are aimed at servers and supercomputers. Announcing such a beast at CES is just strange.

The primary way this timing makes any sense is that NVIDIA wanted to tie the unveiling to the Windows-on-ARM announcement, so they couldn't wait for ISSCC in February. They had to announce before Microsoft did, so they did it at CES with only a few minutes to spare. I'm mostly happy with this explanation, but only mostly.

Maybe it's just the CE-heavy show environment, but I'm strongly inclined to believe that the Denver CPU is going to make its way into televisions, as well. HTML5 and Flash on an HDTV take real horsepower, both CPU and GPU—this is a job for a multicore, out-of-order processor. That's why Intel is going to be putting real x86 silicon in TVs, and the TV is going to have enough of an appetite for those clock cycles that this move will make sense.

Tegra 2 will be fantastic for phones and tablets, especially if you're looking for a phone that can double as a portable game console. But an Internet-connected TV can use even more horsepower than Tegra 2 can provide. That's where Denver comes in: it could be NVIDIA's answer to Intel's CE-oriented SoC line.

Then there's Microsoft's Windows-on-ARM port. Microsoft clearly wants a piece of the Internet TV action; capturing this convergence moment was the whole point of the original Xbox effort within the company. But for all that the Xbox 360 does, it's still a game console, and Kinect takes it even further in that direction. One of the upcoming Windows-on-ARM flavors could be aimed at the TV, and it could very well run on Denver. Such a combination would take on Intel's Smart TV effort directly.

Supercomputers, desktop gaming, and what Denver will look like

Ironically, despite the project's debut at CES, the consumer electronics piece of the Denver picture is the murkiest. When it comes to HPC and desktop gaming, things are a lot clearer, right down to what the first Denver-based chips will look like when they launch.

A few months back, I wrote the following about AMD's Fusion project. As you read it, substitute "ARM" for "x86":

It may turn out to be the case that few workloads really benefit from more than four cores, and most of those that do will run better on GPU hardware. If this happens, then why not put those four CPU cores on a high-end GPU? In other words, in a world where Moore's Law continues to drive transistor counts up but where exceeding four CPU cores offers rapidly diminishing returns vs. a four-core + GPU combination, the best arrangement would seem to be one that looks essentially like a large GPU with four CPU cores attached to it.

Thinking about the ultimate x86 gaming system of 2015, a processor that combines four general-purpose CPU cores with a massive amount of GPU vector hardware and cache sounds ideal. With this arrangement, the relative amount of die area that goes to those four CPU cores can shrink as the (infinitely scalable) cache and vector hardware grow with transistor counts, to the point where you ultimately end up with a "GPU" that has four little CPU cores embedded in it. Of course, you wouldn't be able to physically turn on all that hardware at once, so dynamic power optimization would be key to making such a part work. But in terms of cost, efficiency, and raw performance, it would probably beat the pants off of a 12-core x86 chip + discrete GPU combination for games and most of the other tasks people care about.

Given that the Denver core is designed to be integrated onto the same die as a GPU, and that NVIDIA is pitching it as a server and supercomputer part, it seems likely that the above describes the route that they're taking with it.

The first Denver-based products will probably consist of two or four high-performance ARM cores embedded in a much larger pool of GPU vector hardware. In subsequent product generations, the core count might stay at four (or perhaps rise to six) while NVIDIA scales the vector and cache hardware out to the horizon.

To make such a chip live up to its full potential, NVIDIA will have to do a lot more than just design a top-notch ARM core and a top-notch GPU. The company will also have to link those parts together in an optimal way—this is not an easy thing to do, and it has a huge impact on overall performance. Sandy Bridge's graphics performance is a testament to what successful die-level integration can do; the Sandy Bridge GPU itself is no great shakes, but the way that Intel has clocked it and linked it to the rest of the die makes all the difference.

If NVIDIA can execute in all three areas (CPU design, GPU design, and SoC-level system design), then it could make one killer gaming and supercomputing chip. But that's a very tall order, and a lot could go wrong. Right now, GPU execution is the only area where NVIDIA's track record warrants confidence; on the CPU design and system integration fronts, the company is in uncharted territory. (The Tegra SoC part of NVIDIA's record isn't as relevant as you might think, because Denver is a different kettle of fish entirely.)

We'll keep you posted as more details emerge. I'm currently trying to line up a deep-dive briefing on Denver's core, so stay tuned.