Every company has secretive and not-so-secretive projects. For example, ATI Eyefinity multi-display technology was a hidden part of the silicon in the Evergreen series, with only two executives knowing about it. Three years later, we can now say that Eyefinity pushed both Intel and NVIDIA where they did not expect to go, and AMD reaped the rewards from that program.

In NVIDIA’s case, the story is more complicated. Last year saw the official announcement of “Project Denver”, its 64-bit architecture fully compliant with the ARMv7/v8 ISAs (Instruction Set Architectures). The announcement had people like John Carmack singing its praises. However, the history of Project Denver goes back to long before ARM was at play. Originally, Project Denver was an intelligent RISC architecture capable of executing the ARM, x86, and MIPS ISAs. The focus, naturally, was on getting x86 done right, and this is where the ex-Stexar team lost years in development. The first product based on the Project Denver core was to be a dual-core part paired with a cluster of Fermi GPU cores, scheduled to arrive on the market in 2010 and fight Intel’s Arrandale, itself originally planned as a dual-core x86 part with a Larrabee GPU inside.

After it became clear that the future lies in low-power processors, NVIDIA’s Colorado-based team abandoned its work on the x86 instruction set architecture and pushed through with “reinventing” the Project Denver core to work with the ARM ISA, most notably to produce a 64-bit-capable, ARMv8-compatible part. The first product based on Project Denver should be the T50, i.e. “Tegra 5”, combining Project Denver cores with a Maxwell GPU, built on a 20nm-SLP process at the Samsung Electronics fab in Austin, Texas. Roughly six months ago, NVIDIA taped out its first test wafers at the Austin fab. Whether the company will stay with TSMC or go with a dual-foundry approach is not the subject of this story.

And this brings us to Project Boulder, for which recruiters open interviews with the question “do you want to join the ultra-successful Tegra team at NVIDIA?”, only for those same engineers to find out that they are headed to a project inside the GeForce business unit that won’t have much relation to the Tegra team at all. In a way, the company is doing the right thing by keeping the focus on low-power architectures in the Tegra team and high performance in the GPU team.

According to our sources in the know, Project Boulder represents NVIDIA’s “claim to fame” in the server space. This high-performance part doesn’t care as much about low power as it cares about keeping its compute units fed. NVIDIA doesn’t want its Tesla and Quadro parts to be bundled with Intel Xeon or AMD Opteron processors, as that reduces the revenue the company receives and exposes it to risks such as Xeon Phi (Larrabee redux) and FirePro S (server versions are finally available).

In a nutshell, we’re probably going to see an 8-16 core SoC with a high-bandwidth interconnect, utilizing high-bandwidth memory. Given the timeframe of its arrival (2014), we’d wager it’s too early for GDDR6, so the part will probably have to rely on DDR4, expanding its memory space with parts such as the ioFX (NVIDIA and Fusion-io recently started working together on extending memory addressing directly to GPUs/SoCs). Details about the actual Project Boulder silicon are scarce right now, and we would rather wait and see than throw out guesstimates.

At the same time, a similar story is taking place at Apple, which wants to replace Intel products in its consumer notebook and desktop lines. This won’t be a possibility before iOS matures to the level of replacing Mac OS (our guesstimate would be iOS 8 to iOS 9).
