Obviously, these chips are not aimed at consumers, even if you have the bread. They're designed to accept up to 24 DDR4 RAM modules in a dual-CPU configuration (up to 3TB), or work with Intel's "persistent memory" Optane chips, which act as both RAM and a very fast SSD. With 24 of those, you could have a ridiculous 12TB of system memory, enough to run a pretty complicated simulation.
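The capacity figures above imply what size modules are involved. A quick back-of-the-envelope check (the per-module sizes are inferred from the quoted totals, not confirmed by Intel):

```python
# Back-of-the-envelope check of the memory totals quoted above.
# Module sizes (128GB DDR4, 512GB Optane) are inferred from the
# 3TB and 12TB totals, not confirmed specs.
SLOTS = 24  # DIMM slots in a dual-CPU configuration

ddr4_module_gb = 128
optane_module_gb = 512

ddr4_total_tb = SLOTS * ddr4_module_gb / 1024
optane_total_tb = SLOTS * optane_module_gb / 1024

print(f"DDR4:   {ddr4_total_tb:.0f} TB")    # 3 TB
print(f"Optane: {optane_total_tb:.0f} TB")  # 12 TB
```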

Intel is likely using multiple dies for economic reasons. The more transistors you try to squeeze onto a single die, the more likely it is to contain a defect. By manufacturing smaller dies and joining them together, Intel can get higher yields and, ergo, more profit.
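To make that yield argument concrete, here's a minimal sketch using the standard Poisson yield model. The defect density and die area below are illustrative numbers, not Intel's actual figures:

```python
import math

def poisson_yield(defect_density_per_cm2, die_area_cm2):
    """Fraction of dies expected to be defect-free (Poisson yield model)."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D = 0.2        # illustrative defect density, defects per cm^2
big_die = 7.0  # one large monolithic die, cm^2

# Monolithic: the entire 7 cm^2 die must be defect-free.
y_mono = poisson_yield(D, big_die)

# Two-die package: each 3.5 cm^2 die is tested separately, and only
# known-good dies are paired, so chip yield tracks the small-die yield.
y_split = poisson_yield(D, big_die / 2)

print(f"monolithic yield: {y_mono:.1%}")  # ~24.7%
print(f"two-die yield:    {y_split:.1%}")  # ~49.7%
```

The exponential form is why splitting helps: halving the die area takes the square root of the failure exponent, so the smaller dies are disproportionately more likely to come out clean.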

Intel claims performance gains of 20 percent over its current Xeon CPUs, and up to 3.4 times the performance of AMD's EPYC server chips for certain tasks. Interestingly, however, Intel has yet to say whether the chips are hyperthreaded like the current generation. Its chips have been plagued by security problems caused in part by hyperthreading, but the new models were redesigned to avoid the Spectre and Meltdown issues, Intel said.

The chips still use 14-nanometer tech, as Intel's 10-nanometer products aren't yet ready for prime time. Intel's engineers have jumped through a lot of hoops to keep the company's CPUs competitive by increasing efficiency and adding new types of instructions. While Intel is still making a lot of money, it's falling behind its main rival, AMD. Tomorrow, AMD is expected to announce new EPYC server chips based on its upcoming 7-nanometer tech, including a 64-core model.