The question of when Apple will switch to building its own custom ARM CPU cores for its software ecosystem rather than using Intel and x86 comes up on a regular basis. On ET, we first covered the topic in 2011, and I’ve hit it several times in the intervening years. My answer has typically been some flavor of “theoretically yes, but practically (and in terms of the near future), no.”

A recent AppleInsider article does a good job of rounding up the reasons why Apple really might take this step soon. We’ve previously heard rumors that the company could launch such a product in 2020, and while rumors are not the same thing as a definite launch date, the piece is solid. It makes a reasonable case for why Apple may indeed take this step and references various real-world events, including Intel’s difficulty moving past 14nm, Apple’s design efforts around GPUs and CPUs, the increasing complexity and capability of its SoCs, and the fact that Apple already builds its own secondary chips, like the T2 controller.

All of these points are true, and it’s why I think the 2020 rumor deserves to be taken more seriously than the dates and ideas that we used to hear. But there is still a major piece of this puzzle that doesn’t get talked about often enough. Apple can introduce an ARM core running full macOS, but if it wants to replace x86 in its highest-end iMac Pro and Mac Pro products, it’s going to have to take on some significant design challenges that it hasn’t faced before.

Apple has built CPUs, yes. But it’s never tried to build, say, a 28-32-core ARM processor in a multi-socket system. To the best of my knowledge, Apple has never built a server-class chipset or designed a CPU socket for its own product families. During E3, I attended an AMD session on the evolution of its AM4 socket, and on how carefully AMD had to work to fit a 7nm chiplet-based product into a socket that initially shipped with four identical CPU cores on a 28nm process node. Even if Apple intends to create a platform without upgradable CPUs, it will need to design its own motherboards. The socket design decisions it makes will affect how quickly it can iterate the platform and how much rework is required later. Achievable? Absolutely. But not something one does overnight.

Using chiplets makes some aspects of CPU design easier, especially on leading-edge nodes, but it doesn’t simplify everything. Chiplets require interconnects, like AMD’s Infinity Fabric, and with no formal chiplet interconnect standards yet in existence, Apple would need to design its own solution. There’s a lot of custom IP work to be done here if Apple wants to bring a part to market that replaces what Intel offers in the Mac Pro.

One simple solution is for Apple to launch new ARM chips in laptops but keep desktop systems on Intel for the time being. In theory, this works fine, provided the ecosystem is ready for it and Apple can deliver appropriate binaries for applications. Application support and user expectations could be tricky to manage here, but it’s doable. The problems for Apple, in this case, are making sure that its customers understand any compatibility issues that might exist and that the new ARM-based products are clearly differentiated from the old x86 ones.

Is There a Reason for Apple Not to Build Its Own Mac CPUs?

There is, in fact, a reason for Apple not to build its own CPU cores for the Mac. A non-trivial amount of work must be done to launch a laptop/desktop processor line. Developing interconnects, chiplets, chipsets, and motherboards from the ground up is more difficult and expensive than working with someone else’s predefined product standards and manufacturing. There’s an awful lot of work that Intel does on Core that Apple doesn’t have to do.

The question of whether it makes sense for Apple to move away from Intel CPUs is therefore partially predicated on how much money Apple thinks it can make by doing so. Obviously, capturing the value of the microprocessor can sweeten the cost structure, but capturing the value also means capturing the cost. When Apple was a non-x86 shop, its market share was significantly smaller than it is today, and the company gained some share immediately after switching to x86. It is impossible to tell whether it gained that share because its software compatibility was now much improved or because many of its systems, especially laptops, were now far more competitive with their Windows counterparts.

Apple has to consider that it will lose at least some customers if it moves away from x86 compatibility again, either because of software compatibility or because its new chips may not offer a performance improvement in specific workloads relative to Intel. The most valuable CPUs — the ones powering the Mac Pro — are also the most expensive to design and build. If Apple doesn’t think it can command the price premiums that Xeon does, it might hold off on introducing CPUs in these segments until it believes it can. And unlike in 2005, when IBM couldn’t produce a G5 that fit into a laptop, Apple isn’t nearly so pinched in any of its market segments today.

I think Apple’s CPUs have evolved enough to make a jump towards ARM and away from x86 plausible in a way it wasn’t back in 2014, but there are still significant questions to be answered about where Apple would sell the part and whether it would attempt to replace x86 in all products or only in specific mobile SKUs. And, honestly, I think there’s a version of this story in which Apple ultimately continues to work with Intel or AMD long into the future, having decided to deploy its own ARM IP strategically across the Mac line, or in secondary positions similar to how the T2 chip is used.
