As I have mentioned before, Intel and the foundries approach process development from different starting points. Intel is committed to Moore's Law, reducing transistor cost by increasing process density in a near-linear fashion. The foundries, on the other hand, work closely with partners and customers to determine the power, performance, and area (PPA) goals of the next process node within a specific time to market (TTM). As we all know, Apple has a very specific TTM (iTTM) which will always be the priority.

14/16nm SoCs are already in production at Intel, Samsung, GlobalFoundries, and TSMC, with products due out in the second half of 2015. This will be the first time we really get an Apple-to-Apple, IDM-versus-foundry comparison with the Intel Cherry Trail and Apple A9 SoCs, and I'm truly excited to see the first teardown. Considering the Apple A8 packed 2B+ transistors onto an 89mm² (8.47mm × 10.5mm) die, one can only imagine how many transistors the 14nm SoCs will have.
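For anyone who wants to sanity-check those A8 numbers, here is a quick back-of-the-envelope sketch. The die dimensions and the "2B+" transistor count come from the figures quoted above; the density result is just arithmetic, not a published spec.

```python
# Back-of-the-envelope check of the Apple A8 figures quoted above:
# 2B+ transistors on an 8.47mm x 10.5mm die.
width_mm, height_mm = 8.47, 10.5
transistors = 2.0e9  # "2B+" means this is a lower bound

area_mm2 = width_mm * height_mm   # ~88.9 mm^2, rounds to the quoted ~89mm^2
density = transistors / area_mm2  # transistors per mm^2

print(f"Die area: {area_mm2:.1f} mm^2")              # Die area: 88.9 mm^2
print(f"Density:  {density / 1e6:.1f}M+ per mm^2")   # Density:  22.5M+ per mm^2
```

So the A8 works out to at least ~22.5M transistors per mm², which is a useful baseline when the 14nm teardowns start showing up.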

Now that 14/16nm is in production, we are looking to 10nm for our next cost reduction. I really am glad we are all calling it 10nm, but as you know, not all 10nm processes are created equal (Who Will Lead at 10nm?). The 10nm process design kits (PDKs) are just now hitting the streets, so the design challenges have only begun. The foundries are targeting the end of 2015 for the first customer tape-outs, which generally means production one year later. My guess is that you will see products with 10nm silicon in the second half of 2017, which means we will again be on 14/16nm for 2016. Improved versions, of course: maybe 16nm FF++++ or 14nm UUULP?

An Intel executive recently predicted 10nm would be available in 2017 in a candid interview with GulfNews.com, out of Dubai of all places:

“We have been consistently pursuing Moore’s Law and this has been the core of our innovation for the last 40 years. The 10nm chips are expected to be launched early 2017,” said Taha Khalifa, general manager for Intel in the Middle East and North Africa region.

Mr. Khalifa is a 24-year Intel veteran, so he should certainly know. Intel has its famous tick-tock model, where every architecture change is followed by a die shrink: a tick is a die shrink and a tock is a new architecture. Broadwell was a 14nm tick, Skylake will be a 14nm tock, and Cannonlake will be a 10nm tick.

Back in the day, we used to judge microprocessors by clock speed (megahertz); it was a badge of honor, really. I remember paying more for a PC with a 40MHz AMD CPU than an Intel 33MHz CPU would have cost. I even shamed my brother, who had just bought a 33MHz version. Computers really were muscle cars for nerds back then. Recently an SoC friend of mine shamed me for commenting that the A8 ONLY ran at 1.4GHz versus 2GHz. What can I say, old habits die hard. With SoCs, the badge of honor is getting the best SYSTEM LEVEL performance, which now, thankfully, includes battery life.