ARM has unveiled its next generation of processor designs, a new microarchitecture named Dynamiq. Chips built using Dynamiq will be easier to configure, says ARM, allowing manufacturers to combine a wider variety of CPU cores. This should allow for more powerful systems-on-chip, but also processors that better serve computing tasks of the future, from artificial intelligence to self-driving cars.

“It’s a step change in how we build CPUs and the way we stitch CPUs together,” ARM product marketing head John Ronco told The Verge. “It’ll be in smartphones and tablets, for sure, but also automotive networking and a whole range of other embedded devices. Anywhere a Cortex processor is used today, Dynamiq is going to be the next step forward.”

More flexibility in chip design means faster, cheaper devices

Dynamiq builds on ARM's existing "big.LITTLE" approach, which pairs a cluster of powerful "big" processors with a set of power-sipping "little" ones. Dynamiq adds more variety to this system, supporting cores that aren't just big or little, but anywhere in between, with the ability to connect up to eight different CPUs of any configuration — an approach to chip design known as heterogeneous computing.
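To make the contrast concrete, here is a minimal, purely illustrative Python sketch of the difference between big.LITTLE's two-tier split and a Dynamiq-style cluster of up to eight mixed cores. All names and numbers here are hypothetical; real configurations are fixed in silicon by the chip designer, not written in software:

```python
from dataclasses import dataclass

MAX_CORES = 8  # Dynamiq supports up to eight CPUs per cluster


@dataclass
class Core:
    name: str             # hypothetical label, e.g. "big" or "medium"
    relative_perf: float  # not just big or little: anywhere in between


def make_cluster(cores):
    """Validate a hypothetical cluster: any mix of cores, capped at eight."""
    if len(cores) > MAX_CORES:
        raise ValueError(f"a cluster holds at most {MAX_CORES} cores")
    return cores


# big.LITTLE-style: only two tiers of core.
old_style = make_cluster([Core("big", 1.0)] * 4 + [Core("little", 0.3)] * 4)

# Dynamiq-style: mid-sized cores can sit alongside big and little ones.
cluster = make_cluster([
    Core("big", 1.0),
    Core("medium", 0.6),
    Core("little", 0.3),
    Core("little", 0.3),
])
print(len(cluster))  # -> 4
```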

One of the benefits of big.LITTLE is that it adapts to users’ needs. When a big.LITTLE-powered phone is on standby, for example, it can call on the smaller, power-efficient CPUs to keep things ticking over. But when it needs to boot up an app or a game, it can spin up the beefier processors. Dynamiq will improve on this technique, says ARM, and make more flexible and powerful chipsets available at a lower price point. “What Dynamiq enables is putting big CPUs into mid-range devices,” says Ronco. “This has a real, clear benefit in terms of user experience.”
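The adaptive behavior described above can be sketched as a toy scheduler: light background work stays on the efficient cores, and heavy work moves to the fast ones. This is an illustration only, assuming a single hypothetical utilization threshold; real operating systems use far more sophisticated energy-aware scheduling:

```python
BIG_THRESHOLD = 0.6  # hypothetical cutoff; real schedulers tune this dynamically


def pick_cluster(utilization: float) -> str:
    """Return which core cluster should run a task, given its recent
    CPU utilization (0.0 to 1.0)."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be between 0 and 1")
    return "big" if utilization >= BIG_THRESHOLD else "little"


# Standby-style background work stays on the little cores...
print(pick_cluster(0.05))  # -> little
# ...while launching a game spills onto the big cores.
print(pick_cluster(0.90))  # -> big
```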

Ronco says that ARM has already licensed the new architecture to a number of customers, and expects to see the first Dynamiq-powered devices hitting the market in early 2018.

But Dynamiq goes beyond additional flexibility: it will also let chip makers optimize their silicon for tasks like machine learning. Companies will have the option of building AI accelerators directly into chips, helping systems manage data and memory more efficiently. Such accelerators could let machine learning-powered software features (like Huawei’s latest OS, which studies which apps users open most and allocates processing power accordingly) run more efficiently.

By combining these accelerators with an expansive library of processor instructions (think of these as training manuals that let CPUs specialize in specific tasks like machine learning), ARM claims Dynamiq will deliver a 50-fold increase in “AI-related performance” over the next three to five years. That’s a vague claim that could be measured in a number of different ways, but it’s clear that ARM — like other chip companies — is serious about supporting AI in more devices.

Jim McGregor, an analyst with Tirias Research, says ARM’s announcement today is a big indicator of the company’s expanding ambitions. “ARM is very serious about being more than just an intellectual property provider, and intends to be a force throughout the [processor] ecosystem,” McGregor tells The Verge. “It’s essentially looking to expand into other market segments through building a flexible architecture.”

As part of its presentation of Dynamiq, ARM stressed that the number of chips its customers ship is going to increase dramatically in the next few years: from 50 billion chips shipped by ARM customers between 2013 and 2017 to 100 billion shipped between now and 2021. The majority of these will be simpler ARM chips (low-power Cortex-R and Cortex-M designs, like those used in Fitbits), but the company expects its new microarchitecture to spread quickly through the market. “I would have thought within a couple of years pretty much all smartphones will be using Dynamiq,” says Ronco.