Artificial intelligence, or AI, is really heating up these days. The technology has been around for decades, but lately it has become a focus for applications such as data center analytics, autonomous vehicles and augmented reality. Why the rebirth? The trend appears to be driven by two forces: the availability of data to train these systems, and new technology that dramatically speeds up the training process. Let’s take a look at both of these trends.

Regarding the data: this is really the currency of AI. Without massive amounts of known results, inference and machine learning aren’t possible. Thanks to the huge global footprint and ubiquitous nature of a few key players, data stores are being built every day. Google has amassed a huge amount of empirical data on autonomous vehicle behavior. So have Tesla, Detroit and every other car manufacturer, for that matter. Audi appears to be pulling ahead with the planned introduction of Level 3 capability (“no feet, hands or eyes”) for its flagship A8.

Natural language processing is another frontier. Think about all the gadgets in your house that are listening to you and ready to interact (e.g., Amazon Alexa, Samsung TVs and the like). This is not really a plot to eavesdrop on you; it does, however, look like a carefully engineered program to learn how to interpret human speech. There are many more examples of massive data collection from the likes of Google, Facebook, Amazon and Microsoft.

Looking at AI from the technology perspective, Nvidia’s GPU technology took an early lead as an architecture that could be adapted from graphics acceleration to AI training. The field is now expanding beyond this training phase, and new architectures that can execute trained AI systems at much faster speeds are being developed. Companies like Nvidia, Qualcomm, Intel, IBM, Google and Facebook, among others, are jumping in.
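To see why graphics hardware adapted so naturally to training, it helps to look at what a training step actually computes. The sketch below is a generic gradient-descent step on a toy linear model (an illustration, not any vendor’s code): the work is dominated by large matrix multiplies, exactly the massively parallel arithmetic GPUs were built for.

```python
# A generic gradient-descent step on a toy linear model -- an illustration,
# not any vendor's code. The point: training is dominated by large matrix
# multiplies, which GPUs parallelize across thousands of cores.
import numpy as np

rng = np.random.default_rng(0)

# Toy "training data": 256 samples, 512 features, 10 outputs.
X = rng.standard_normal((256, 512))
W_true = rng.standard_normal((512, 10))
Y = X @ W_true                        # known results to learn from

W = np.zeros((512, 10))               # weights the model must learn
lr = 0.05
loss_before = float(np.mean(Y ** 2))  # error with untrained weights

for _ in range(200):
    pred = X @ W                      # forward pass: one big matrix multiply
    grad = X.T @ (pred - Y) / len(X)  # backward pass: another matrix multiply
    W -= lr * grad                    # weight update

loss = float(np.mean((X @ W - Y) ** 2))
# loss ends up a small fraction of loss_before: the model has "learned"
```

Training repeats this multiply-heavy loop over enormous datasets; executing an already-trained model (inference) runs only the forward pass, which is why dedicated inference architectures can trade flexibility for raw speed.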

These devices aren’t really chips, but rather systems in a package. They typically contain a massive processing ASIC (or two) built in the latest semiconductor technology (think 16nm and below) along with massive amounts of ultra-high bandwidth memory (think HBM2 stacks) all integrated on some kind of interposer (think silicon). We know who needs these chips, but who is designing and building them?
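Some quick arithmetic shows why that memory has to sit on an interposer right next to the ASIC. The figures below are headline numbers from the JEDEC HBM2 specification (1,024 data pins per stack at up to 2 Gb/s per pin), not specs for any particular product:

```python
# Back-of-the-envelope HBM2 bandwidth, using headline numbers from the JEDEC
# HBM2 spec (1024-bit bus per stack, up to 2 Gb/s per pin) -- not figures
# for any specific product.
bus_width_bits = 1024                 # 8 channels x 128 bits each
pin_rate_gbps = 2.0                   # gigabits per second, per pin

per_stack_gbs = bus_width_bits * pin_rate_gbps / 8    # bytes/s -> GB/s per stack
four_stacks_gbs = 4 * per_stack_gbs                   # a typical 4-stack design

print(per_stack_gbs, four_stacks_gbs)  # 256.0 1024.0
```

Moving a quarter-terabyte per second per stack over a 1,024-bit bus is only practical across the short, dense wiring an interposer provides, which is why 2.5D integration is central to these designs.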

From a foundry perspective, this is the big leagues. TSMC, Samsung and GLOBALFOUNDRIES are the players. Not a long list; this is hard stuff. These are ASICs, so who is sourcing the design? You need to look at who is really good at 2.5D integration and who owns the critical enabling IP for these designs (think the HBM2 physical interface and high-speed SerDes). The HBM2 PHY and high-speed SerDes blocks implement the mission-critical communications between the various parts of these systems. Both present very demanding, analog-ish design challenges, so sourcing them from the ASIC vendor is a very good way to keep risk to a minimum.

The list of ASIC suppliers that possess all these pieces isn’t very long. Since this market will likely see explosive growth, that’s a good list to be on. There’s one ASIC vendor in particular that will be interesting to watch: eSilicon. Regarding the required technology trifecta, they’ve been doing 2.5D integration since 2011 and are regarded as a leader in this space (check). They’ve also introduced a silicon-proven HBM2 PHY, which SemiWiki covered in a recent post (check). But what about the SerDes? Up to now, eSilicon has been integrating third-party SerDes blocks. If you look closely, however, this may be changing. The company has made no formal announcements about ownership of SerDes technology, but you can find mention of a High Performance SerDes Development Center on their website. And they’re hiring layout engineers as well, which is a big tell.

Bottom line: I’d keep an eye on eSilicon. The short list of ASIC players for the AI market is about to get a little longer.