Google has designed its own computer chip for driving deep neural networks, an AI technology that is reinventing the way Internet services operate.

This morning at Google I/O, the centerpiece of the company's year, CEO Sundar Pichai said that Google has designed an ASIC, or application-specific integrated circuit, tailored to deep neural nets. These are networks of hardware and software that can learn specific tasks by analyzing vast amounts of data. Google uses neural nets to identify objects and faces in photos, recognize the commands you speak into Android phones, and translate text from one language to another. This technology is even transforming the Google search engine.

Big Brains

Google calls its chip the Tensor Processing Unit, or TPU, because it underpins TensorFlow, the software engine that drives its deep learning services.

This past fall, Google released TensorFlow under an open-source license, which means anyone outside the company can use and modify it. Google has not indicated it will share the designs for the TPU, but outsiders can make use of Google's machine learning hardware and software via various Google cloud services.

Google says it has been running TPUs for about a year, and that they were developed not long before that.

Google is just one of many companies incorporating deep learning into a wide range of Internet services. Facebook, Microsoft, and Twitter are also taking part in this AI-driven transformation. Typically, these Internet giants drive their neural nets with chips called graphics processing units, or GPUs, made by companies like Nvidia. But some, including Microsoft, are also exploring the use of chips called field programmable gate arrays, or FPGAs, which can be programmed for specific tasks.

A TPU board fits into the same slot as a hard drive on the massive hardware racks inside the data centers that power Google's online services, the company says, adding that its own chips provide "an order of magnitude better-optimized performance per watt for machine learning" than other hardware options.

"TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation," the company says in a blog post. "Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models and apply these models more quickly, so users get more intelligent results more rapidly."
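The tolerance for reduced precision that Google describes can be illustrated with a simple 8-bit quantization sketch. This is a generic technique common in machine learning inference, not Google's actual TPU scheme: weights are mapped from floating point onto small integers, which take far fewer transistors to multiply, yet the round trip stays close enough to the originals that model accuracy barely suffers.

```python
# Illustrative 8-bit linear quantization (a common inference technique,
# not a description of the TPU's internal number format).

def quantize(values, num_bits=8):
    """Map a list of floats onto signed integers of the given bit width."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) for v in values], scale

def dequantize(qvalues, scale):
    """Recover approximate floats from the quantized integers."""
    return [q * scale for q in qvalues]

weights = [0.91, -0.43, 0.07, -0.88]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# q      -> [127, -60, 10, -123]
# approx -> values within about 0.004 of the original weights
```

Each weight now fits in a single byte instead of four, and the arithmetic runs on cheap integer units; the small rounding error is the "reduced computational precision" the blog post refers to.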

After testing its first silicon, the company says, it had its chips running live applications inside its data centers in just over three weeks.

Building its own chips means Google is buying fewer processors from traditional chipmakers. That's especially bad news for Intel, whose processors power the vast majority of servers inside Google's data centers. The worry for Intel is that, having shown a willingness to build its own chips for deep learning, Google may expand the effort and design its own central processing units, the heart of any computer.