David Patterson, former professor at UC Berkeley

A year ago the University of California at Berkeley hosted a retirement celebration for David Patterson, who was hanging it up after a 40-year academic career in computer architecture. Patterson capped the event last May with a personal 16-minute history, chronicling his days as a wrestler in high school and college and a math major at UCLA, followed by a job at Hughes Aircraft and four decades at Berkeley. From writing two books with Stanford University's John Hennessy to chairing the Computing Research Association, Patterson told the audience that a key to his success was doing "one big thing at a time."

His next big thing could be enormous. Rather than hitting the beach after retirement, Patterson joined Google in July to work on an ambitious new chip that's designed to run at least 10 times faster than today's processors and is sophisticated enough to handle the intensive computations required for artificial intelligence. It's called the Tensor Processing Unit (TPU), and Patterson has emerged as one of its principal evangelists.

He spoke to about 100 students and faculty members at the Berkeley campus on Wednesday, a few days shy of the anniversary of his retirement celebration. "Four years ago they had this worry and it went to the top of the corporation," said Patterson, 69, while sporting a T-shirt for Google Brain, the company's research group. The fear was that if every Android user had three minutes of conversation translated a day using Google's machine learning technology, "we'd have to double our data centers," he said.
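The capacity worry Patterson describes is a back-of-envelope calculation. A minimal sketch of that arithmetic is below; every number except the "three minutes per user per day" from the anecdote is an illustrative assumption, not a figure from Google.

```python
# Back-of-envelope sketch of the data-center capacity worry.
# All constants except MINUTES_PER_USER_PER_DAY are assumptions for illustration.

ANDROID_USERS = 1e9              # assumed number of active Android users
MINUTES_PER_USER_PER_DAY = 3     # from the anecdote: 3 minutes of translated speech
OPS_PER_MINUTE_OF_SPEECH = 1e12  # assumed neural-net operations to process 1 minute

# Total extra compute the feature would demand each day.
extra_ops_per_day = ANDROID_USERS * MINUTES_PER_USER_PER_DAY * OPS_PER_MINUTE_OF_SPEECH

# Assumed sustained throughput of one conventional server: 1e12 ops/sec.
SERVER_OPS_PER_SEC = 1e12
server_ops_per_day = SERVER_OPS_PER_SEC * 86_400  # seconds in a day

extra_servers_needed = extra_ops_per_day / server_ops_per_day
print(f"Extra servers needed: {extra_servers_needed:,.0f}")
```

Under these assumed numbers the new workload alone would require tens of thousands of additional servers, which is the kind of estimate that makes "double our data centers" plausible and motivates a chip that does the same work far more efficiently.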

Google’s Tensor Processing Unit, or TPU. Source: Google

Google parent Alphabet already spends $10 billion a year on capital expenses, largely tied to data center costs. And now it's addressing what it calls a "renaissance in machine learning." Deep neural networks, software loosely modeled on the brain that learns and gets smarter as data sets get bigger and more complicated, require big breakthroughs in hardware efficiency.

Patterson, who gave the same talk at Stanford on Thursday, was among the lead authors of a report from Google last month on the TPU's performance. The report concluded that the TPU runs 15 to 30 times faster than contemporary processors from Intel and Nvidia while delivering 30 to 80 times better performance per watt. The paper, written by 75 engineers, will be presented next month at the International Symposium on Computer Architecture in Toronto.

The report was Patterson's debut project at Google. Once a week he treks down to the Mountain View headquarters, and twice a week he works at the company's office in San Francisco. He reports to Jeff Dean, an 18-year Google veteran and head of Google Brain. It's not Patterson's first Google gig — he worked there while on academic sabbatical from 2013 to 2014. This time, he joined the TPU project as a distinguished engineer, a year after the chips were first put to use in Google's data centers and just two months after his retirement party.

Not really a retirement

Patterson has never been one to sit idle. In 2013, while still teaching, he competed in a powerlifting competition and set a new California state record for his age group. "Now that I think back, there was no evidence for the assumption that he was retiring except that it was called a retirement celebration," said Mark Hill, who earned his PhD under Patterson in 1987 and was one of the speakers at his party. Hill, who now chairs the computer sciences department at the University of Wisconsin-Madison, said that in computer architecture Patterson is "on the short list of the great ones of the last half of the 20th century." He called the computer architecture book that Patterson wrote with Hennessy the field's most influential textbook of the last 25 years.

Google says the TPU is being tested broadly across the company. It's used for every search query as well as for improving maps and navigation, and it was the technology that powered DeepMind's AlphaGo victory over Go legend Lee Sedol last year in Seoul.