It doesn't look like much. The brick office building sits next to a strip mall in Cupertino, California, about an hour south of San Francisco, and if you walk inside, you'll find a California state flag and a cardboard cutout of R2-D2 and plenty of Christmas decorations – even though we're well into April.

But there are big plans for this building. It's where Baidu – "the Google of China" – hopes to create the future.

In late January, word arrived that the Chinese search giant was setting up a research lab dedicated to "deep learning" – an emerging computer science field that seeks to mimic the human brain with hardware and software – and as it turns out, this lab includes an operation here in Silicon Valley, not far from Apple headquarters, in addition to a facility back in China. The company just hired its first researcher in Cupertino, with plans to bring in several more by the end of the year.

Baidu calls its lab The Institute of Deep Learning, or IDL. Much like Google and Apple and others, the company is exploring computer systems that can learn in much the same way people do. "We have a really big dream of using deep learning to simulate the functionality, the power, the intelligence of the human brain," says Kai Yu, who leads Baidu’s speech- and image-recognition search team and just recently made the trip to Cupertino to hire that first researcher. "We are making progress day by day."

If you want to compete with Google, it only makes sense to set up shop in Google's backyard. "In Silicon Valley, you have access to a huge talent pool of really, really top engineers and scientists, and Google is enjoying that kind of advantage," Yu says. Baidu first opened its Cupertino office about a year ago, bringing in various other employees before its big move into deep learning.

In the '90s and into the 2000s, deep learning research was at a low ebb. The artificial intelligence community moved toward systems that solved problems by crunching massive amounts of data, rather than trying to build "neural networks" that mimicked the subtler aspects of the human brain. Google's search engine was a prime example of a system that took a shortcut around deep learning, and the American search giant is using a similar approach with its self-driving cars. But now, deep learning research is coming back into favor, and Google is among those driving the field forward.

Google recently hired Geoffrey Hinton, the godfather of deep learning, after some prodding from Stanford's Andrew Ng, another power player in the field, and many other companies are exploring the same area. IBM has long worked toward a computer model of the human brain. Apple now uses deep learning techniques in the iPhone's Siri voice recognition system. And Google has worked similar concepts into its own voice recognition system as well as Google Street View.

Kai Yu, who leads Baidu's speech- and image-recognition search team. Photo: Alex Washburn / Wired

Still, Baidu's decision to build an entire research lab dedicated to deep learning "is a bit of a bold move," says New York University's Yann LeCun, a pioneer in the field, pointing out that the technology still has a long way to go. But the IDL, he says, could be a way for Baidu to attract top talent and let creative engineers explore all sorts of blue-sky innovations – stuff akin to Google Glass and other projects gestated at Google's secretive X Lab.

In fact, one of Yu's researchers is working on Baidu Eye, which many have called a Google Glass knock-off. But for now, Yu says, the lab's main priority is the exploration of deep learning algorithms. "We want to be focused," he says.

In November, Baidu released its first voice search service based on deep learning, and it claims the tool has reduced errors by about 30 percent. As Google and Apple have also seen, these improvements can change the way people interact with technology and how often they use it. When voice and image search services work like they're supposed to, we needn't fiddle with the teeny keyboards and small displays on mobile devices.

Today, web searches for products or services give you little more than a long list of links, and "then it's your job to read through all of those webpages to figure out what's the meaning," Yu says. But he wants something that works very differently.

“We need to fundamentally change the architecture of the whole system," he explains. That means building algorithms that can identify images and understand natural language, and then parse the relationships between all the stuff on the web and find exactly what you're looking for. In other words, Baidu wants algorithms that work like people. Only faster.