Compared with humans, computers are remarkably poor at learning and applying new concepts. But scientists say they have developed a method to teach computers to learn in a more human-like way.

That could lead to computers that are much better at speech recognition — especially recognizing uncommon words — or classifying objects and behaviour for businesses or the military.

The U.S. and Canadian researchers have developed a computer program that teaches a computer to learn to recognize handwritten characters such as letters of the alphabet after seeing just one example of each.

It has always been very difficult to build machines that required as little data as humans. – Ruslan Salakhutdinov, University of Toronto

That's something humans, even children, can easily do.

But computers "typically require hundreds or thousands of training examples," said Ruslan Salakhutdinov, assistant professor of computer science at the University of Toronto, who co-authored the new research.

"It has always been very difficult to build machines that required as little data as humans, especially when it comes down to learning a new concept that goes beyond simple recognition or classification tasks," he said at a news teleconference organized by Science, where the new research is published today.

"I believe that our computational model … takes a first step toward this one-shot learning ability."

The human-like learning wasn't limited to recognizing characters, either. The computer could also figure out how to write them using a series of pen strokes (created on a screen). Afterward, humans couldn't tell the difference between the computer's handwriting and human handwriting, the study showed.

Can you tell the difference between humans and machines? Humans and machines were given an image of a novel character (top) and asked to copy it. The nine-character grids in each pair that were generated by a machine are (by row) B, A; A, B; A, B. (Brenden Lake)

Salakhutdinov said the "human-level performance" on creative tasks like that — which computers are typically extremely bad at — was the most exciting aspect of the new study.

Most computer algorithms used in tasks like image recognition rely on a technique called deep machine learning or deep learning. The computers look at tens of thousands of examples, and try to find a pattern in the pixels that they can assign to a concept like a bus or a dog. Such algorithms typically can't use their knowledge to produce a completely new image of a bus or a dog.

In the new study, the researchers decided to take a completely different approach.

Inspired by studies on humans

They noticed that people asked to copy an unfamiliar character typically all wrote it the same way, suggesting that people perceive the character as a sequence of pen strokes, said Brenden Lake, a Moore-Sloan data science fellow at New York University and the paper's lead author.

We think in some form, this corresponds to what the human mind does. – Joshua Tenenbaum, MIT

They decided to program a computer to learn new characters by generating a program to interpret and write that character as a series of pen strokes. The program would look at different possible ways to write the character — where to start, how many strokes to use, when to lift the pen — and decide which ones were more likely, based on its past experience and the way people write them. When asked to write the character, it did so a little differently each time, based on probabilities.
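The idea of storing a character as a stroke sequence and then re-drawing it a little differently each time can be illustrated with a toy sketch. This is not the authors' actual model (which infers full probabilistic programs over strokes); it is a minimal, hypothetical illustration in which "learning" from one example means keeping its strokes, and "writing" means replaying them with small random perturbations:

```python
import random

def learn_character(strokes):
    """One-shot 'learning' (toy version): keep the stroke sequence
    from a single example, where each stroke is a list of (x, y) points."""
    return [list(stroke) for stroke in strokes]

def write_character(model, jitter=0.05, rng=None):
    """Re-draw the character, perturbing each pen point probabilistically,
    so every rendering comes out slightly different."""
    rng = rng or random.Random()
    return [
        [(x + rng.gauss(0, jitter), y + rng.gauss(0, jitter))
         for (x, y) in stroke]
        for stroke in model
    ]

# A hypothetical two-stroke character (roughly a "T"): one horizontal
# bar and one vertical bar, in normalized pen coordinates.
example = [
    [(0.0, 1.0), (1.0, 1.0)],   # horizontal stroke
    [(0.5, 1.0), (0.5, 0.0)],   # vertical stroke
]

model = learn_character(example)
copy1 = write_character(model, rng=random.Random(0))
copy2 = write_character(model, rng=random.Random(1))
# Both copies share the same stroke structure but differ in detail,
# echoing how the researchers' program writes a character a little
# differently each time.
```

The real system goes much further, reasoning over where to start, how many strokes to use and when to lift the pen, and weighing those choices by how people actually write; the sketch only captures the "same structure, varied execution" flavour of that approach.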

"We think in some form, this corresponds to what the human mind does," said Joshua Tenenbaum, a professor at the Massachusetts Institute of Technology's Center for Brains, Minds and Machines, who co-authored the paper.

The researchers think a similar approach could be used to teach computers speech recognition. Right now, computers rely on finding patterns in huge databases of people saying common words.

"But if you want a system that can learn new words very quickly for the first time that it's never heard before, we think you'd be best off using the approach we've been developing," Tenenbaum said.

He also acknowledged that the research project was supported in part by funding from the U.S. military, which believes the technique could eventually be used to recognize objects such as drones and classify their behaviour.

Slow so far

The researchers emphasized, however, that the research is at an early stage, and they haven't yet figured out how to extend the approach beyond written characters to other kinds of concepts, such as gestures, objects or spoken words.

They also acknowledged that computer systems have been optimized for current methods of machine learning, which are fast and effective when there's lots of data available for a computer to learn from.

With the new technique, a laptop computer can take several minutes to learn a single character, Lake said.

Tenenbaum expects that the technique can be optimized to work more quickly.

On the other hand, he doesn't think it will replace the current deep learning technique, but will more likely be used in hybrid systems that switch between the techniques depending on how much data is available.