Researchers say they’ve developed an algorithm that can teach a new concept to a computer using just one example, rather than the thousands of examples that are traditionally required for machine learning.

The algorithm takes advantage of a probabilistic approach the researchers call “Bayesian Program Learning,” or BPL. Essentially, the computer generates its own additional examples, and then determines which ones fit the pattern best.
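That generate-and-score loop can be illustrated with a toy sketch (this is not the authors' BPL code; the point-set representation, jitter model, and distance score are all simplifications invented for illustration): given a single example, produce many candidate reproductions and keep the ones that fit the observed pattern best.

```python
import random

# Toy sketch (not the authors' BPL implementation): a "concept" here is
# just a list of 2-D points. Given ONE example, we generate jittered
# candidate reproductions and keep those whose squared distance to the
# example is smallest -- a stand-in for scoring self-generated examples
# against the observed pattern.

def jitter(points, scale, rng):
    """Generate one candidate by perturbing each point of the example."""
    return [(x + rng.uniform(-scale, scale),
             y + rng.uniform(-scale, scale)) for x, y in points]

def fit(candidate, example):
    """Lower is better: total squared distance, point by point."""
    return sum((cx - ex) ** 2 + (cy - ey) ** 2
               for (cx, cy), (ex, ey) in zip(candidate, example))

def best_candidates(example, n_candidates=200, keep=5, seed=0):
    rng = random.Random(seed)
    candidates = [jitter(example, 0.3, rng) for _ in range(n_candidates)]
    return sorted(candidates, key=lambda c: fit(c, example))[:keep]

example = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]  # one observed "character"
top = best_candidates(example)
```

Real BPL scores full generative programs probabilistically rather than ranking point clouds, but the shape of the computation is the same: propose many explanations, keep the ones that best account for the single example.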

The researchers behind BPL say they’re trying to reproduce the way humans catch on to a new task after seeing it done once – whether it’s a child recognizing a horse, or a mechanic replacing a head gasket.

“The gap between machine learning and human learning capacities remains vast,” said MIT’s Joshua Tenenbaum, one of the authors of a research paper published today in the journal Science. “We want to close that gap, and that’s the long-term goal.”

Tenenbaum and two colleagues – New York University’s Brenden Lake and the University of Toronto’s Ruslan Salakhutdinov – tested the algorithm by setting it to work on a database of 1,623 handwritten characters drawn from 50 writing systems, including Sanskrit and Tibetan.

The software broke each single example of a character down into sets of simpler strokes that could create the character, and then zeroed in on the sets that came closest to producing the right look. The BPL algorithm was also asked to come up with completely new characters, written in the same style as the examples.
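A minimal sketch of that stroke-decomposition search (again, illustrative only: the three-primitive library, the grid coordinates, and the brute-force search are hypothetical stand-ins for BPL's richer stroke model): treat a character as a list of points, and look for the ordered set of primitive strokes whose composition lands closest to it.

```python
from itertools import permutations

# Hypothetical sketch: a character is a list of 2-D points, and a stroke
# "program" is an ordered choice of primitives from a tiny library. We
# brute-force every ordered pair of primitives and keep the composition
# that comes closest to reproducing the target character.

PRIMITIVES = {
    "down":  [(0, 0), (0, -1)],
    "right": [(0, 0), (1, 0)],
    "diag":  [(0, 0), (1, -1)],
}

def compose(names):
    """Chain strokes: each primitive starts where the last one ended."""
    points, origin = [], (0, 0)
    for name in names:
        ox, oy = origin
        points += [(ox + x, oy + y) for x, y in PRIMITIVES[name]]
        origin = points[-1]
    return points

def score(names, target):
    """Lower is better; mismatched lengths are ruled out entirely."""
    pts = compose(names)
    if len(pts) != len(target):
        return float("inf")
    return sum((px - tx) ** 2 + (py - ty) ** 2
               for (px, py), (tx, ty) in zip(pts, target))

def best_program(target, n_strokes=2):
    return min(permutations(PRIMITIVES, n_strokes),
               key=lambda names: score(names, target))

# An "L" shape: a downward stroke followed by a rightward stroke.
target = [(0, 0), (0, -1), (0, -1), (1, -1)]
```

Once such a stroke program is recovered, generating a new character "in the same style" amounts to re-running the program with fresh jitter, which is the flavor of BPL's creative task.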

To measure how well the computer did, the researchers set up what they called “visual Turing tests.” They laid out characters drawn by an assortment of humans alongside an equal number of characters drawn by the computer, and then asked human judges to pick which was which.

During each round of testing, no more than 25 percent of the judges performed significantly better than chance when it came to correctly identifying the human-written vs. machine-written characters.
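The “significantly better than chance” criterion boils down to a one-sided binomial test per judge. Here is a sketch of that scoring (the judge counts, panel size, and 0.05 threshold below are made-up illustrations, not the paper's data):

```python
from math import comb

# Sketch of the scoring behind a "visual Turing test": each judge labels
# n characters as human- or machine-drawn. A judge "beats chance" if the
# one-sided binomial p-value of their correct count falls below alpha.

def p_value(correct, n):
    """P(X >= correct) for X ~ Binomial(n, 0.5), i.e. pure guessing."""
    return sum(comb(n, k) for k in range(correct, n + 1)) / 2 ** n

def fraction_above_chance(judge_scores, n, alpha=0.05):
    """Share of judges whose accuracy is significantly above chance."""
    above = [c for c in judge_scores if p_value(c, n) < alpha]
    return len(above) / len(judge_scores)

# Hypothetical panel: 8 judges, 30 characters each.
scores = [16, 14, 17, 22, 15, 13, 18, 16]
```

Under this made-up panel only the judge with 22 of 30 correct clears the threshold, so the fraction comes out at 0.125, the kind of figure the researchers' “no more than 25 percent” result describes.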

The researchers concluded that the BPL approach “can perform one-shot learning in classification tasks at human-level accuracy and fool most judges in visual Turing tests of its more creative abilities.” But they also acknowledged the limitations of their experiment: Classifying the characters was a relatively simple task, and yet it sometimes took the computer several minutes to run the algorithm, Lake said.

Once the algorithm is refined, it could be built into the speech recognition systems for next-generation smartphones, Tenenbaum told GeekWire. “If you want a system that can learn new words for the first time very quickly … we think you will be best off using the kind of approach we have been developing here,” he said.