The new approach, known as Bayesian Program Learning, or B.P.L., differs from the current generation of machine learning technologies, known as deep neural networks.

Neural networks can be trained to recognize human speech, detect objects in images or identify kinds of behavior by being exposed to large sets of examples.

Although such networks are modeled after the behavior of biological neurons, they do not yet learn the way humans do — acquiring new concepts quickly. By contrast, the new software program described in the Science article is able to learn to recognize handwritten characters after “seeing” only a few or even a single example.

The researchers compared the capabilities of their Bayesian approach and other programming models on five separate learning tasks involving a research data set known as Omniglot, which includes 1,623 handwritten characters drawn from 50 of the world's alphabets. For each character, both the image and the pen strokes used to draw it were recorded.
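The one-shot classification task used in such comparisons can be illustrated with a toy sketch. This is not the Bayesian Program Learning model itself, which builds probabilistic programs from pen-stroke data; it is only a simple nearest-neighbor baseline over made-up binary images, with all names and data invented for illustration:

```python
# A minimal sketch of one-shot classification on Omniglot-style data.
# The characters below are invented 3x3 binary images; the real Omniglot
# set contains 1,623 handwritten characters from 50 alphabets, recorded
# as both images and pen-stroke trajectories.

def hamming(a, b):
    """Number of positions where two equal-length binary images differ."""
    return sum(x != y for x, y in zip(a, b))

def one_shot_classify(query, exemplars):
    """Given a single training example per class ("one shot"), return
    the label whose exemplar is nearest to the query image."""
    return min(exemplars, key=lambda label: hamming(query, exemplars[label]))

# Toy 3x3 binary "characters", flattened to 9-element tuples.
exemplars = {
    "char_A": (1, 0, 1,
               0, 1, 0,
               1, 0, 1),  # an X-like shape
    "char_B": (1, 1, 1,
               1, 0, 1,
               1, 1, 1),  # a box-like shape
}

# A noisy copy of char_A (one pixel flipped) is still matched correctly.
query = (1, 0, 1,
         0, 1, 0,
         1, 0, 0)
print(one_shot_classify(query, exemplars))  # char_A
```

A pixel-matching baseline like this generalizes poorly; the point of the Bayesian approach is to infer how a character is constructed from strokes, so that a single example supports much richer generalization.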

“With all the progress in machine learning, it’s amazing what you can do with lots of data and faster computers,” said Joshua B. Tenenbaum, a professor of cognitive science and computation at M.I.T. and one of the authors of the Science paper. “But when you look at children, it’s amazing what they can learn from very little data. Some comes from prior knowledge and some is built into our brain.”