Although he insists he doesn’t harbor ill will toward those better-known researchers, it grates on him that history hasn’t been kinder. “Certain researchers in my field have acted as if they invented something, although it was invented by other people whom they did not even mention,” Dr. Schmidhuber said.

But the disconnect between his early work and his lack of celebrity is not easy to explain, and it cannot be attributed solely to the fact that he lives thousands of miles from the tech industry’s center of gravity.

The dispute is about the roots of neural networks, which allow machines to learn by recognizing patterns that can then be applied generally. Applications include recognizing speech and language, visually identifying objects, navigating in self-driving cars and making robot hands grasp more deftly. As a scientific field, it dates to the 1940s. But only in recent years have researchers in this area made striking progress.

Neural networks are actually software. For a visual analogy, think of them as a giant Tinkertoy set — vast arrays of interconnected nodes that can be trained to do everything from translating languages to recognizing objects or human speech.

For decades, neural networks were laboratory curiosities, often met with skepticism. But in the 1990s, with faster and cheaper computers as well as new ideas about how to design neural nets, there was finally progress.

In 1997, Dr. Schmidhuber and Sepp Hochreiter published a paper on a technique that has proved crucial in laying groundwork for the rapid progress that has been made recently in vision and speech. The idea, known as Long Short-Term Memory, or LSTM, was not widely understood when it was introduced. It essentially offered a form of memory or context to neural networks.

Just as humans do not restart learning from scratch every second, this type of network, known as a recurrent neural network, adds loops, a form of memory, so that each new word or observation is interpreted in light of what has been previously observed. LSTM strikingly improved these networks, leading to huge jumps in accuracy.
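For readers curious about the mechanics, the idea can be sketched in a few lines of Python with NumPy. This is an illustrative toy, not the formulation from the 1997 paper: the variable names, gate ordering, and random weights are all assumptions made for clarity. The key point is that a cell state `c` is carried from step to step, and learned "gates" decide what to forget, what to store, and what to output.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W has shape (4*hidden, inputs+hidden); b has shape (4*hidden,)."""
    hidden = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0 * hidden:1 * hidden])   # forget gate: what to discard from memory
    i = sigmoid(z[1 * hidden:2 * hidden])   # input gate: what new information to store
    o = sigmoid(z[2 * hidden:3 * hidden])   # output gate: what to reveal this step
    g = np.tanh(z[3 * hidden:4 * hidden])   # candidate update to the cell state
    c = f * c_prev + i * g                  # cell state carries long-term context
    h = o * np.tanh(c)                      # hidden state is the short-term output
    return h, c

# Feed a toy sequence through the cell, carrying state between steps,
# so each observation is processed in light of what came before.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h = np.zeros(n_hid)
c = np.zeros(n_hid)
for t in range(5):
    x = rng.normal(size=n_in)
    h, c = lstm_step(x, h, c, W, b)
```

Because the cell state is updated additively rather than overwritten, information can persist across many steps, which is what lets these networks keep context over long sequences.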