Google is investing heavily in cloud-computing services — services that help other businesses build and run software — which it expects to become one of its primary economic engines in the years to come. And after snapping up such a large portion of the world’s top A.I. researchers, it has a means of jump-starting this engine.

Neural networks are rapidly accelerating the development of A.I. Rather than building an image-recognition service or a language translation app by hand, one line of code at a time, engineers can much more quickly build an algorithm that learns tasks on its own.
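To make the contrast concrete, here is a minimal sketch of learning a rule from examples rather than writing it by hand. The task — classifying points as above or below a line — is a toy stand-in for image recognition or translation, and the perceptron used here is far simpler than the neural networks the article describes.

```python
# Instead of hand-coding the rule "label is +1 when y > x", we give a
# tiny model labeled examples and let it find the rule on its own.
# Classic perceptron learning: adjust weights only when a guess is wrong.

examples = [((2, 3), 1), ((4, 1), -1), ((1, 4), 1),
            ((5, 2), -1), ((0, 2), 1), ((3, 0), -1)]

w1, w2, bias = 0.0, 0.0, 0.0
for _ in range(20):                        # repeated passes over the data
    for (x, y), label in examples:
        prediction = 1 if w1 * x + w2 * y + bias > 0 else -1
        if prediction != label:            # learn only from mistakes
            w1 += label * x
            w2 += label * y
            bias += label

correct = sum(1 for (x, y), label in examples
              if (1 if w1 * x + w2 * y + bias > 0 else -1) == label)
print(f"learned to classify {correct}/{len(examples)} examples")
# → learned to classify 6/6 examples
```

No line of this program states the rule being learned; it emerges from the data, which is the shift the passage above describes.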

By analyzing the sounds in a vast collection of old technical support calls, for instance, a machine-learning algorithm can learn to recognize spoken words.

But building a neural network is not like building a website or some run-of-the-mill smartphone app. It requires significant math skills, extreme trial and error, and a fair amount of intuition. Jean-François Gagné, the chief executive of an independent machine-learning lab called Element AI, refers to the process as “a new kind of computer programming.”

In building a neural network, researchers run dozens or even hundreds of experiments across a vast network of machines, testing how well an algorithm can learn a task like recognizing an image or translating from one language to another. Then they adjust particular parts of the algorithm over and over again until they settle on something that works. Some call it a “dark art,” in part because researchers find it difficult to explain why they make particular adjustments.
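The trial-and-error loop described above can be sketched in miniature: run many experiments with different settings, score each one, and keep the best. The toy "experiment" here fits a line to synthetic data by gradient descent; the hyperparameters and the task are illustrative assumptions, not a description of Google's internal tooling.

```python
# Random search over hyperparameters: dozens of experiments in miniature.
import random

def run_experiment(learning_rate, steps):
    """One 'experiment': fit y = 2x by gradient descent, return final error."""
    data = [(x, 2.0 * x) for x in range(1, 6)]
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= learning_rate * grad
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

random.seed(0)
trials = []
for _ in range(20):                       # each trial is one full training run
    lr = 10 ** random.uniform(-4, -1)     # sample a learning rate
    steps = random.choice([10, 50, 100])  # and a training budget
    trials.append((run_experiment(lr, steps), lr, steps))

best_error, best_lr, best_steps = min(trials)
print(f"best: lr={best_lr:.4g}, steps={best_steps}, error={best_error:.3g}")
```

Even in this toy version, the researcher's role is choosing what to vary and judging the results — the "dark art" the passage refers to.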

But with AutoML, Google is trying to automate this process. It is building algorithms that analyze the development of other algorithms, learning which methods are successful and which are not. Eventually, they learn to build more effective machine-learning systems on their own. Google said AutoML could now build algorithms that, in some cases, identified objects in photos more accurately than services built solely by human experts.
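The core idea — a search algorithm that proposes model designs, observes how well each performs, and uses that feedback to propose better ones — can be sketched with a simple evolutionary search. Google's actual AutoML work searches over real neural architectures with learned controllers; the stand-in scoring function and mutation scheme below are purely illustrative assumptions.

```python
# A toy "algorithm that designs algorithms": evolve model designs
# (here, lists of layer widths) toward whatever scores best.
import random

def evaluate(architecture):
    """Stand-in for training a network and measuring its accuracy.

    Pretends the ideal design is 3 layers of width 64 and scores
    candidates by how far they are from that optimum (0 is best).
    """
    target = [64, 64, 64]
    penalty = abs(len(architecture) - len(target)) * 10
    penalty += sum(abs(w - t) for w, t in zip(architecture, target))
    return -penalty

def mutate(architecture):
    """Randomly tweak one design choice, as a human tuner might."""
    child = list(architecture)
    if random.random() < 0.2 and len(child) < 6:
        child.append(random.choice([16, 32, 64, 128]))
    elif random.random() < 0.2 and len(child) > 1:
        child.pop()
    else:
        i = random.randrange(len(child))
        child[i] = random.choice([16, 32, 64, 128])
    return child

random.seed(0)
population = [[random.choice([16, 32, 64, 128])] for _ in range(10)]
for generation in range(50):
    survivors = sorted(population, key=evaluate, reverse=True)[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

best = max(population, key=evaluate)
print("best design found:", best, "score:", evaluate(best))
```

The search never sees the scoring function's definition; it only observes which designs do better, the same feedback loop the passage attributes to AutoML.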