
Last week, news broke that famous UK-based artificial intelligence research lab DeepMind was stacking up huge losses for its parent company Alphabet Inc. According to documents filed with the UK’s Companies House registry, DeepMind incurred $570 million in losses in 2018, up from $341 million in 2017.

DeepMind is the AI outfit behind some of the most remarkable feats of recent years, including the AI that beat the human champion at Go and a deep learning model that beat human champions at StarCraft II. Alphabet, which also owns tech giant Google, acquired DeepMind for $650 million in 2014. Since then, it has been pouring money into the AI research lab without significant returns. DeepMind has £1.04 billion in debt due this year, including an £883 million loan from Alphabet.

DeepMind’s huge costs bring to light some of the most serious challenges the AI industry is grappling with. Here are some of the key takeaways.

AI talent scarcity is concentrating research in a powerful few

According to the released information, DeepMind paid $483 million to approximately 700 employees, an average of roughly $690,000 per employee. Of course, the pay is not evenly distributed, and some of DeepMind’s AI engineers earn seven-figure salaries.
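That average follows directly from the reported totals; here is the back-of-the-envelope calculation, using the article’s rounded figures:

```python
# Back-of-the-envelope check, using the rounded figures from the filing.
total_staff_costs = 483_000_000  # USD, 2018
headcount = 700                  # approximate number of employees

average_pay = total_staff_costs / headcount
print(f"${average_pay:,.0f} per employee")  # $690,000 per employee
```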

Currently, the AI talent that can lead the kind of innovative projects research labs like DeepMind work on is very scarce. This has created a race between tech giants to offer bigger salaries to AI engineers in hopes of attracting them to their research teams. Paying more than $1 million to AI researchers has become common in large tech companies like Google and well-funded AI research labs such as OpenAI.

The steep cost of hiring AI researchers is problematic in several ways. The AI arms race between the big tech companies is making it harder for smaller companies and organizations to contribute their share to AI research. After all, not every company can afford to pay its AI researchers seven-figure salaries.

But perhaps the more damaging effect is the brain drain in academic AI. The growing interest and deep pockets of large tech companies are attracting AI talent toward commercial entities. Universities are finding it harder and harder to hold on to their AI researchers because they can’t match the lucrative incentives Big Tech offers.

Some AI researchers still prefer to spend their time on lower-paid academic projects, but their numbers are shrinking.

With AI talent concentrated in a few powerful organizations, AI research and innovation can become focused on serving the interests of those companies rather than the public good. In some cases, commercial and public interests are aligned, but that is not the rule. The disastrous state of social media and addictive tech shows what happens when tech companies decide to give priority to their own bottom line. The impact of Big Tech monopolizing the AI industry could be even more severe.

In my experience examining dozens of commercial and academic AI projects and speaking to their engineers and executives, there needs to be a balance between the two.

Academic projects provide infrastructural, open-source, general-purpose AI tools that are publicly accessible, can solve the problems of all sorts of organizations, and serve long-term goals. They solve fundamental problems but are not ready to be used out of the box: they usually need to be integrated into other products and software, and require technical expertise to be fine-tuned for specific purposes.

Commercial AI projects, on the other hand, provide end-to-end, ready-to-use solutions that organizations and individuals can purchase and immediately employ to solve problems. They’re easy to use and accessible to people and organizations that don’t have AI expertise. But often, they’re not open to modification and are hidden behind the walled garden of the commercial entity that develops them. The developers usually don’t share details of how the AI technology works, treating it as intellectual property and trade secrets. Some entities take ownership of the data you generate when you use their AI system, and the service comes at a hefty cost (they do have to pay those expensive AI researchers, after all).

Usually running on government grants, academic AI research is not constrained by return on investment and can pursue long-term projects without worrying about revenue. Commercial AI labs, by contrast, are constantly under pressure from investors who want to see a return on their investment, which is why such labs aim for goals that can be achieved in the short term.

With big tech companies recruiting more and more AI researchers into their ranks, there’s concern that there will be too much commercial AI and too little academic work.

Fortunately, there are some initiatives that might help bridge this gap, such as the MIT-IBM Watson AI Lab, which brings together the resources and talent of commercial and academic AI to develop projects that can benefit everyone, such as a technique that makes AI models more robust against adversarial attacks and another that helps understand the inner workings of neural networks.

Other developments that might help alleviate the gap created by the cost of AI talent are the many online education programs such as Fast.ai, a free course that teaches deep learning to anyone who has basic coding skills and decent understanding of high-school math. These courses will help expand the pool of AI talent and make it more affordable and accessible to organizations that don’t have the resources and money of Big Tech.

Operating costs pose limits on AI research

Another important factor DeepMind’s losses highlight is operating and infrastructure costs. The general belief is that because of the nature of artificial neural networks, the current focus of the AI industry, developing deep learning models requires vast amounts of data and compute resources.

As AI researcher Jeremy Howard explains, however, this does not necessarily hold true. There are plenty of scenarios and use cases where you can develop deep learning models with minimal training data and by spending a few bucks to rent GPUs in the cloud.

There are also plenty of pretrained neural networks that can be fine-tuned for new purposes with minimal effort and resources through transfer learning.
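To make the pattern concrete, here is a toy sketch of transfer learning in plain Python: a “pretrained” feature extractor stays frozen, and only a small new head is trained for the new task. The extractor, task, and hyperparameters are invented for illustration; in practice you would reuse a real pretrained network such as an ImageNet classifier.

```python
import random

random.seed(1)

# Stand-in for a frozen pretrained network: maps raw input to features
# and is never updated while training on the new task.
def pretrained_features(x):
    return [x, x * x, 1.0]  # last entry acts as a bias feature

# New task: decide whether |x| > 1. This is linear in the [x, x^2] feature
# space, so a tiny head on top of the frozen extractor suffices.
xs = [random.uniform(-2, 2) for _ in range(300)]
xs = [x for x in xs if abs(abs(x) - 1) > 0.2]  # keep a margin so training converges fast
data = [(x, 1 if abs(x) > 1 else 0) for x in xs]

# Train only the new head: a perceptron over the frozen features.
w = [0.0, 0.0, 0.0]
for _ in range(50):
    for x, y in data:
        f = pretrained_features(x)
        pred = 1 if sum(wi * fi for wi, fi in zip(w, f)) > 0 else 0
        err = y - pred  # perceptron rule: update only on mistakes
        if err:
            w = [wi + 0.1 * err * fi for wi, fi in zip(w, f)]

accuracy = sum(
    (1 if sum(wi * fi for wi, fi in zip(w, pretrained_features(x))) > 0 else 0) == y
    for x, y in data
) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The key point is that only the three weights of the head are learned; the (here trivial) feature extractor carries knowledge from its original training for free.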

But many AI research projects that require reinforcement learning are still very resource intensive. Reinforcement learning is a training technique in which the AI model is given the basic rules and reward functions for a problem and is left on its own to explore the environment and find solutions. Reinforcement learning is used in domains such as robotics and for teaching AI bots to play games.
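The loop described above can be sketched with tabular Q-learning on a toy environment. This is purely illustrative (DeepMind’s game-playing agents use far more elaborate methods at vastly larger scale), but it shows the core trial-and-error dynamic: act, observe a reward, and update value estimates.

```python
import random

random.seed(0)

# Toy environment: a 1-D corridor of 5 states; the only reward sits at the
# right end. The agent must discover this through trial and error.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left or step right

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best next action.
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy is "always step right".
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

Even in this trivial corridor, the agent must wander for many steps before the reward signal propagates back through the value table, which hints at why reinforcement learning at StarCraft scale consumes so much compute.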

For instance, according to figures released by DeepMind, its StarCraft II-playing AI model consisted of 18 agents, each trained with 16 of Google’s v3 TPUs for 14 days. At current pricing rates ($8.00 per TPU-hour), the company spent about $774,000 training the 18 AI agents. Other reinforcement learning projects can incur similar costs.
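The estimate is straightforward to reproduce from those figures:

```python
# Reproducing the StarCraft II training-cost estimate from the figures above.
agents = 18
tpus_per_agent = 16
days = 14
price_per_tpu_hour = 8.00  # on-demand rate cited in the text

tpu_hours = agents * tpus_per_agent * days * 24
cost = tpu_hours * price_per_tpu_hour
print(f"{tpu_hours:,} TPU-hours at ${price_per_tpu_hour}/hour = ${cost:,.0f}")
# 96,768 TPU-hours -> $774,144, i.e. roughly $774,000
```

Note that this covers only the final training runs; failed experiments, hyperparameter searches, and evaluation would push the real bill higher.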

Overcoming this hurdle will probably be much harder than reducing the costs of AI talent. But there are already interesting efforts in the works. One possible solution is the development of hybrid AI systems that combine neural networks and rule-based programs. According to initial results, hybrid AI systems trained with reinforcement learning can achieve their goals with much less data and compute. These types of AI models might make it possible for more resource- and cash-constrained organizations to run their own research programs.

Whether any of these projects and efforts will help reduce the costs of AI remains to be seen. But DeepMind’s growing losses remind us of the current challenges of AI and the need to steer the industry in the right direction.