Everything comes with a price, and artificial intelligence is no exception. The last decade has witnessed AI breakthroughs in object recognition, game playing, machine translation and many other areas. But these improvements came at the cost of massive amounts of compute. For example, the 2017 deep learning model AlphaZero consumed 300,000 times more computational power during training than 2012's revolutionary AlexNet.

However, with global concerns growing regarding climate change and other environmental threats, recent mainstream media stories have pointed fingers at the massive carbon footprint left by the training of today's resource-hungry machine learning models. In response, researchers at the Seattle-based Allen Institute for Artificial Intelligence (AI2) have proposed a Green AI initiative designed to make future AI research more energy efficient.

The AI2 researchers define Green AI as research that improves results without ballooning computational costs (and even better if compute can be reduced). They encourage their peers in AI research to report the financial cost of developing, training, and running models, and want to use this “price tag” data to establish baselines and promote the development of more efficient AI methods.

The emerging Green concerns have provoked some introspection in a global AI research community that has thus far been driven almost exclusively by the pursuit of state-of-the-art (SOTA) results — not model training cost or efficiency.

The AI2 researchers classify models as either environmentally friendly “Green AI” or as the (dominant) “Red AI” research, which they contend is “essentially buying” SOTA results through the use of huge compute resources. The distinction is determined based on the formula Cost(R) ∝ E · D · H, where “the cost of an AI (R)esult grows linearly with the cost of processing a single (E)xample, the size of the training (D)ataset and the number of (H)yperparameter experiments.”
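The linear relationship in the formula can be sketched in a few lines of Python. The function name and the example figures below are hypothetical illustrations, not values from the paper; the point is simply that doubling any one factor doubles the estimated cost.

```python
def relative_cost(e, d, h):
    """Relative cost of an AI result per Cost(R) ∝ E · D · H:
    e -- cost of processing a single example
    d -- size of the training dataset
    h -- number of hyperparameter experiments
    """
    return e * d * h

# Doubling any single factor doubles the overall cost estimate:
base = relative_cost(1e9, 1_000_000, 10)      # hypothetical baseline run
more_tuning = relative_cost(1e9, 1_000_000, 20)  # twice the hyperparameter trials
assert more_tuning == 2 * base
```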

The researchers propose adding efficiency to accuracy as a model evaluation metric, and suggest rating efficiency by the number of floating point operations (FPO) a model uses to achieve a result. FPO has already been used to quantify energy footprints, as it provides an estimate of the total amount of work carried out by a computational process.
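As a rough illustration of how FPO can be counted, the sketch below tallies the floating point operations in a single dense (fully connected) layer's forward pass using the standard accounting (one multiply plus one add per weight, plus the bias adds). This is a generic back-of-the-envelope calculation, not the paper's own tooling, and the layer sizes are made up for the example.

```python
def fpo_dense_layer(batch, in_features, out_features):
    """Estimate floating point operations for one forward pass of a
    dense layer: y = xW + b, with x of shape (batch, in_features)."""
    # Each of the batch * out_features outputs requires in_features
    # multiplies and in_features additions (counted as 2 ops each).
    matmul_ops = 2 * batch * out_features * in_features
    # One addition per output element for the bias.
    bias_ops = batch * out_features
    return matmul_ops + bias_ops

# Hypothetical example: a batch of 32 inputs through a 784 -> 10 layer.
print(fpo_dense_layer(32, 784, 10))  # 502080 floating point operations
```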

The AI2 researchers are not calling for an end to Red AI research, which they acknowledge has yielded valuable contributions "by studying the upper-bound current techniques can achieve with maximal compute." Rather, they suggest that along with revving up their massive AI engines, the community should also pay attention to the greater efficiency of Green AI. They believe such an approach would also enable even undergraduates and independent researchers to conduct research on their laptops.

The paper Green AI is on arXiv.