OpenAI topped BERT with its own AI tool dubbed GPT-2, which mimicked the writing style of humans with high accuracy. It used an astonishing 1.5 billion parameters in its model. It doesn't stop there: MegatronLM, the latest and largest model from Nvidia, uses 8.3 billion parameters (chart above). The trend is clear.

To address the heavy resource utilization and climate toll of these models, researchers have been actively pursuing ways to shrink these AIs in size, reducing their resource-intensive nature while making them more efficient.

Two research papers released recently propose models that might accomplish this. The first, from researchers at Huawei's Noah's Ark Lab, is called TinyBERT (figure below). They presented a model that is one-seventh the size of the original BERT and roughly 10x faster, while retaining nearly the same language understanding as the original.

The second proposal came from Google researchers themselves, who produced a version of their AI predecessor that is 60 times smaller. However, this far tinier version had to sacrifice more language understanding capability than Huawei's did.

TinyBERT — Huawei Research Paper

Both papers rely on a common compression technique known as knowledge distillation to build smaller versions of full-scale AI models. A large "teacher" AI trains the shrunk "student" AI to produce the same outputs that the teacher would produce for a given set of inputs.
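To make the idea concrete, here is a minimal sketch of the distillation objective in plain Python with NumPy. The function names, logits, and temperature value are illustrative assumptions, not taken from either paper: the student is trained to minimize the divergence between its softened output distribution and the teacher's.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Softened probabilities: a higher temperature spreads probability
    # mass across classes, exposing the teacher's "dark knowledge".
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student
    # distributions; the student's weights are updated to minimize it.
    p = softmax(teacher_logits, temperature)  # teacher's soft labels
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Hypothetical logits for a 3-class prediction:
teacher = np.array([4.0, 1.0, 0.2])
matched = distillation_loss(teacher, teacher)               # student mimics teacher
off     = distillation_loss(teacher, np.array([1.0, 1.0, 1.0]))
print(matched, off)  # loss is 0 when the student matches the teacher exactly
```

In practice this soft-label loss is usually combined with the ordinary hard-label loss on the training data, but the core mechanism is exactly this: the student copies the teacher's output distribution rather than learning from scratch.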

These tiny AIs could eventually run on consumer devices like smartphones, powering natural language processing in digital assistants like Siri, Alexa, and Google Assistant. Because consumer data would no longer need to be sent to the cloud, on-device processing would improve both speed and privacy.

Just a quick word, before I let you go, about my favorite area of AI application: healthcare. Chip giant Intel and Brown University have started work on the DARPA-backed Intelligent Spine Interface project, which aims to use AI technology to restore movement and bladder control in patients paralyzed by severe spinal cord injuries. The two-year project will use open-source AI software like nGraph along with Intel AI accelerator hardware.

How excited or worried are you about AI’s advancements?