Known for its advanced telecommunication technology and cost-effective smartphones, Chinese tech giant Huawei is now aggressively expanding its artificial intelligence footprint. The company made a series of AI-related announcements today at the Huawei Connect 2018 Conference in Shanghai, introducing two AI chips and a machine learning framework. Huawei’s AI push is expected to intensify its battle with domestic rivals Alibaba, Tencent and Baidu in the AI market.

Blockbuster news had been expected after The Information revealed Huawei’s Project Da Vinci mandate “to bring AI to everything from telecom base stations to cloud data centers to smartphones to surveillance cameras.” A person familiar with the matter told Synced that Project Da Vinci, led by Eric Zhijun Xu, Huawei Deputy Chairman and President of the company’s chip affiliate HiSilicon, is a high-priority project receiving a great deal of internal attention.

In today’s keynote address Xu stressed the importance of an AI development strategy: “Like electricity and railways during the industrial revolution, artificial intelligence is the new general purpose technology of the 21st century. Huawei’s AI strategy is to invest in basic research and talent development, build a full-stack, all-scenario AI portfolio, and foster an open global ecosystem.”

Huawei Deputy Chairman and President of Huawei chip affiliate HiSilicon Eric Zhijun Xu

AI chips: Ascend 910 & Ascend 310

The Internet giants’ race to create custom AI chips has so far produced Google’s powerful TPUs and Microsoft’s FPGAs. Earlier this year, Alibaba and Baidu announced their respective development plans for AI chips Ali-NPU and Kunlun.

Huawei jumped on the bandwagon today with the unveiling of the Ascend 910 and Ascend 310, two AI chips covering cloud training and low-power inference respectively. Both are built on Huawei’s homegrown Da Vinci architecture, which features scalable memory, compute, and on-chip interconnect.

Billed as the single chip with the greatest computing density, the Ascend 910 delivers up to 256 teraFLOPS under FP16 and 512 teraOPS under INT8, with a maximum power consumption of 350W. In comparison, Nvidia’s most powerful GPU, the Tesla V100, delivers up to 125 teraFLOPS with a maximum power consumption of 300W, while Google’s TPU 2.0, with four ASICs, can reach 180 teraFLOPS.

Ascend 910

Huawei also announced a large-scale distributed training system, Ascend Cluster, which combines 1024 Ascend 910 chips to reach 256 petaFLOPS for deep learning. Both the chips and the cluster will be available in Q2 2019. Chinese media is reporting that Huawei is touting its new cloud computing chips to Microsoft Azure China, although Huawei has officially denied this.
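A quick back-of-the-envelope check on the cluster figure: the chip count and per-chip peak come from Huawei’s announcement, but the scaling-efficiency estimate below is our own arithmetic, not an official number.

```python
# Rough arithmetic on the Ascend Cluster figures quoted above.
# The chip count and per-chip peak are Huawei's announced numbers;
# the efficiency estimate is a back-of-the-envelope calculation.

chips = 1024
tflops_per_chip = 256                                  # Ascend 910 peak FP16

linear_peak_pflops = chips * tflops_per_chip / 1000    # ideal linear scaling
quoted_pflops = 256                                    # Huawei's quoted cluster figure

scaling_efficiency = quoted_pflops / linear_peak_pflops
print(f"linear peak: {linear_peak_pflops:.1f} PFLOPS")
print(f"quoted:      {quoted_pflops} PFLOPS "
      f"(~{scaling_efficiency:.0%} of linear peak)")
```

In other words, the quoted 256 petaFLOPS sits just below the 262-petaFLOPS ideal of perfectly linear scaling across 1,024 chips, which would be a strikingly small interconnect overhead if borne out in practice.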

The Ascend 310 is an efficient 12nm SoC (System on a Chip) designed for low-power computing, drawing 8W and delivering 8 teraFLOPS under FP16 and 16 teraOPS under INT8.
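Taking the spec-sheet numbers above at face value, peak FP16 throughput per watt works out as follows. This is a rough comparison only: these are theoretical peaks at maximum power draw, not measured efficiency under real workloads.

```python
# Peak FP16 throughput per watt, using the spec figures quoted above.
# Theoretical peaks at maximum power draw only; sustained performance
# under real workloads will differ.

specs = {
    "Ascend 910": (256, 350),   # (peak FP16 teraFLOPS, max watts)
    "Ascend 310": (8, 8),
    "Tesla V100": (125, 300),
}

for name, (tflops, watts) in specs.items():
    print(f"{name}: {tflops / watts:.2f} TFLOPS/W")
```

On paper the edge-oriented Ascend 310 comes out at a flat 1 TFLOPS per watt, with the 910 at roughly 0.73 and the V100 at about 0.42.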

Huawei executive Guo Ping said the company is pumping more than US$1 billion annually into data center R&D.

ML framework MindSpore

Huawei also rolled out a set of open-source AI development tools on its cloud service platform, designed to help developers and engineers simplify the AI workflow, from training machine learning models to deploying them on local devices. The tools will be available on Huawei’s AI service platform Cloud Enterprise Intelligence and on HiAI, its AI engine for smart devices.

Huawei’s new ML framework MindSpore provides device-edge-cloud training and inference based on a unified distributed architecture that spans machine learning, deep learning, and reinforcement learning. It supports models trained in other frameworks such as TensorFlow and PyTorch, and provides flexible APIs decoupled from the core system.
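MindSpore’s API had not been published at the time of the announcement, so the snippet below is only a toy illustration of the decoupling idea described above. The `Backend`, `CloudBackend`, and `DeviceBackend` names are hypothetical inventions for this sketch, not MindSpore code: the point is that a model is written once against an abstract interface, and the device, edge, or cloud execution target is swapped in behind it.

```python
# Toy illustration of an API decoupled from its execution backend,
# in the spirit of the device-edge-cloud design described above.
# "Backend", "CloudBackend", and "DeviceBackend" are hypothetical
# names for this sketch, not part of MindSpore.

from abc import ABC, abstractmethod


class Backend(ABC):
    @abstractmethod
    def run(self, model_fn, x):
        ...


class CloudBackend(Backend):
    """Stand-in for a large-scale cloud training/inference target."""
    def run(self, model_fn, x):
        return model_fn(x)      # a real backend would dispatch to a cluster


class DeviceBackend(Backend):
    """Stand-in for a low-power on-device target."""
    def run(self, model_fn, x):
        return model_fn(x)      # a real backend would use a compact runtime


def model(x):
    # Model logic is written once, with no reference to any backend.
    return 2 * x + 1


for backend in (CloudBackend(), DeviceBackend()):
    print(type(backend).__name__, backend.run(model, 3))
```

The same pattern is what makes cross-framework model import plausible: anything that can be lowered to the shared core can then target any of the backends.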

Also announced today at Huawei Connect was Compute Architecture for Neural Networks (CANN), an operator library for chipsets. A standout component of CANN is Tensor Engine, a highly automated operator development toolkit offering a DSL interface, automatic optimization, automatic code generation, and automatic tuning. CANN also incorporates TVM, an automated end-to-end optimizing compiler for deep learning. Huawei boasts that CANN can triple development efficiency.
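Tensor Engine’s internals were not detailed, but the tensor-expression style that it and TVM embody can be sketched in a few lines: the developer declares what an operator computes as an expression over index variables, and the framework, not the programmer, generates the loop nest. The `compute` helper below is a toy of our own making, not the CANN or TVM API.

```python
# Minimal sketch of a tensor-expression DSL in the style popularized
# by TVM: declare *what* an operator computes, and let the framework
# generate the loops. Illustrative only; not CANN or TVM code.

def compute(shape, expr):
    """Build an operator from a per-element expression over indices."""
    def op(*inputs):
        n, = shape                                  # 1-D case for brevity
        return [expr(inputs, i) for i in range(n)]  # framework-generated loop
    return op


# Declare a vector-add operator: C[i] = A[i] + B[i]
vector_add = compute((4,), lambda tensors, i: tensors[0][i] + tensors[1][i])

a = [1, 2, 3, 4]
b = [10, 20, 30, 40]
print(vector_add(a, b))          # [11, 22, 33, 44]
```

In a production compiler, the “auto optimization” and “auto tuning” steps would then transform the generated loops, applying tiling, vectorization, and hardware-specific scheduling without changing the declared math.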

CANN

ModelArts, meanwhile, is Huawei’s new machine learning platform-as-a-service, providing full-pipeline services, hierarchical APIs, and pre-integrated solutions.

A battle for the Chinese AI market

Huawei clearly wants to get a piece of China’s burgeoning AI market. A recent Tsinghua University report projects the Chinese AI market, which was worth US$3.55 billion in 2017 (up 67 percent from 2016), will grow by another 75 percent in 2018. Chinese society is quickly adapting to and embracing AI technologies, from facial recognition authentication for bank accounts, to smart speakers, home appliances and autonomous vehicles.

Chinese tech giants are ratcheting up their game, hoping to pull ahead of the competition in AI model and application development and deployment. Alibaba recently released new AI chips and updated its cloud-based enterprise and government AI-powered solutions. Tencent announced a new robotics research center and opened a new AI platform, AI.QQ.COM, aiming to build an ecosystem that unites the company’s diverse AI technical capabilities.

Huawei’s announcements emerge from the company’s strong focus on AI democratization, akin to efforts by US tech giants like Google and Microsoft. The company’s home-developed AI chips and open-source framework will help developers industry-wide create richer and more powerful AI applications. As China’s second-largest cloud vendor, Huawei is committed to attracting developers to its cloud platform by creating easy-to-use tools. The HiAI platform so far has a developer community of some 400,000.

Huawei’s comprehensive AI strategy marks a turning point for the company and the beginning of its AI transformation. With all leading global tech companies now betting heavily on AI, why not Huawei?