author: Eric Walz





MOUNTAIN VIEW, Calif. — DeepScale, a startup working on efficient deep learning perception software for use in mass-produced autonomous vehicles, announced a $15 million Series A funding round led by Point72 and next47.

Additional Series A funding was provided by existing investors Autotech Ventures and Trucks Venture Capital, two firms known for their automotive expertise.

About DeepScale

DeepScale is a venture-backed Silicon Valley startup developing artificial intelligence perception software for driver-assistance and autonomous driving, with a focus on implementing efficient deep neural networks on automotive-grade processors.

DeepScale's deep neural networks (DNNs) use data from various sensors to help vehicles of all automated driving levels perceive the world around them. The company's perception technology operates on raw sensor data rather than pre-processed object data, and uses an embedded processor to accelerate sensor fusion.

DeepScale's unique solution runs DNNs on small, low-cost, automotive-grade sensors and processors to improve the accuracy of perception systems, which interpret and classify sensor data in real time for automated vehicles. The company is working to bring driver-assistance and autonomous driving to mass-produced vehicles at several different price points.

As traffic fatalities continue to claim more than a million lives each year worldwide, the recent Series A funding will enable the expansion of DeepScale's engineering team and technology advancements to support the company's mission to help make self-driving vehicles and roads much safer.

"One of our core objectives is to drastically reduce the number of deaths and injuries on the road," said Forrest Iandola, co-founder and CEO of DeepScale. "The company's Series A funding will not only empower our engineering team to continue to make breakthroughs in automated driving safety, but will also help us attract the brightest talent in the industry to transform the future of transportation."

The Current Status of Computer Vision in Cars

In a blog post, DeepScale co-founder and CEO Forrest Iandola explained the current status of computer vision for self-driving cars. He wrote that computer vision in autonomous cars can be grouped into two general categories.

The first category consists of camera-only systems, or cameras combined with radar, which are already widely used in ADAS (Advanced Driver Assistance Systems). ADAS features include lane keeping assist, adaptive cruise control, and automatic emergency braking.

The second category comprises efforts by Google's Waymo and other major companies focused solely on autonomous driving. For these companies, the goal is to create highly autonomous vehicles without immediate regard to the cost of sensors and computing hardware.

One such company, Mobileye, has emerged as a dominant player in this space. Mobileye supplies ADAS systems to Ford, General Motors, and BMW.

Mobileye began working on computer vision technology nearly 20 years ago, when processors were far slower and computer vision was very limited in what it could do. This forced companies to develop custom chips to run computer vision in real time.

Today, Mobileye sells forward-facing cameras bundled with a custom processor and proprietary computer vision software. These cameras are generally known to cost under $100 in mass-production volumes.

Deep Neural Networks

These larger scale autonomous projects have predominantly turned to deep neural networks (DNNs) as their tool for computer vision. DNNs (and, more importantly, the accuracy that they provide) have been a key catalyst for rapid progress in the development of autonomous vehicles in recent years.

However, these hardware solutions are expensive and cost-prohibitive for many companies. Today's prototype autonomous vehicles often rely on DNNs running on as much as 2 kilowatts of GPU or tensor-processing hardware to interpret and fuse data from a vehicle's sensor suite, a setup that often costs tens of thousands of dollars. DeepScale's solution is a lower-cost DNN, so developers can use the same robust technology without the prohibitive price tag.

DeepScale co-founder and CEO Forrest Iandola earned a doctorate at UC Berkeley working on deep neural networks and computer vision systems. His advances in scalable training and efficient implementations of deep neural networks, made with his faculty advisor and now co-founder Kurt Keutzer, led to the founding of DeepScale.

"We've been following Forrest Iandola's research on efficient deep learning for a number of years," said Sri Chandrasekar, Director at Point72. "Forrest's inventions, such as a small DNN called SqueezeNet, have already been a game-changer for putting deep learning onto smartphones. When we heard that Forrest had started a company to put small DNNs into mass-produced cars, we jumped at the opportunity to get involved."

DeepScale & DNNs

Recent research on deep neural networks (DNNs) has focused primarily on achieving a high level of accuracy. However, for a given accuracy level, it is often possible to identify multiple smaller DNN architectures that still achieve the desired accuracy.

At equivalent accuracy, smaller DNN architectures offer two main advantages. First, smaller DNNs require less communication across servers during distributed training.

Training deep neural networks today is not restricted to a single machine; a significant amount of research has gone into efficient distributed training. Second, and more important for autonomous driving applications, smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car via over-the-air updates.
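To get a rough feel for the over-the-air-update advantage, the sketch below compares download times for a large uncompressed model against a heavily compressed one. The model sizes and link speed are illustrative assumptions, not figures from DeepScale:

```python
def transfer_seconds(model_mb: float, link_mbps: float) -> float:
    """Time to push a model of model_mb megabytes over a
    link_mbps megabit-per-second link (8 bits per byte)."""
    return model_mb * 8 / link_mbps

# Illustrative sizes: a ~240 MB uncompressed model vs. a ~0.5 MB
# compressed model, both pushed over an assumed 10 Mbit/s cellular link.
large_model = transfer_seconds(240, 10)   # 192.0 seconds
small_model = transfer_seconds(0.5, 10)   # 0.4 seconds
print(large_model, small_model)
```

At these assumed figures the smaller model downloads hundreds of times faster, which matters when a fleet of vehicles must be updated over cellular links.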

Modern neural network architectures trained on large data sets can achieve impressive performance across a wide variety of domains, from speech recognition to image recognition. However, training these models is computationally demanding, and network training can take an impractically long time on a single machine.

To provide all of these advantages, DeepScale developed a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, DeepScale is able to compress SqueezeNet to less than 0.5MB of space (510x smaller than AlexNet).

DeepScale continues to push the boundaries on accuracy and robustness of DNNs for computer vision. The company is working on ways to get these approaches to run on hardware that is inexpensive (closer to $10 than $10,000) and also low-powered (closer to 10 Watts than 2 kW).

Using deep neural networks, a self-driving car can learn to identify objects

Another challenge is how to allow these approaches to plug into a wide variety of automotive sensor and compute platforms, rather than constraining OEMs to a proprietary set of sensors and processors.

The key challenge at DeepScale is working to take the best from modern computer vision approaches coming out of the autonomous world and affordably bring them to mass-produced automobiles.

T.J. Rylander, Partner at next47, said, "DeepScale is bringing unique expertise and advancements in deep neural network design to the automotive industry. We're very excited by the potential of autonomous technology to transform transportation markets. The DeepScale team is accelerating commercialization of today's driver assistance systems and tomorrow's self-driving vehicles, with the opportunity to bring profound impact to other transportation verticals longer term."

DeepScale has a number of strategic partnerships with Tier 1 suppliers, OEMs and semiconductor suppliers to provide automated driving perception solutions, including Visteon and HELLA-Aglaia Mobile Vision GmbH, a major German automotive supplier.

"Machine learning solutions will be the key driver for autonomous driving," said Kay Talmi, Managing Director of HELLA-Aglaia Mobile Vision GmbH. "DeepScale's core know-how in efficient deep learning networks is a perfect fit for HELLA-Aglaia's automotive applications and target markets."

DeepScale's perception software will be available for prototyping in the second half of 2018.



