Next month in San Francisco, Uber will stand trial in federal court for allegedly cheating in the race to commercialize self-driving cars. Google parent Alphabet accuses Uber of stealing designs for sensors called lidars that give a vehicle a 3-D view of its surroundings, an “unjust enrichment” it says will take $1.8 billion to remedy. Meanwhile in Toronto, Uber has a growing artificial-intelligence lab led by a woman who’s spent years trying to make lidar technology less important.

Raquel Urtasun joined Uber to set up a new autonomous-vehicle research lab in May—almost three months after Alphabet filed suit. She still works one day a week in her old job as an associate professor at the University of Toronto. And she has long argued that self-driving vehicles can’t reach the masses unless the industry weans itself off lidar.

Most autonomous vehicles in testing—including Uber’s—pack one or more lidar sensors. But each lidar device costs from several thousand to several tens of thousands of dollars. Urtasun has shown that in some cases vehicles can obtain similar 3-D data about the world from ordinary cameras, which are much cheaper.

“If you want to build a reliable self-driving car right now we should be using all possible sensors,” Urtasun says. “Longer term the question is how can we build a fleet of self-driving cars that are not expensive.”

Even reducing the number, or quality, of lidar sensors a vehicle needs to drive safely could shift the economics of autonomous cars. It might also help a company with legal troubles that make developing in-house lidar technology difficult.

Urtasun showed off the results of her efforts to have cameras substitute for lidar at a computer-vision conference in New York a few weeks after joining Uber. The work was enabled by recent advances in algorithms that learn to process images. Videos showed 3-D views of streets in Karlsruhe, Germany, extracted from stereo images captured by ordinary cameras. Urtasun said the system could run in real time and compete with lidar within 40 meters of the car. That's a shorter range than high-end lidar sensors offer, suggesting that cameras can't yet do everything lidar can.
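The geometric idea behind letting stereo cameras stand in for lidar can be illustrated with a toy sketch. This is not Urtasun's system, which relies on learned image-processing algorithms; it is a minimal, classical version of the same principle: find how far each feature shifts horizontally between the left and right images (the disparity), then triangulate depth as Z = f·B/d. The focal length and camera baseline below are assumed values for illustration.

```python
import numpy as np

def disparity_map(left, right, max_disp=16, block=5):
    """Brute-force block matching: for each pixel in the left image,
    find the horizontal shift in the right image that minimizes the
    sum of absolute differences over a small window."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Triangulate: depth Z = f * B / d (meters), valid where d > 0."""
    return np.where(disp > 0, focal_px * baseline_m / np.maximum(disp, 1e-6),
                    np.inf)

# Synthetic stereo pair: a bright square shifted 4 px between the views.
left = np.zeros((40, 60)); left[15:25, 30:40] = 1.0
right = np.zeros((40, 60)); right[15:25, 26:36] = 1.0  # shifted left by 4 px
d = disparity_map(left, right)
# Sample at the square's left edge, where the match is unambiguous.
Z = depth_from_disparity(d, focal_px=700.0, baseline_m=0.54)
print(d[20, 30])   # disparity in pixels
print(Z[20, 30])   # 700 * 0.54 / 4 = 94.5 (meters)
```

Real systems replace the hand-tuned block matching with learned matching costs and regularization, which is what makes camera-only depth competitive with lidar at short range.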

Self-driving-car projects also use lidar to gather and update the high-resolution maps autonomous vehicles need to navigate. Urtasun calls the cost and time involved a “fundamental issue” preventing widespread use of self-driving cars. Developing more scalable approaches to mapping is now one strand of her research at Uber.

Urtasun’s previous work has shown that smart-camera software might help with the mapping problem, too. Her University of Toronto lab developed software that could generate maps of roads, parking lanes, sidewalks and other features from aerial and ground-level photos. Another project showed how cars might observe the position of the sun to determine their location without GPS. Eight of her grad students joined Uber with her; the group now numbers about 30, and is still hiring.

Urtasun’s prominence at Uber reflects a relatively new school of thought in the world of self-driving cars. The rush to commercialize the technology was catalyzed by a series of contests organized by the Pentagon in the mid-aughts. The community that formed was and still is dominated by roboticists, who tend to focus on developing reliable individual components and engineering them together, says Jianxiong Xiao, a former professor at Princeton.

Xiao and Urtasun come from a different field, computer vision. Xiao argues that they bring with them a nimbler mindset, helped by big leaps since 2012 in the power of computers to understand images due to an AI technique called deep learning. Urtasun believes ideas from that world will be central to achieving the dreams of the field. Xiao is CEO of AutoX, a 40-person company that modifies cars to drive themselves, even in the dark or during rain, just by adding software and a few cameras.