The computers that control self-driving cars are gaining valuable knowledge about the real world in some surprising ways—including browsing online maps and playing video games.

Researchers at Princeton University recently developed a computer vision and mapping system that gathered useful information about the physical properties of roads by studying Google Street View and comparing the scenes to the information provided in open-source mapping data. This allowed it to, for example, learn where the edges of an intersection should be based on images captured by Google’s mapping cars.

In separate work revealed Wednesday, researchers at OpenAI, a nonprofit focused on fundamental AI research, created a way for software agents to learn driving strategies by experimenting inside the video game Grand Theft Auto V, through a platform known as Universe. Some video games are now so visually realistic that they can let a computer-vision system learn about the real world (see “Self-Driving Cars Can Learn a Lot by Playing Grand Theft Auto”).

New approaches to training self-driving cars may help democratize the technology and make it more reliable. Self-driving cars were everywhere at this year’s Consumer Electronics Show in Las Vegas, and the technology is front and center at the North American Auto Show in Detroit, which started this week. But not everyone has the resources of Ford, Google, or Uber, and automated vehicles still struggle in many situations (see “What to Know Before You Get in a Self-Driving Car”). So some researchers are coming up with creative ways to gather the data and train driving systems. There are even efforts to open source the technology required for automated driving.

The Princeton researchers mined Google Street View and OpenStreetMap for their data, training their system on 150,000 Street View panoramas. Road features in Google Street View images are sometimes occluded by a vehicle, someone crossing the road, or something else, so the system had to learn to recognize, and then discard, such artifacts. When the researchers tested the system on new images, it discerned road features fairly accurately. They say it could offer a way to bootstrap a self-driving system with some of the basic knowledge required to navigate ordinary roads.
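The setup described above is, at heart, supervised learning: images from Street View paired with labels derived from OpenStreetMap. The following minimal numpy sketch illustrates that pairing with synthetic stand-ins — random feature vectors in place of panoramas, and a binary label (say, one-way vs. two-way street) in place of map annotations — and a logistic regression in place of the deep network the researchers would actually use. It is an illustration of the training loop, not the authors’ code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: each "panorama" is a feature vector, and each label
# (e.g., one-way vs. two-way street) would in practice come from OpenStreetMap.
n, d = 1000, 16
true_w = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression trained by gradient descent stands in for the
# convolutional network that would process real panoramas.
w = np.zeros(d)
lr = 0.5
for _ in range(200):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n
    w -= lr * grad

accuracy = np.mean((sigmoid(X @ w) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The appeal of the approach is visible even in this toy: the labels cost nothing to produce, because they are read off an existing public database rather than annotated by hand.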

Its accuracy should improve as the training data set grows, says Ari Seff, a graduate student at Princeton who developed the system with Jianxiong Xiao, a professor who recently left the university to found an automotive startup called AutoX.ai.

“The manual creation of high-definition 3-D maps for autonomous driving is tedious and expensive,” says John Leonard, a professor at MIT’s CSAIL, who specializes in mapping and automated driving. “If this process can be automated using deep networks that operate on large public databases, this would be a big win for self-driving technology.”

The approach also offers a way to train a system to recognize situations that a real self-driving car may only encounter rarely, like a very complex intersection. “These models could potentially be used as part of a backup system in autonomous vehicles, corroborating the information provided by pre-scanned 3-D maps. However, we have not yet tested this in a real vehicle,” Seff says.

The researchers also suggest that their system could provide a warning about road infrastructure—for example, if the system concludes that a street looks as if it is one-way when it isn’t, then the signage may need to be updated. The system has clear limits, though: it cannot identify objects that aren’t recorded in a map, like pedestrians or other vehicles, and it isn’t accurate enough to localize a car very precisely.

“Learning this from Google Street View is a good idea,” says Craig Quiter, an engineer at Otto, a company that makes self-driving trucks and which was acquired by Uber last year. “The outputs don't contain enough to drive a car with, but definitely are helpful along with other perception as input to a planner.”

Quiter developed the Grand Theft Auto V integration while working as a contractor for OpenAI last year. The game is realistic enough to help train software to recognize elements of the real world.

“GTA V gives researchers access to a rich, diverse world for testing and developing AI,” Quiter writes in a blog post published by OpenAI. “Its island setting is almost one-fifth the size of Los Angeles, giving access to a broad range of scenarios to test systems. Add to that the 257 different vehicles, seven types of bicycles, and 14 weather types, and it’s possible to explore a huge number of permutations using a single simulator.”

Through Universe, an agent can also develop a driving strategy by experimenting within the game and refining its own behavior as it achieves certain goals, an approach known as reinforcement learning (see “A New Tool Lets AI Learn to Do Almost Anything on a Computer”).
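The reinforcement-learning loop described above — act, observe a reward, refine the policy — can be sketched without Universe or the game itself. The toy below substitutes a hypothetical one-dimensional “lane keeping” environment (an assumption for illustration, not part of Universe) whose reset/step interface mirrors the Gym-style API that Universe environments expose, and trains a tabular Q-learning agent to steer back toward the lane center.

```python
import random

random.seed(0)

# Toy stand-in for a driving environment: the state is the car's lane
# offset (-2..2), actions nudge it left (-1), hold straight (0), or
# right (+1), and the reward penalizes distance from the lane center.
class ToyLaneEnv:
    def reset(self):
        self.offset = random.choice([-2, -1, 1, 2])
        return self.offset

    def step(self, action):  # action in {-1, 0, 1}
        self.offset = max(-2, min(2, self.offset + action))
        return self.offset, -abs(self.offset)

# Tabular Q-learning: the agent refines its behavior from reward alone,
# with no labeled examples of "correct" driving.
Q = {(s, a): 0.0 for s in range(-2, 3) for a in (-1, 0, 1)}
alpha, gamma, eps = 0.5, 0.9, 0.1
env = ToyLaneEnv()
for episode in range(500):
    s = env.reset()
    for _ in range(10):
        if random.random() < eps:  # occasional exploration
            a = random.choice([-1, 0, 1])
        else:  # otherwise act greedily on current value estimates
            a = max((-1, 0, 1), key=lambda a: Q[(s, a)])
        s2, r = env.step(a)
        best_next = max(Q[(s2, a2)] for a2 in (-1, 0, 1))
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy steers toward the lane center from either side.
policy = {s: max((-1, 0, 1), key=lambda a: Q[(s, a)]) for s in range(-2, 3)}
print(policy)
```

A real agent in GTA V faces the same loop at vastly larger scale: the state is a rendered frame rather than a single number, and the reward encodes goals like staying in lane or avoiding collisions.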

Quiter added in an e-mail that releasing the technology required for automated driving will help researchers and companies democratize it. “I think it just got way easier to test self-driving car AI,” he says.