Welcome to Ars UNITE, our week-long virtual conference on the ways that innovation brings unusual pairings together. Today, a look at the slow roll to autonomous cars. Join us this afternoon (3pm ET) for a live discussion on the topic with article author Jonathan Gitlin and his expert guests; your comments and questions are welcome.

Self-driving AI cars have been a staple in popular culture for some time—any child of the 1980s will fondly remember both the Autobots and Knight Rider’s KITT—but consider them to be science fiction no longer. Within the next five years, you’ll be able to buy a car that can drive itself (and you) down the highway, although transforming into a Decepticon-battling robot or crime-fighter may take a while longer. As one might expect, the journey to fully automated self-driving cars will be one of degrees.

Here in the US, the National Highway Traffic Safety Administration (NHTSA) has created five categories of autonomous cars, running from level zero to level four. The most basic of these is level zero, which covers vehicles with no automated control at all, not even a system like electronic stability control. Fully autonomous cars, which can complete their journeys with no human control beyond choosing the destination, are categorized as level four. While level fours are still some way off, level three autonomous cars, which will be able to self-drive under certain conditions (say, an HOV lane during rush hour), are much closer than one might think.

A couple of weeks ago, Tesla wooed its fan base with the news that soon, its cars will be able to drive themselves. But the autonomous car may be one of the company's least innovative moves yet. Those who’ve been watching the industry closely will know that Mercedes, Volvo, Audi, and others have similar products waiting in the wings, ready to hit the streets as soon as the rules and regulations fall into place.

First steps

It all used to be so simple. A car was just a car; a mechanical contraption with an engine and wheels, controlled by a human being with a combination of pedals, levers, and a steering wheel. Vehicle-to-vehicle (V2V) communication meant using turn signals or perhaps gesticulating rudely out the window to indicate displeasure at being cut off in traffic. However, as semiconductors became cheaper, faster, and more rugged, they attracted the attention of the auto industry. Electronics began to infiltrate our cars, with fuel injection replacing carburetors in the name of performance and efficiency, for example, and anti-lock brakes (ABS) being added for safety.

By 1995, electronic stability control (ESC) systems started to appear, with Mercedes-Benz leading the way in its flagship S-Class. Cars equipped with ESC are constantly monitoring their driver’s steering inputs and comparing them to the direction the vehicle is headed. If those two variables start to diverge beyond certain limits (because the car is either under- or oversteering), ESC will apply the brakes to individual wheels to bring things back under control. Stability control systems proved so effective at reducing both crashes and injuries that they became mandatory for any car sold in the US or EU by the end of 2011.
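The decision at the heart of ESC can be sketched in a few lines. The following Python snippet is a deliberately simplified illustration, not any manufacturer's actual control algorithm; the 0.1 rad/s threshold and the wheel-selection logic are invented for the example.

```python
def esc_correction(commanded_yaw, measured_yaw, threshold=0.1):
    """Pick the single wheel to brake, given the yaw rate the driver is
    asking for (derived from the steering angle) versus the yaw rate the
    car is actually experiencing. Rates are in rad/s; positive means a
    left turn. Threshold and wheel choices are illustrative only."""
    error = commanded_yaw - measured_yaw
    if abs(error) <= threshold:
        return None  # the car is going where the driver points it
    turning_left = commanded_yaw > 0
    if (error > 0) == turning_left:
        # Understeer: the car is rotating less than commanded.
        # Brake the inside rear wheel to pull the nose into the turn.
        return "rear_left" if turning_left else "rear_right"
    # Oversteer: the car is rotating more than commanded.
    # Brake the outside front wheel to counter the incipient spin.
    return "front_right" if turning_left else "front_left"
```

A real system runs this comparison many times per second, and modulates brake pressure continuously rather than picking one wheel on or off.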

Eyes and ears

The mandate in effect made ABS and traction control standard features, too. So any car one might buy today will not only constantly be monitoring both its direction and where it’s heading, but also whether an individual wheel is spinning too much (because of a loss of grip) or even not at all (locked by a brake). These various safety aids aren’t sufficient for self-driving cars. They only take control during emergencies to slow a vehicle, but with the advent of drive-by-wire throttles and steering—something we explored recently—all that remains is for the vehicle to be able to ‘see’ the environment around it and have a ‘brain’ fast enough to make sense of that data to control where it goes. No biggie.

As it turns out, most of the technology needed for a car to sense the world around it already exists. Adaptive cruise control and lane-keeping systems—as fitted to the Audi A8, for example—use a mix of optical, radar, and ultrasonic sensors to keep a car from veering out of its lane and, by constantly checking the range to other vehicles, from hitting any of them. Image recognition software will even detect speed limits on road signs and alert the driver. All of this would have seemed like science fiction even a decade ago, but it really is just the beginning. Quite soon, those sensors will do more than just tell your car what’s around it, thanks to what’s known as V2V.
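That range-keeping behavior amounts to a simple feedback loop. Here is a toy sketch in Python of how adaptive cruise control might trade off holding a set speed against holding a safe gap; the gains, the 1.8-second time gap, and the function itself are invented for illustration and bear no relation to any production controller.

```python
def acc_command(own_speed, range_to_lead, closing_speed,
                set_speed=31.0, time_gap=1.8, k_gap=0.3, k_close=0.8):
    """Toy adaptive-cruise controller returning an acceleration command
    in m/s^2. Speeds are in m/s, range in meters; closing_speed is
    positive when the gap to the lead vehicle is shrinking."""
    cruise_cmd = k_gap * (set_speed - own_speed)
    if range_to_lead is None:
        return cruise_cmd  # nothing ahead: just converge on the set speed
    # Hold a speed-dependent gap: faster travel demands more room.
    desired_gap = time_gap * own_speed
    follow_cmd = (k_gap * (range_to_lead - desired_gap)
                  - k_close * closing_speed)
    # Never accelerate harder than the following logic allows.
    return min(cruise_cmd, follow_cmd)
```

Closing fast on a slower car yields a braking (negative) command; with a clear road and speed below the set point, it yields gentle acceleration.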

Ars IT Editor Sean Gallagher went for a ride in a V2V-enabled Ford at CES.

The cloud

As Ars' Sean Gallagher found out early this year, V2V-enabled cars can communicate with each other, warning of upcoming road hazards. V2V is being built atop 802.11p, a Wi-Fi standard that uses 75 MHz of spectrum centered on 5.9 GHz. 802.11p allows almost-instant network connections and can broadcast messages without establishing a network connection first, both of which are extremely desirable when thinking about the safety aspects of V2V. After all, it’s no good telling another car about a road hazard if you need to spend precious seconds handshaking. V2V-enabled cars will be able to quite literally see around corners, since the technology doesn’t require line of sight.
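The safety case rests on that connectionless broadcast: a warning goes out once, to everyone in range, with no handshake first. 802.11p itself sits below the level an ordinary socket API exposes, but the same idea can be illustrated with a plain UDP broadcast in Python. The JSON field names and the port number here are invented for the example; real DSRC safety messages use the binary SAE J2735 format.

```python
import json
import socket

def encode_hazard(lat, lon, hazard):
    """Pack a hazard warning as a small JSON datagram (field names are
    an invented stand-in for the real binary message format)."""
    return json.dumps({"lat": lat, "lon": lon, "hazard": hazard}).encode()

def broadcast_hazard(message, port=5900):
    """Fire the datagram at every listener on the local segment in one
    shot. Note there is no connect() and no handshake before sending;
    that is the property that makes 802.11p-style messaging fast enough
    for safety warnings. (The port is an arbitrary example choice.)"""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(message, ("255.255.255.255", port))
```

A TCP-style exchange, by contrast, needs a three-way handshake per peer before any payload moves, which is exactly the delay the standard is designed to avoid.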

But wait, there’s more, and it’s coming from the cloud. More and more cars are coming equipped with LTE data connections, mainly in response to consumer demand for streaming media services. Passenger entertainment may seem trivial to some, but persistent data connections also enable in-car navigation systems to get a lot smarter. I’m probably not alone, for example, in ditching either a standalone or built-in GPS unit in favor of a smartphone app like Google Maps or Waze. And if you’re like me, you probably did it for the same reason: the smartphone apps are able to provide layers of real-time data (like traffic) on top of the cartography. Data-enabled cars mean we can ditch the smartphone holders and go back to using that onboard navigation system. That navigation data will also allow the car to know where it is in the world and, to a certain extent, what it’s likely to encounter.

That kind of map data is sufficiently informative for human drivers to use while they navigate, but even combined with GPS it’s not going to be accurate enough for a self-driving car (the civilian GPS signal is only specified to be accurate to within 7.8 meters, 95 percent of the time). No, that’s going to require an extremely high-resolution map, and that map will need to stay accurate, which means constant updating. Writing for Slate, Lee Gomes identified this as a problem for Google, but other companies, particularly Nokia, think they might have this one licked.

Nokia’s HERE platform begins by mapping streets in the conventional 21st century way—with a small fleet of sensor- and GPS-equipped mapping vehicles, which it uses to create an HD map that’s machine (but not human) readable. But in addition to providing location data to HERE-enabled cars, Nokia will leverage them to continually update that map in near-real time. Those same cars will send sensor data about the road—things like the position of road lane markers accurate to a few centimeters—resulting in an always up-to-date map.
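One way to keep such a map current is to fold each car's noisy sightings of a feature into a running estimate of its position. The sketch below shows that idea in Python with a simple incremental average; the data model is entirely invented, since Nokia has not published HERE's internals.

```python
class LaneMarkerMap:
    """Toy crowdsourced map: one running-average position estimate per
    lane marker, refined as connected cars report new sightings."""

    def __init__(self):
        # marker_id -> (avg_x_m, avg_y_m, num_reports)
        self.markers = {}

    def report(self, marker_id, x, y):
        """Fold one car's sighting (meters, in some shared map frame)
        into the estimate. As reports accumulate, each new sighting
        moves the average less, so noise washes out."""
        if marker_id not in self.markers:
            self.markers[marker_id] = (x, y, 1)
            return
        ax, ay, n = self.markers[marker_id]
        n += 1
        self.markers[marker_id] = (ax + (x - ax) / n,
                                   ay + (y - ay) / n, n)

    def position(self, marker_id):
        ax, ay, _ = self.markers[marker_id]
        return ax, ay
```

A production system would also weight reports by sensor quality and age out stale data when a road is repainted, but the core trick is the same: many cheap, noisy observations converging on a centimeter-grade estimate.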

Nokia also has other plans for using crowdsourced data to improve the self-driving car. We recently spoke with HERE's head of Automotive Cloud Services, Vladimir Boroditsky, who told Ars the company plans to use crowdsourced data from connected cars to create data sets of driving behavior that the company can use to train car software how to drive without terrifying or aggravating humans along for the ride. Compared to the alternative of hand-coding every driving behavior, it certainly sounds like an efficient solution.