You’re on the highway, but your hands aren’t on the steering wheel. The adaptive cruise control, bouncing radar signals off the vehicle ahead, is keeping a perfect pace. The blind spot system, using more cameras and radar, warns you if a car is approaching on either side, and the lane keeping system nudges your wheels back in line. You were busy adjusting the stereo, yet you’ve been driving safely all along.

Semi-autonomous controls are already available on many new cars, and amazingly, they do work. The technology is not yet ready to let you close your eyes and recline at 70 mph. Should you try, all sorts of bells will sound and lights will flash, and then you will most definitely crash. The self-driving car is not here, and it won’t be until it starts to think and act like a real person.

To get there, automakers like Ford need lots of smart people to write algorithms that can make human-like driving judgments for us. And lo and behold, that’s exactly what MIT is doing, yet again.

MIT has been working with Ford since 1998 on everything from voice recognition to how older drivers interact with modern technology. Now, one month after Ford debuted its first autonomous research vehicle, MIT (and Stanford, too) will help the company predict what people in other cars are about to do.

“How do humans do it?” says Greg Stevens, Ford’s project director. “How do we make computers do the same things? We’re making models.”

At the Washington Auto Show last month, Stevens explained what those spinning caps on the roof of his Fusion Hybrid can do. They’re 360-degree LIDAR (Light Detection and Ranging) scanners. Where radar bounces radio waves off physical objects, LIDAR bounces low-energy laser beams. By timing those reflections, the car’s LIDAR scanners build a three-dimensional image of the car’s immediate surroundings, letting it judge distances, pick out other objects, and read the road itself.
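The timing trick at the heart of LIDAR is simple to sketch. Here’s a minimal illustration (not Ford’s actual code) of how a single laser pulse’s round-trip time becomes a distance estimate; the function name is hypothetical:

```python
# Minimal sketch: turning a LIDAR pulse's round-trip time into distance.
C = 299_792_458.0  # speed of light, in meters per second

def distance_from_pulse(round_trip_seconds: float) -> float:
    """The pulse travels out to the object and back, so halve the trip."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after 200 nanoseconds hit something about 30 m away.
print(round(distance_from_pulse(200e-9), 1))  # → 30.0
```

A real scanner fires hundreds of thousands of such pulses per second while spinning, and the resulting cloud of distance measurements is what forms that three-dimensional image.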

MIT is helping Ford program the software that analyzes what the scanners see. Aeronautics and astronautics professor Jonathan How has a lot of experience with autonomous cars, having led an MIT team through the US military’s grueling DARPA (Defense Advanced Research Projects Agency) Urban Challenge in 2007. MIT’s autonomous Land Rover was one of only six cars to finish the entire 60-mile course, which was designed to test how autonomous vehicles might survive in combat.

How and his team of students are trying to build algorithms that recognize motion cues, such as another car drifting out of its lane or braking hard, and ultimately infer the driver’s goals. Will the guy in front of me turn off at that exit, or is he just passing on the right? Is she tailgating because she’s about to come over without signaling? MIT wants to make a line of code for every possible situation.
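To give a flavor of what “a line of code for every possible situation” might look like at its very simplest, here is an illustrative sketch. The function, cue names, and thresholds are all hypothetical, not MIT’s actual models; real systems would use probabilistic estimates rather than hard rules:

```python
# Illustrative sketch (hypothetical thresholds, not MIT's models):
# guessing a neighboring driver's intent from simple motion cues.
def classify_intent(lateral_drift_mps: float,
                    decel_mps2: float,
                    signaling: bool) -> str:
    """Map raw motion cues to a coarse guess about the driver's goal."""
    if signaling or lateral_drift_mps > 0.5:
        return "lane_change"   # drifting sideways faster than 0.5 m/s
    if decel_mps2 > 3.0:
        return "hard_brake"    # slowing hard, maybe exiting or stopping
    return "cruising"

print(classify_intent(0.8, 0.0, False))  # → lane_change
print(classify_intent(0.1, 4.0, False))  # → hard_brake
```

The hard part, and the reason the work takes years, is that real drivers rarely fit such clean rules, which is why the researchers need mountains of observed driving data to tune them.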

“Once we have a complete set of sensors on the vehicle, then we can start those scenarios,” says Stevens. “It’s a process of elimination.”

He’s referring to the myriad ways that automakers are trying to detect threats on the road. Subaru, for example, uses two color cameras for everything from auto-braking to collision alerts, while Volvo uses a combination of cameras and LIDAR. And the software that interprets what may or may not be happening varies widely from one manufacturer to the next.

Getting back to those four spinning scanners on the roof: They’re awkward, and Ford knows it. They used to be one foot tall and weigh 29 pounds apiece; now they weigh only four pounds and stand four inches tall. Soon, they’ll be half that size, at which point Ford may be able to hide them within the car. Judging from the stacks of metal boxes and wires stuffed in the car’s trunk, and the heat all those electronics give off, that won’t happen overnight.

Now that Michigan has become the fourth state to allow autonomous test cars, Ford will be able to run MIT’s experiments in real traffic. But until more data is collected (one MIT professor has been recording his morning commute with a dash cam for the better part of a year), no one will know how to make self-driving cars work without failure.

Stevens sums it up quickly: “Right now, it’s a lot of people’s best guesses.”