Google is the first company to do serious testing on a consumer-oriented driverless car platform, but the technology is still a long way from viability. Part of that is public acceptance, but cost is at least as much of a problem. Google’s car uses a lot of very advanced hardware. Simply knowing where the roads are isn’t enough for a robotic car: it needs to detect and avoid obstacles like other cars, along with the significantly less impact-resistant pedestrians and cyclists. The details of that detection system are every bit as fascinating as you might imagine.

Google’s driverless car tech uses an array of detection technologies including sonar devices, stereo cameras, lasers, and radar. These components have different ranges and fields of view, and each serves a particular purpose, according to the patent filings Google has made on its driverless cars. Anyone who has seen an image of Google’s self-driving Prius has probably noticed one of these systems poking up above the vehicle: the LIDAR remote sensing system. According to Google engineers, this is at the heart of object detection.

The LIDAR system bolted to the top of Google’s self-driving car is crucially important for several reasons. First, it’s highly accurate up to a range of 100 meters. A few detection technologies on the car work at greater distances, but not with the kind of accuracy you get from a laser. LIDAR bounces a beam off surfaces and measures how long the reflection takes to return, which gives the distance directly. The device used by Google, a Velodyne 64-beam laser, can also rotate 360 degrees and take up to 1.3 million readings per second, making it the most versatile sensor on the car. Mounting it on top of the car ensures its view isn’t obstructed.
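The time-of-flight principle behind laser ranging is simple enough to sketch. Here’s a minimal illustration of the math (the function name and numbers are ours, not Velodyne’s actual processing):

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_distance_m(round_trip_s):
    """Time-of-flight ranging: the laser pulse travels out and back,
    so distance is half the round-trip time multiplied by the speed of light."""
    return round_trip_s * C / 2.0

# A reflection arriving ~667 nanoseconds after the pulse left
# corresponds to a surface roughly 100 meters away.
print(round(lidar_distance_m(667e-9)))  # → 100
```

At 100 meters the round trip takes well under a microsecond, which is how a single spinning unit can rack up over a million readings per second.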

Google mounts regular cameras around the exterior of the car in pairs with a small separation between them. The overlapping fields of view create a parallax, not unlike your own eyes, that allows the system to track an object’s distance in real time. As long as an object has been spotted by both cameras in a pair, the car knows where it is. These stereo cameras have a 50-degree field of view, but they’re only accurate up to about 30 meters.
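Depth from a stereo pair follows the standard pinhole relationship: depth equals focal length times baseline divided by pixel disparity. A hedged sketch of that relationship (the calibration numbers are illustrative, not Google’s):

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: an object's depth is inversely proportional to its
    pixel disparity (horizontal shift) between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("object must appear in both cameras with positive disparity")
    return focal_px * baseline_m / disparity_px

# With a hypothetical 1000 px focal length and 0.3 m camera separation,
# a 10-pixel disparity puts the object at 30 m.
print(stereo_depth_m(1000, 0.3, 10))  # → 30.0
```

This also explains the roughly 30-meter limit: past that point the disparity shrinks to a pixel or two, and small measurement errors translate into large depth errors.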

Google’s LIDAR system is great for generating an accurate map of the car’s surroundings, but it’s not ideal for monitoring the speed of other cars in real time. That’s why the front and rear bumpers of the driverless car include radar. This is one of the few technologies in Google’s driverless car that you can already get in mainstream vehicles. Conventional vehicles use radar to warn of an impending impact or even apply the brakes to prevent one, but the Google car uses radar to adjust the throttle and brakes continuously. It’s essentially adaptive cruise control that always takes into account the movement of the cars around you.
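The behavior described, continuously trading throttle against brake to hold a gap behind a tracked car, can be sketched as a simple proportional controller. The gains and function names here are invented for illustration, not taken from Google’s system:

```python
def cruise_command(gap_m, desired_gap_m, own_speed, lead_speed,
                   k_gap=0.2, k_speed=0.5):
    """Return a signed command: positive -> throttle, negative -> brake.
    Radar supplies both the range to the lead car (gap_m) and, via the
    Doppler shift, its speed (lead_speed)."""
    gap_error = gap_m - desired_gap_m       # too close -> negative
    closing_error = lead_speed - own_speed  # closing in -> negative
    return k_gap * gap_error + k_speed * closing_error

# Tailgating a slower car: 20 m behind where 40 m is desired,
# doing 30 m/s against the lead car's 25 m/s -> brake.
print(cruise_command(20, 40, 30, 25))  # → -6.5
```

A real controller would add smoothing and acceleration limits, but the core loop is just this: re-evaluate the gap and relative speed on every radar update and nudge the pedals accordingly.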

The radar system is probably paired with sonar in at least some of Google’s test cars. While radar works up to 200 meters away, sonar is only good for 6 meters. They both have a narrow field of view, so the car knows things are about to get messy if another vehicle crosses the radar and sonar beams. This signal could be used to swerve, apply the brakes, or pre-tension the seatbelts.
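One plausible way to use such a pairing is to estimate time-to-collision from range and closing speed, and brace only when the object has crossed into sonar range with impact seconds away. This is our own sketch of the idea, not Google’s logic:

```python
def time_to_collision_s(range_m, closing_speed_mps):
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing; no collision course
    return range_m / closing_speed_mps

def should_brace(radar_range_m, sonar_range_m, closing_speed_mps,
                 threshold_s=1.0, sonar_max_m=6.0):
    """Brace (pre-tension belts, brake hard, swerve) once the object is
    inside sonar range and impact is less than a second away."""
    return (sonar_range_m <= sonar_max_m and
            time_to_collision_s(radar_range_m, closing_speed_mps) < threshold_s)

# A car 5 m ahead, closing at 10 m/s: half a second to impact.
print(should_brace(5.0, 5.0, 10.0))  # → True
```

Requiring agreement from both a long-range sensor (radar) and a short-range one (sonar) is a common way to cut false alarms before taking a drastic action like emergency braking.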

Google’s software integrates all the data from these remote sensing systems (as much as 1GB per second) to build a map of the car’s surroundings and its own position within them. Other cars are rendered as rough blocks with shifting, amorphous edges. That doesn’t have to be perfect: it’s not as though the Google car needs to get close enough to test the accuracy of the borders. GPS alone is only accurate to within a few meters, nowhere near enough to keep a car in its lane, which is why this dense local sensor data matters so much. Combine such an obstacle detection system with millions of miles driven in a Matrix-like simulation of California, and you have a very advanced autonomous driving system that’s getting more advanced all the time. That’s the power of data.
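Rendering other cars as rough blocks with fuzzy edges is essentially an occupancy grid: a coarse map where any cell touched by a detection is simply marked occupied, with no attempt at exact outlines. A toy version of the idea, ours rather than Google’s:

```python
def make_grid(width, height):
    """Empty occupancy grid; 0 = free, 1 = occupied."""
    return [[0] * width for _ in range(height)]

def mark_block(grid, x0, y0, x1, y1):
    """Rasterize a detected vehicle as a rough bounding box.
    The borders don't need to be exact, just conservative."""
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            grid[y][x] = 1

grid = make_grid(10, 10)
mark_block(grid, 2, 3, 4, 6)   # a car-sized blob ahead and to the left
print(sum(map(sum, grid)))     # → 12 occupied cells (a 3 x 4 block)
```

The planner only ever needs to ask one cheap question of this map: is the path ahead free? That’s why amorphous edges are good enough.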