Tesla announced yesterday (Sept. 11) that Autopilot, its semi-autonomous driving feature, would soon be updated to a new version that uses a car's radar system, in addition to its cameras, to make decisions on the road.

But perhaps the most interesting part of Tesla's announcement is how the thousands of vehicles in its fleet are collectively learning to be better self-driving cars. One of Tesla's key advantages over incumbent carmakers is how much data it collects, and the fact that it actually intends to use it.

Here, from Tesla’s blog post, is a great example of why that matters and how it works:

When the car is approaching an overhead highway road sign positioned on a rise in the road or a bridge where the road dips underneath, this often looks like a collision course. The navigation data and height accuracy of the GPS are not enough to know whether the car will pass under the object or not. By the time the car is close and the road pitch changes, it is too late to brake.

This is where fleet learning comes in handy. Initially, the vehicle fleet will take no action except to note the position of road signs, bridges and other stationary objects, mapping the world according to radar. The car computer will then silently compare when it would have braked to the driver action and upload that to the Tesla database. If several cars drive safely past a given radar object, whether Autopilot is turned on or off, then that object is added to the geocoded whitelist.

When the data shows that false braking events would be rare, the car will begin mild braking using radar, even if the camera doesn't notice the object ahead. As the system confidence level rises, the braking force will gradually increase to full strength when it is approximately 99.99% certain of a collision. This may not always prevent a collision entirely, but the impact speed will be dramatically reduced to the point where there are unlikely to be serious injuries to the vehicle occupants.
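The logic Tesla describes can be sketched roughly as follows. This is a minimal illustration, not Tesla's actual system: the class, the pass-count threshold, and the "mild braking" cap are all assumptions made up for the example; only the 99.99% confidence figure comes from the blog post.

```python
# Illustrative sketch of the fleet-learning whitelist described above.
# All names and thresholds are hypothetical, except the ~99.99% figure,
# which Tesla's post gives as the confidence for full-strength braking.
from collections import defaultdict

SAFE_PASS_THRESHOLD = 5          # assumed: safe passes needed to whitelist
FULL_BRAKE_CONFIDENCE = 0.9999   # "approximately 99.99% certain"

class FleetWhitelist:
    def __init__(self):
        self.safe_passes = defaultdict(int)  # geocoded object id -> count
        self.whitelist = set()

    def report_pass(self, object_id, driver_braked):
        """A car drove past a radar object; record whether the driver braked.

        If several cars pass the object without braking, it is treated as a
        stationary overhead structure (sign, bridge) and whitelisted.
        """
        if not driver_braked:
            self.safe_passes[object_id] += 1
            if self.safe_passes[object_id] >= SAFE_PASS_THRESHOLD:
                self.whitelist.add(object_id)

    def braking_force(self, object_id, collision_confidence):
        """Return a braking force in [0, 1] for a detected radar object.

        Whitelisted objects get no braking; otherwise braking scales from
        mild up to full strength as collision confidence rises.
        """
        if object_id in self.whitelist:
            return 0.0
        if collision_confidence >= FULL_BRAKE_CONFIDENCE:
            return 1.0
        return min(collision_confidence, 0.3)  # assumed cap for mild braking
```

The key design point, mirrored here, is that the fleet starts in a log-only mode (`report_pass`) and only graduates to actually braking once the shared whitelist makes false positives rare.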