Now, I don’t own a Tesla myself (yet), but I do love and take good care of my 2014 Honda Civic Coupe EX. I baby it. And a big part of babying it involves *not* driving behind trucks and trailers that kick up all kinds of debris — debris that can chip the paint on the hood.

Paint chips would be glaringly obvious on this rainbow colored Model X

Self-driving vehicles don’t know to avoid these kinds of vehicles and the damage they can cause, because the algorithms that run them don’t yet optimize for “body damage avoidance”. We’re still in the early stages of self-driving cars, so most of the development effort has gone towards improving safety (as it should).

However, what if there were a “Paint Protect” mode in self-driving cars? This option would activate a set of weights that, in turn, affect the decision-making of the Autopilot. Here’s how it would work:

Paint Protect is a reinforcement learning algorithm that learns how to avoid paint damage while driving. It accomplishes this with a reward function that grants a reward for every 100 feet driven without being struck by detritus.
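To make that concrete, here is a minimal sketch of what such a reward function might look like. The function name, the per-100-feet reward, and the penalty weight are all illustrative assumptions, not anything Tesla actually ships:

```python
def paint_protect_reward(feet_driven: float, strikes: int) -> float:
    """Hypothetical Paint Protect reward: +1 for every full 100 feet
    driven, minus a penalty for each detected debris strike.

    STRIKE_PENALTY is an assumed weight; in practice it would be tuned
    so that avoiding one strike outweighs many feet of cautious driving.
    """
    STRIKE_PENALTY = 10.0
    return (feet_driven // 100) - STRIKE_PENALTY * strikes
```

So 250 clean feet earns a reward of 2, while 100 feet with one strike nets a strongly negative reward, pushing the policy toward debris avoidance.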

But how can you detect when detritus hits the car? Easy.

Car bodies are made of elastic materials, like plastics and metals (in Tesla’s case, aluminum). Every elastic object has a resonant frequency. This means that when a car hood or bumper is struck, it tends to make the same type of noise. This is what rapping all over my car’s hood sounds like:

Hear how similar each strike sounded? While I was hitting many different sections of the hood, the sounds were nevertheless very similar, with a peak frequency around 250 Hz and a fatter band peaking around 70 Hz. Why does the sound’s reproducibility matter? Because if striking a car’s hood makes a distinctive sound, we can train our algorithm to recognize said sound, and therefore infer when the car is being damaged (or at least when it’s being hit). For reference, this is exactly the principle that makes tuning forks so useful. They reliably ring out with a peak at a specific frequency (like 440 Hz for the note A).

Notice the frequency with the highest amplitude is ~250 Hz, with a wide band around -35 dB at 70 Hz. This frequency spread is like a fingerprint of how strikes against the car sound.
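Extracting that fingerprint is a standard spectral-analysis job. Here’s a small sketch using NumPy’s FFT, with a synthetic “strike” built from the two frequencies measured above (the sample rate and signal are stand-ins for a real microphone recording):

```python
import numpy as np

SAMPLE_RATE = 8000  # Hz; assumed microphone sample rate

def peak_frequency(signal: np.ndarray, sample_rate: int = SAMPLE_RATE) -> float:
    """Return the frequency (Hz) with the highest magnitude in the signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

# Synthesize a toy "hood strike": a dominant 250 Hz tone
# plus a weaker 70 Hz component, mimicking the measured spectrum.
t = np.arange(0, 0.5, 1.0 / SAMPLE_RATE)
strike = np.sin(2 * np.pi * 250 * t) + 0.4 * np.sin(2 * np.pi * 70 * t)
```

Running `peak_frequency(strike)` on this toy signal recovers the 250 Hz peak; a real system would compare the whole spectral shape, not just the single loudest bin.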

All the vehicle has to do is use a microphone to listen for a short, sharp noise with the sound fingerprint of its hood or front grill, and it can then infer when it is being hit. When a vehicle detects a debris strike, it can inspect the other sensory data from the vehicle, such as the video feed, weather conditions, ambient noise, etc., and make associations. For example, it might learn that on dry days, gravel roads have an increased likelihood of causing strikes against the hood.
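As a sketch, the detect-and-log step could look like the following. The fingerprint values come from the hood recording above; the tolerance, function names, and context fields are illustrative assumptions:

```python
HOOD_FINGERPRINT_HZ = (250.0, 70.0)  # peak frequencies measured on the hood
TOLERANCE_HZ = 15.0                  # assumed matching tolerance

def matches_fingerprint(peaks_hz, fingerprint=HOOD_FINGERPRINT_HZ,
                        tol=TOLERANCE_HZ) -> bool:
    """True if every fingerprint peak has a nearby peak in the observed sound."""
    return all(any(abs(p - f) <= tol for p in peaks_hz) for f in fingerprint)

strike_log = []

def record_strike(peaks_hz, context) -> bool:
    """If a sound matches the hood fingerprint, log it alongside whatever
    sensor context (weather, road type, lead vehicle, etc.) was current."""
    if matches_fingerprint(peaks_hz):
        strike_log.append(context)
        return True
    return False
```

Over time, `strike_log` accumulates the raw material for associations like “dry day + gravel road + truck ahead = hood strikes”.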

Paint chips are bad news bears

The Autopilot network (or some third-party equivalent) would then use these associations to adjust Autopilot’s weights. For example, it may change how much space a self-driving vehicle leaves between itself and the vehicle in front of it, or even which types of vehicles the Autopilot is willing to drive behind.
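One simple way those learned associations could feed back into driving behavior is by scaling the following gap per lead-vehicle type. Everything here (thresholds, multipliers, the strike-rate units) is a made-up illustration of the idea:

```python
def following_distance_m(base_m: float, lead_vehicle_type: str,
                         strike_rates: dict) -> float:
    """Scale the following gap by the observed strike rate (strikes per
    mile, assumed) for the type of vehicle ahead. Thresholds are
    illustrative, not real Autopilot parameters."""
    rate = strike_rates.get(lead_vehicle_type, 0.0)
    if rate > 0.1:
        return base_m * 3.0   # hang well back from known debris-flingers
    if rate > 0.01:
        return base_m * 1.5
    return base_m
```

A gravel truck with a high observed strike rate triples the gap; a clean-record sedan gets the default.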

And since “Paint Protect” mode relies upon Autopilot and just adjusts its weights, it can be toggled on and off depending on the variables the vehicle’s owner wants to optimize for. For example, drafting behind a large vehicle like a truck is very energy-efficient, so perhaps one could also toggle an “Efficiency” mode (one that maximizes the miles driven per kWh), which would encourage the Autopilot to draft behind trucks (at the expense of an increased likelihood of paint-chipping).
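Because both modes just re-weight the same planner, toggling between them could be as simple as swapping a multiplier set. The mode names and numbers below are, again, purely illustrative:

```python
def target_gap_m(mode: str, base_m: float = 40.0) -> float:
    """Toggleable driving modes re-weight the same following-gap planner.
    Multipliers are made-up illustrations of each mode's tradeoff."""
    multipliers = {
        "paint_protect": 2.5,  # more distance, fewer chips
        "efficiency": 0.5,     # draft close behind trucks, save kWh
        "default": 1.0,
    }
    return base_m * multipliers.get(mode, 1.0)
```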

This type of min-maxing will be commonplace in the future, with people giving their Autopilot all sorts of demands — get them to their destination as fast as possible, take a scenic route without tolls, or go as far as possible without needing to recharge. Self-driving AI will be able to optimize itself for the desires of its passengers on-the-fly.