If you’ve ever wondered how Google’s self-driving car tells drivers apart from cyclists and other road users, the company’s latest report on the project should shed a bit of light on the topic.

It turns out that (as with many of the company’s products) machine learning algorithms figure heavily in the car’s detection technology. By “seeing” many examples of bicycles with its cameras and sensors, the car’s computer has effectively been taught what bicycles look like from every angle.

“Our software learns from the thousands of variations it has seen — from multicoloured frames, big wheels, bikes with car seats, tandem bikes, conference bikes, and unicycles,” Google said in its report, published Tuesday.

Haven’t heard of some of the bicycle types on that list? Chances are you’d still recognize them if you saw them on the road, and you’d instinctively steer clear. Machines are different: they have to be taught that a bicycle is an object distinct from the rest of the environment. That isn’t always easy; even when Google’s driverless car can see a cyclist, it doesn’t always know what to do with that information.
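The learning-by-example idea the report describes can be sketched in miniature. To be clear, this is not Google’s software: the feature set (wheel count, length, typical speed), the example numbers, and the simple nearest-centroid rule below are all invented purely to illustrate how a program can learn a category from labelled examples.

```python
# Toy sketch of classifying an object from labelled examples.
# All features and numbers are invented for illustration only.

def centroid(samples):
    """Average each feature across a list of feature vectors."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def distance_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Invented training examples: [wheel_count, length_m, typical_speed_kmh]
training = {
    "bicycle": [[2, 1.8, 20], [2, 2.4, 18], [1, 1.5, 12]],  # includes a unicycle
    "car":     [[4, 4.5, 60], [4, 4.0, 50], [4, 5.2, 70]],
}

# "Training" here is just summarizing each category by its average example.
centroids = {label: centroid(samples) for label, samples in training.items()}

def classify(features):
    """Assign the label whose training centroid is nearest."""
    return min(centroids, key=lambda label: distance_sq(features, centroids[label]))

print(classify([2, 2.0, 15]))  # a small two-wheeler -> "bicycle"
print(classify([4, 4.8, 55]))  # -> "car"
```

Real perception systems use far richer inputs (camera pixels, lidar returns) and far more flexible models, but the principle is the same: the more varied the labelled examples, the more variations the system can recognize.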

When drivers and cyclists mingle, what you often get is an unsafe situation where all the ordinary rules of the road seem to go out the window. But if self-driving cars can learn to coexist safely with all kinds of uncertainty, our streets and cities could become much safer places for people of all ages.
