Tesla CEO Elon Musk has always argued that data collection is his car company’s secret sauce, and he did it again on Wednesday, on a call with investors. Having a big fleet of Tesla vehicles on the road is nice, he said, “because it allows us to collect these corner cases and learn from them.” Every so often, Tesla will push out an over-the-air software update, suddenly giving its cars the power to, say, fart on command, or play Netflix movies over the center console screen, or change lanes by themselves on the highway. (Musk has said that a Tesla purchased today has all the hardware it needs to become a totally self-driving car—once, of course, the company pushes out the right software update.) Then it will suck back information about how that feature is working in the real world, and use it to perfect the product.

Last month, Tesla pushed out a feature called “Smart Summon,” which allows drivers to beckon their vehicles straight to them using the Tesla app, Knight Rider–style. (The feature is only supposed to be used in private parking lots or driveways, and then only when the driver has their eyes on the vehicle.) And though “Smart Summon” is still in beta, Musk said on Wednesday that it has now been used over 1 million times. That means the company has captured 1 million maneuvers' worth of data on weird parking lot happenings, and can use it to make its cars drive themselves around parking lots with more, well, confidence.

Some improvement is needed: Videos posted on social media show one Tesla that can’t tell the difference between grass and asphalt, and another that almost dinged another car, and yet another that actually did. (The US Department of Transportation told Reuters earlier this month that it is aware of the Smart Summon feature and is “gathering information” about its safety performance.)

Still, lots of Tesla drivers are plenty pleased with the way Smart Summon has navigated their precious vehicles through parking lots, and according to experts, that’s impressive. Turns out navigating a parking lot is one of the more difficult things Elon Musk—or anyone, really—could ask a self-driving vehicle to do. “In terms of solving parking lots where [the tech] does it perfectly, better than humans? Yeah, that’s hard,” says Matthew Johnson-Roberson, assistant professor of engineering at the University of Michigan and the cofounder of Refraction AI, a startup that is building autonomous delivery vehicles. No wonder those fender-bender videos are floating around the internet.

In fact, parking lots are one of the most human places you could put a car that doesn’t need a human to drive. Their rules are not always consistent, and drivers, moreover, don’t always follow them. They’re full of little people-to-people interactions: a wave to let the dad pushing the stroller know that you’re going to stop and let him cross; a nod to tell another driver that you’re waiting for the woman fiddling with her keys to finally pull out of her spot. These are very complicated things for computer systems to learn, even if they’re trained on tons and tons of real-life parking lot data.

If the parking lot is underground, the car might also lose a valuable source of data—its access to GPS. In that case, it’s up to the car to “localize,” or figure out where it is relative to other cars and people and shopping carts and walls, based on its other built-in sensors. (Today’s Teslas, for example, come with eight cameras and a forward-facing radar unit, all of which help the car “see.”)
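To get a feel for why losing GPS matters, consider the simplest fallback: dead reckoning, where the car integrates its own motion measurements from the last known position. The sketch below is purely illustrative—Tesla's actual system fuses camera and radar data in ways far more sophisticated than this—but it shows the core idea, and why small measurement errors accumulate with every step when there's no external fix to correct them.

```python
import math

def dead_reckon(pose, moves):
    """Estimate a new pose by integrating motion steps from a known start.

    pose:  (x, y, heading_in_radians) at the last known position
    moves: list of (distance_traveled, heading_change) measurements

    A crude stand-in for GPS-denied localization: with no external fix,
    position comes only from internal motion data, so any error in each
    (distance, heading) measurement compounds over the whole path.
    """
    x, y, theta = pose
    for dist, dtheta in moves:
        theta += dtheta                # apply the turn first...
        x += dist * math.cos(theta)    # ...then move along the new heading
        y += dist * math.sin(theta)
    return (x, y, theta)

# Drive 10 m straight, turn 90 degrees left, then drive 5 m.
final = dead_reckon((0.0, 0.0, 0.0), [(10.0, 0.0), (5.0, math.pi / 2)])
```

In practice a car can't rely on dead reckoning alone for long; it has to match what its cameras see against the world to correct the accumulating drift—which is exactly the "localizing yourself in tight spaces only with cameras" problem described below.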

Francesco Borrelli, a professor who studies automotive control systems at UC Berkeley, says this “localization” task is especially hard in a parking lot or structure. “Localizing yourself in tight spaces only with cameras might be hard,” he says. “That is a fact. If you had laser scanning, it would be easier.” Musk has argued that Tesla vehicles will one day be able to operate completely autonomously without expensive laser sensors, or lidar—so the cars come without it.