Here’s how Waymo describes “fleet response.”

“If one of our vehicles detects that a road is blocked up ahead, it may come to a stop and request confirmation from our fleet response team before plotting an alternate route,” the company wrote in a Medium post this week. “Once the blockage is verified, our vehicle decides the best way to proceed, and our specialists can then share this intel with the rest of the fleet so that our vehicles can route trips to avoid this area.”

These humans are the answer to some of the long tail of what-ifs that come up when people think about robot cars. How could the cars possibly know what to do in every situation? What if a stoplight is out? What if there’s construction? In those cases, the car can simply dial a human, have them verify the road conditions, and then replan its route. I imagine the specialists sit there like security guards, in front of a bank of screens, waiting for pings from the robots. Or perhaps the requests are served up serially: What about this situation? How about this intersection? What’s going on here?

“Rider support” is the other team Waymo is building. They sound as if they’ll serve as a disembodied “driver”—not in the sense of operating the cars, which they can’t, but in the softer tasks of driving. In a normal ride-hailing service, humans not only pilot cars, but also maintain relations with the ride-hailing customers and monitor the interiors of their vehicles. That’s not within the scope of the Waymo robot’s operation. How could it know if someone throws up in the backseat? So, Waymo riders can push a button to talk to someone, who can see them with cameras installed in the cars.

What’s important about all this is that driverless cars may eliminate the “driver,” but, as with self-driving trucks, they can’t squeeze all human labor out of the system. As Waymo’s service scales up in Phoenix and then around the country, the human infrastructure will need to keep pace.

It’s possible that robocar tending will follow the trajectory that we’ve seen with content moderation at scale on Facebook, Twitter, YouTube (a Waymo corporate sibling), and other social networks. While the companies’ algorithms can distribute information automatically and with greater engagement than any human editor, they have little wisdom about the nature of what they’re distributing. Even as the AI tools that identify violent or otherwise objectionable content improve, more and more humans are still needed to make judgment calls.

And as with content moderation, it would not be a total shock if Waymo and all the rest of the self-driving-car companies underestimate how important human labor will continue to be to their systems.