Every day for 10 months, a Knightscope K5 patrolled the parking garage across the street from city hall in Hayward, California. An autonomous security robot, it rolled around by itself, taking video and reading license plates. Locals had complained the garage was dangerous, but K5 seemed to be doing a good job restoring safety. Until the night of August 3, when a stranger came up to K5, knocked it down, and kicked it repeatedly, inflicting serious damage.

Robots engender human sympathy. Seen in the wild, they appear to have agency, feelings, and desires. R2-D2’s spunk, C-3PO’s intelligence, WALL-E’s charm. When delivery bots get stuck on the sidewalk, good Samaritans help them get unstuck. In light of the attack on K5, then, you may be thinking: Poor guy.

The 5-foot-tall bot, whose shape is often described in phallic terms, indeed projects a certain charisma. Wobbling over the uneven pavement at 3 mph, it looks more like a bumbling neighbor than a robocop. When one K5 stationed in Washington, DC, rolled itself into a fountain in summer 2017, the internet worried it was suicidal. A writer at The Verge affirmed its choice: “I wouldn’t want your job either, K5. Live your truth.”

Sure, sometimes people do get in the way. They’re curious. What’s this thing for, anyway? They’ll follow the robots to see what they do or tap their buttons to see what happens. “People want to explore them, and they don’t know how to do that,” says Bilge Mutlu, who runs the University of Wisconsin’s Human-Computer Interaction Lab. Rarely do the interventions cause damage.

The incident on August 3, though, was not a case of poking around. The assailant’s identity remains unknown, but video captured just before K5 crashed to the concrete shows a blurry image of a young person with dark hair running past the camera. The attack was likely premeditated.

K5’s siblings, it turns out, don’t fare much better. In 2017 a drunk man attacked a K5 in a Mountain View parking lot. A few months later a group of angry protesters in San Francisco covered another one in a tarp, pushed it to the ground, and smeared barbecue sauce on it. Stacey Stephens, Knightscope’s executive vice president, wouldn’t say how many have been seriously damaged. “I don’t want to challenge people,” he says, afraid any number will inspire—perhaps compel—more miscreants to seek out K5s. (Stephens did specify that Knightscope prosecutes “to the fullest extent of the law,” often pursuing felony charges for damaged K5s.)

Hard numbers or not, the assaults will continue—that’s not the question. A brief history of humans and robots bears this out. Children bully them. Philadelphians behead them. In one incident, office workers bullied an HR chatbot so terribly that management wondered if the workers should be fired. Humans are mean to robots. The question is: Do we care? “We wouldn’t be having this conversation if people didn’t clearly view or treat robots differently than other devices,” says Kate Darling, a robot ethicist at MIT. “If people were going around smashing security cameras, you wouldn’t have called me.”

Quite right. The vandalism of security cameras rarely piques my curiosity. But as an otherwise law-abiding citizen—I’m the kind of person who carefully refolds and reracks clothes after trying them on—all I could think as I watched and rewatched the security video from August 3 was: Way to go, dude.

Because K5 is not a friendly robot, even if the cutesy blue lights are meant to telegraph that it is. It’s not there to comfort senior citizens or teach autistic children. It exists to collect data—data about people’s daily habits and routines. While Knightscope owns the robots and leases them to clients, the clients own the data K5 collects. They can store it as long as they want and analyze it however they want. K5 is an unregulated security camera on wheels, a 21st-century panopticon.