An autonomous Uber prototype killed a pedestrian in Arizona, and a recently released video shows its minder being caught completely unaware and the car itself doing seemingly nothing to avert the crash. But when you look at the company’s extensive track record of being irresponsible, exploitative and downright scummy, it’s easy to wonder why anyone is okay with Uber testing self-driving cars at all.

Uber is a company that’s spent much of its existence mired in scandal. A company that has fought tooth and nail in the courts to keep its drivers as “independent contractors” rather than employees with benefits, no matter how many hours they work; whose former CEO exemplified all of Silicon Valley’s worst alpha-bro-dipshit tendencies, and had to be ousted after, among other things, he was caught on video berating one of his own drivers over pricing; whose entire existence is predicated on loopholes that allow it to be a taxi cab service with none of the legal requirements to be one; whose executives actively sought to discredit a woman raped in one of its cabs.

And now Uber wants us—drivers, pedestrians, cyclists, the people who use the roads every day, to say nothing of local and federal governments and regulators—to trust the self-driving cars it puts on our roads? I’m not seeing a reason why anyone should give this company a pass to do so.

Let’s step back for a moment and consider why Uber is getting into the self-driving car game to begin with. Autonomy is often billed by tech companies, automakers, new urbanists and thinkfluencers alike as a key solution to traffic congestion and a way to reduce traffic injuries and fatalities.

It’s all very cuddly and feel-good, and it conjures images of us being whisked around our futuristic cities in quiet comfort, reading tablets and getting work done as we never have to worry about driving ever again. Uber’s new CEO Dara Khosrowshahi has even said as much in Medium posts about the societal value of shared mobility’s potential to reduce pollution, deaths and loss of productivity from traffic.

In reality, it’s not that at all. It’s just another revenue stream for these companies, and no company’s future hinges on it more than Uber’s. Autonomy lets Uber cut its overhead costs and get rid of the peskiest part of running a business—the people.

You may not have known this the last time you used the service, but Uber probably paid for most of that ride, not you. Uber is a money-hemorrhaging operation that subsidizes the true cost of rides in many markets. And Uber would be out potentially billions more if it had to pay its drivers as employees with benefits rather than as independent contractors. So the answer’s clear for Uber (and, it must be said, for competitor Lyft): robot cars. You don’t have to pay the robots! Everyone wins, especially Uber’s shareholders.

Sure, you could cynically say that Uber doesn’t have any obligation beyond those shareholders. It’s here to make money. But it’s not even good at developing autonomous cars, and we—and this includes our elected and government officials—don’t have any obligation to let Uber use our roads as playgrounds.

Even in the self-driving car space, Uber admits its technology lags years behind its competitors’, which is why it has sought a partnership with Google’s Waymo—even as it wraps up a bitter lawsuit in which one of its former engineers was accused of stealing autonomous tech from that very company.

Uber’s not a car company, and it’s not an engineering firm—it’s an app company, full of guys who figured out how to get rich by skirting taxi regulations and taking advantage of a speculative frenzy. What has Uber done at all to give anyone confidence that it deserves to be testing these cars on public roads?

It’s true that Google is doing the same thing, and that Google, as a massive tech conglomerate, has its own set of scummy problems, including its handling of private data. But at least Google can lean on the excuse of being the technological leader in the self-driving car field, having tested the tech for nearly a decade now. Google is far ahead of most.

And the actual car companies testing this tech, like Toyota and General Motors and Mercedes-Benz, have their own issues too. But they’re car companies. They build things. They’ve spent a century pushing the envelope of automotive technology, vehicle testing and safety, all while conforming to some of the most rigorous and confounding regulations on earth. They aren’t 30-year-old “disruptors” in jeans and blazers who insist that landing Series A funding means they know what’s best for all of humanity.

While Uber and the rest of the tech industry keep fighting to keep any sort of regulation from taking hold, letting the libertarian Silicon Valley “let the market decide” fever dream run as long as it can, it gets increasingly hard to fathom why we’re trusting them with something that has the potential for immense societal change instead of some large NASA-type organization, or at least a company willing to work under a strict regulatory framework. The tech free-for-all must end at some point, and perhaps this horrible crash is the beginning of that.

Since last night, I keep going back to that video of the Uber crash in Arizona. It’s hard to watch. And aside from any issues with Uber’s self-driving technology and why it didn’t “see” the woman in the road—which I’m sure will be revealed in the coming weeks and months—it is also hard to fathom why the car’s human minder clearly wasn’t paying attention when she should’ve been.

That’s the thing that advocates of autonomous cars will say about human drivers: that we’re bad at it. And we are, generally. We get drunk. We text. We don’t pay attention. We get angry. We get tired. We do stupid shit. In theory, a machine will be better at driving than we are, because it doesn’t do any of those things.

But humans have accountability, too. When we make mistakes or commit crimes behind the wheel, we face fines, penalties, points on our licenses, or even jail and prison sentences. We are made to pay for our mistakes because that is how our system works.

When has Uber ever faced any accountability, or shown that it’s willing to do so? Ever? Until it can demonstrate that, maybe it should get its robot cars off our roads before it kills someone else.