If We're Not Careful, Self-Driving Cars Will Be The Cornerstone Of The DRM'd, Surveillance Dystopias Of Tomorrow

from the who-controls-the-code dept

We've talked a lot about the ethical and programming problems currently facing those designing self-driving cars. Some are less complicated, such as how to program cars to bend the rules slightly and be more human-like. Others get more complex, including whether or not cars should be programmed to kill the occupant if it means saving a school bus full of children (aka the trolley problem). And once automated cars are commonplace, should law enforcement have access to the car's code to automatically pull a driver over? There's an ocean of questions we're not really ready to answer.

But as we accelerate down the evolutionary highway of self-driving technology, the biggest question of all becomes: who gets to control this code? Will the automotive update process be transparent? Will the driver retain the ability to modify their car's code? Will automakers adapt and stop implementing the kind of paper-mache-level security that has resulted in an endless parade of stories about hacked automobiles it takes automakers five years to patch?

Trying to force the issue before there's a hacker-induced automotive mass fatality, Ford, GM and Toyota were hit by a class action lawsuit earlier this year claiming the car companies were failing to adequately disclose the problems caused by abysmal auto security:

"Among other things, the lawsuit alleges Toyota, Ford and GM concealed or suppressed material facts concerning the safety, quality and functionality of vehicles equipped with these systems. It charges the companies with fraud, false advertising and violation of consumer protection statutes. Stanley continued, "We shouldn't need to wait for a hacker or terrorist to prove exactly how dangerous this is before requiring car makers to fix the defect. Just as Honda has been forced to recall cars to repair potentially deadly airbags, Toyota, Ford and GM should be required to recall cars with these dangerous electronic systems.""

This month a court ruled that yes, we will probably have to wait for someone to die before automakers are held liable for lagging automotive security. The case was ultimately dismissed (pdf), the court ruling that the plaintiffs have yet to prove sufficiently concrete harms, and that potential damage (to the driver and to others) remains speculative. At the pace self-driving and smart car technology is advancing, one gets the sneaking suspicion we won't have long to wait before those harms become notably more concrete.

But however complicated these legal, ethical, and technical questions are, they become immeasurably more complicated once you realize that smart cars will ultimately form the backbone of the smart cities of tomorrow, working in concert with city infrastructure to build a living urban organism designed to be as efficient as mathematically possible. As Cory Doctorow noted last week, this makes ensuring code transparency and consumer power more important than ever:

"The major attraction of autonomous vehicles for city planners is the possibility that they'll reduce the number of cars on the road, by changing the norm from private ownership to a kind of driverless Uber. Uber can even be seen as a dry-run for autonomous, ever-circling, point-to-point fleet vehicles in which humans stand in for the robots to come – just as globalism and competition paved the way for exploitative overseas labour arrangements that in turn led to greater automation and the elimination of workers from many industrial processes.

If Uber is a morally ambiguous proposition now that it's in the business of exploiting its workforce, that ambiguity will not vanish when the workers go. Your relationship to the car you ride in, but do not own, makes all the problems mentioned even harder. You won't have the right to change (or even monitor, or certify) the software in an Autonom-uber. It will be designed to let third parties (the fleet's owner) override it. It may have a user override (Tube trains have passenger-operated emergency brakes), possibly mandated by the insurer, but you can just as easily see how an insurer would prohibit such a thing altogether."

You'd hate to wander too casually into the hyperbole territory traditionally reserved for hysterical Luddites, but there's a laundry list of reasons to be worried about the trajectory of the lowly automobile. If we don't demand code transparency and consumer empowerment in automotive standards now, your car may find itself the cornerstone of a future in which DRM, encryption backdoors, lax security standards, eroded consumer legal rights, insurance companies and government power combine to create a supernova of dystopian dysfunction.
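To make the "transparent update process" question concrete, here is a minimal illustrative sketch (not from the article, and not any automaker's actual system): if manufacturers published a signed digest for every firmware image, owners and independent auditors could verify for themselves exactly what code a car is about to run, rather than trusting an opaque over-the-air pipeline. The function name and data here are hypothetical.

```python
# Illustrative sketch: a user-auditable update check. The manufacturer
# publishes the SHA-256 digest of each firmware release; anyone can
# recompute the digest locally and refuse an image that doesn't match.
import hashlib
import hmac

def verify_update(firmware: bytes, published_sha256: str) -> bool:
    """Return True only if the image matches the published digest."""
    local = hashlib.sha256(firmware).hexdigest()
    # compare_digest avoids leaking where the mismatch occurs
    return hmac.compare_digest(local, published_sha256)

# Hypothetical example data:
image = b"example firmware image v1.2"
digest = hashlib.sha256(image).hexdigest()  # what the vendor would publish

assert verify_update(image, digest)                   # untampered image passes
assert not verify_update(image + b"extra", digest)    # modified image fails
```

A real deployment would use asymmetric signatures (so only the vendor can sign, but anyone can verify) rather than a bare hash, but the transparency principle is the same: verification must be possible outside the vendor's own black box.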

Filed Under: autonomous vehicles, cars, drm, privacy, self-driving cars, surveillance