AWS’ most controversial service is Rekognition, a platform that uses machine learning to analyze images and video footage. Among other features, Rekognition offers the ability to match faces found in video recordings to a collection of faces in a database, as well as facial analysis technology that can pick out facial features and expressions. A 2018 report from the American Civil Liberties Union (ACLU) highlighted how Amazon has been marketing its face recognition capabilities to law enforcement agencies, and has partnerships under way with police in Orlando, Florida, and Washington County, Oregon.

At a developer conference in Seoul, according to NPR, Amazon’s Ranju Das explained that police in Orlando have “cameras all over the city” which stream footage for Amazon to analyze in real time. It can then compare the faces in the surveillance video to a collection of mugshots in a database to reconstruct the whereabouts of a “person of interest.” A CNET report in March detailed how the Washington County Sheriff’s Office used Rekognition to identify and apprehend a shoplifting suspect.

One well-documented problem with face recognition software is inaccuracy, particularly when it comes to identifying people of color. In a 2018 test by the ACLU, Rekognition incorrectly matched 28 members of Congress to mugshots of crime suspects, and members of color were overrepresented among the false matches. Amazon argues the study is misleading, because the ACLU used the software's default confidence threshold of 80 percent rather than the 99 percent setting AWS recommends for law enforcement, and because it ran an outdated version of Rekognition.
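To make the dispute concrete, consider what a confidence threshold actually does. The sketch below is purely schematic, with made-up similarity scores rather than Amazon's real API or data: a face search returns candidate matches with confidence scores, and the threshold you pick determines how many of those candidates count as "matches." A lower cutoff sweeps in more borderline faces, which is exactly where false matches come from.

```python
# Schematic illustration of threshold filtering (hypothetical scores,
# not Rekognition's actual API or output).

candidates = [
    ("mugshot_017", 99.2),  # high-confidence candidate
    ("mugshot_342", 91.5),
    ("mugshot_108", 84.3),
    ("mugshot_551", 80.1),  # borderline: barely clears an 80% cutoff
]

def matches(candidates, threshold):
    """Return the IDs of candidates whose confidence meets the threshold."""
    return [face_id for face_id, score in candidates if score >= threshold]

# At the ~80% default the ACLU used, all four faces count as matches:
print(matches(candidates, 80.0))
# At the 99% level Amazon recommends for law enforcement, only one does:
print(matches(candidates, 99.0))
```

The point of contention reduces to this one parameter: the same underlying scores yield four "matches" at one setting and a single match at the other.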

The idea of building A.I. tools and face recognition for the likes of Palantir and ICE doesn’t sit well with some of Amazon’s own employees and shareholders — especially at a time when immigrants are being separated from their families and children coming across the border are being denied basic human rights. An anonymous Amazon employee wrote on Medium last fall that some 450 workers had signed a letter to CEO Jeff Bezos calling on the company to drop Palantir and stop supplying Rekognition to police departments. “Companies like ours should not be in the business of facilitating authoritarian surveillance,” the employee wrote. In an accompanying interview, the employee said the letter had been met by Amazon’s leaders with “radio silence.”

That’s not surprising, given that AWS vice president Teresa Carlson had touted the company’s “unwavering commitment” to police and military uses of face recognition at a security conference in July 2018. Gizmodo reported in November 2018 that AWS CEO Andy Jassy reaffirmed the company’s marketing of Rekognition to law enforcement in an internal meeting. He portrayed the technology as largely positive, highlighting the work of an organization that is using the software to help find and rescue victims of human trafficking. Jassy added that if Amazon discovered customers violating its terms of service, or people’s constitutional rights, it would stop working with them. But when Carlson was asked previously whether Amazon had drawn any red lines or guidelines as to the type of defense work it would do, her answer was clear: “We have not drawn any lines there.”

In a February 2019 blog post, Amazon suggested that it would be open to some form of national legislation on face recognition to promote transparency and respect for civil rights, among other goals. That followed Microsoft’s far more direct calls for regulation of the technology in 2018. But where Microsoft made a stark moral argument for reining it in, conjuring images of a 1984-esque dystopia, Amazon has consistently defended the technology’s use. “In the two-plus years we’ve been offering Amazon Rekognition, we have not received a single report of misuse by law enforcement,” the company said in the blog post.

Surveillance as a service

Patent filings can’t tell you what a company is actually going to build. Often, a firm’s lawyers are just trying to cover as many bases as possible, in case some facet of their intellectual property might one day become relevant to business. But patents can still reveal what a given company thinks its competitive environment might look like in the years to come. And if Amazon’s patent filings tell us anything, it’s that the company sees the expansion of surveillance capacities as a major part of its future.

Some of these revolve around Alexa and voice recognition. One 2017 application describes a “voice sniffer” algorithm that could pick out keywords for targeted advertising from a conversation between friends. Another proposes to infer people’s health or emotional state from coughs, sniffles, or their tone of voice — again, with targeted advertising as a potential use case.

Amazon also has some big ideas for the future of Ring. Earlier this month, Quartz reported that Amazon received trademarks that could cover devices such as cameras mounted on cars or baby monitors, as well as “home and business surveillance systems.” A separate patent filed back in 2015 makes clear that Amazon has been thinking along these lines since long before it bought Ring: It describes a system by which package delivery drones could be hired by customers to fly over a specified target and shoot spy footage. Amazon referred to the idea as “surveillance as a service.”

It’s doubtful even Amazon would ever roll out a product with a name or function quite that blatantly dystopian. But there’s one other set of patents, unsealed in November 2018, that sounds like something the company might actually be working on. They describe how a network of cameras sharing data could be used in tandem with software to automatically identify people whose faces appear in a database of suspicious persons. As CNN first reported, it sounds a lot like a roadmap to incorporating face recognition with Ring and the Neighbors app. An ACLU attorney described it as a “disturbing vision of the future” in which people can’t even walk down a street without being tracked by their neighbors.

In a statement to OneZero, Ring reiterated that a patent application doesn’t necessarily imply a product in development. “We are always innovating on behalf of neighbors to make our neighborhoods better, safer places to live, and this patent is one of many ideas to enhance the services we offer,” the company said.

Smart speakers, A.I. voice assistants, doorbell cameras, and face recognition in public spaces all have their upsides. They offer convenience, peace of mind, and the potential to solve crimes that might otherwise go unsolved. But put them all together — with one very large company controlling all the data and partnering closely with police and intelligence agencies — and you have the potential for a surveillance apparatus on a scale the world has never seen.

Assuming no one’s going to stop Amazon from building this network, the question becomes whether we can trust the company to be responsible, thoughtful, and careful in designing its products and guarding the copious amounts of data they collect — and whether we can trust all the entities that use Amazon’s technology to do so responsibly.

The cloud over us all

In contrast to Facebook, which has spent recent years apologizing for privacy lapses and pledging to fix itself — however ineffectively — or Apple, which has made privacy an explicit selling point, Amazon has so far shown little concern for the ethical implications of its sprawling surveillance capabilities. While Microsoft is declining to sell face recognition technology to police, and Google isn’t selling its face recognition technology at all, Amazon is hawking it to police departments around the country.

In case there remained any doubt about Amazon’s stance — or lack of one — on the societal responsibilities involved in developing its surveillance technologies, CTO Werner Vogels put it to rest at a company event in May. In an interview with the BBC, Vogels explained that it isn’t Amazon’s role to ensure that its face recognition systems are used responsibly. “That’s not my decision to make,” he said. “This technology is being used for good in many places. It’s in society’s direction to actually decide which technology is applicable under which conditions.”

That tone is set at the very top. At a tech conference in San Francisco in October 2018, Bezos framed the company’s military contracts as part of a patriotic duty. And he compared his company’s development of high-tech surveillance tools to the invention of books, which he said have been used for both good and evil. “The last thing we’d ever want to do is stop the progress of new technologies,” Bezos said, according to a CNN report. One might hope that the last thing Amazon would want to do is develop new technologies that cause harm, but apparently that’s of lesser concern to its executives than standing in the way of innovation.

Bezos did sound one brief note of concern, only to dismiss it in the same breath: “I worry that some of these technologies will be very useful for autocratic regimes to enforce their role… But that’s not new, that’s always been the case. And we will figure it out.”

The idea that innovation’s march is inevitable, and that it’s not up to companies to guide how new technologies they create are used, has been embodied in the strategies of tech companies for decades. It’s implicit in the ethos of “move fast and break things,” in the belief that it’s better for innovators to ask for forgiveness than for permission. But as the social costs of these innovations have grown heavier, once-transgressive platforms such as Facebook, Google, and Twitter — urged on by their own employees — have come to accept the view that they are indeed responsible, at least to some extent, for their products’ impact on society, and that they have some power to shape it proactively.

Amazon, it seems, has not joined them in this view.

Gilliard, the researcher who has been studying how Ring and Neighbors could affect minorities, says he suspects Amazon will not be able to maintain such a blasé attitude toward the impacts of its products — intended or unintended — much longer. “Amazon has not had their Cambridge Analytica moment yet,” he told me.

What might that moment look like for Amazon? Gilliard hesitated for a moment, and then sketched out a hypothetical scenario. He noted that Amazon’s Key in-home delivery service allows delivery people to drop off packages inside customers’ garages, using a smart lock. (It had initially tried dropping them off inside people’s front doors.) Gilliard imagines an Amazon delivery person of color entering a customer’s garage, and the customer getting an alert of a suspicious person from one of Amazon’s own products, such as Ring or the Neighbors app. The situation could turn ugly. “I really, really hope this doesn’t happen: I think someone’s going to get hurt,” Gilliard said. “An Amazon employee is going to get hurt, arrested, assaulted.”

In the absence of a massive, headline-dominating fiasco, or a sudden crisis of conscience on the part of Amazon employees and executives, the best defense against the company’s surveillance overreach might be regulation. In May, San Francisco became the first major U.S. city to ban the use of face recognition by police and other city agencies. Defenders of the technology found the action premature and drastic. But Amazon’s own anonymous employee, whose identity was verified by Medium, warned that if we don’t act soon, “the harm will be difficult to undo.” It’s equally hard to imagine police departments giving up access to the Neighbors app once they’ve come to rely on it, unless they’re compelled to do so. And while one might hope that Amazon would stop short of putting face recognition in doorbells — or on cars, or on drones — its leaders have given us no indication that they see a problem with it.

Amazon has overcome societal trust barriers before. It launched as an online bookstore in 1995, just one year after the very first online purchase in Internet history. Amazon, along with eBay, helped to persuade the public that using a credit card online wasn’t as crazy as the skeptics thought.

Even as the company has expanded in seemingly incongruous directions, it has maintained a rigorous focus on streamlining processes that used to be cumbersome, from rapid and cheap delivery to controlling smart gadgets by voice. You can see the same impulse at work in the way Rekognition automates the onerous task of comparing a single criminal suspect’s face to hundreds of thousands of mugshots in a database, or how Ring makes it seamless to alert neighbors and the police when something suspicious happens on your street. But it’s worth asking, before we’ve made surveillance as easy and ubiquitous as a one-click Amazon purchase, whether society might be better off keeping certain tasks a bit cumbersome after all.

Update: An earlier version of this story misidentified the market category in which Amazon, via Ring, is the dominant player. It is the dominant player in the market for doorbell cameras. An earlier version misidentified a free feature of Ring as a paid feature. Users can watch live footage for free, or store and watch recordings for a fee. An earlier version did not make clear the precise nature of police access to the Ring system. Police have access to Ring footage that users share via the Neighbors app.