Image can be found at: https://www.theguardian.com/technology/2019/jan/20/shoshana-zuboff-age-of-surveillance-capitalism-google-facebook

Rapid technological advancements and the increasing interconnectedness of our digital economy are drastically altering the societal landscape. At the time of our nation’s founding, one of the chief concerns among the Founders was the threat to individual liberty posed by the government. The Founders included the 4th Amendment, the “right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated…but upon probable cause,” in the Bill of Rights. The Founders’ intent has long been debated, but notably missing from the amendment is any generalized right to privacy from the private sector.

In today’s digital economy, it is common to hear the term “surveillance capitalism.” Harvard Business School professor Shoshana Zuboff has written about this phenomenon. Professor Zuboff’s interpretation is that these new private sector business practices and techniques mine users’ information “to predict and shape their behavior.” As The New York Times notes, habit formation has progressed into a major field of research in neurology and psychology departments. Companies like Target have used “the exhaustive rendering of our conscious and unconscious patterns into data sets and algorithms” to revolutionize how they sell and market their products.

Professor Zuboff’s analysis suggests that consumers are little more than “blind sheep” being herded by large, powerful, billion-dollar Silicon Valley companies. As such, Professor Zuboff feels a moral imperative to expose such techniques and to call for reform, with which I tend to agree. Dr. Robert Epstein has floated a similar theory about Google’s capabilities, which may demonstrate possible “election interference.” I discussed Google’s capabilities in a separate article titled, “This is How Google Undermines Democracy.”

Undoubtedly, the modern economy relies on Big Data and artificial intelligence (AI). The days of the old manufacturing economy are largely gone, automated away; today, most companies rely on technology and AI to innovate. In turn, AI has allowed companies to efficiently automate once-common jobs, using algorithmic rules to replicate tasks. This innovation is best understood as part of creative destruction.

An algorithm is a set of rules for solving a problem. These rules operate on data, or information, to produce a specific outcome. For example, entering the words “Big Data” into a Google search returns roughly 6.9 billion results. “In a fraction of a second, the [Google] algorithm sorts through hundreds of billions of webpages to find the most relevant results for the user. The search algorithm works by using a ranking system, which is made up of ‘a whole series of algorithms,’” I wrote at Medium. This is merely one example, but we come across algorithms like these on a daily basis.

Professor Zuboff defines surveillance capitalism as “the unilateral claiming of private human experience as free raw material for translation into behavioral data. These data are then computed and packaged as prediction products and sold into behavioral futures markets.” YouTube search results best describe this phenomenon. Users will notice that after watching a video about their preferred interests, similar content will be suggested to them moving forward. YouTube generates revenue through advertisements, which run before and during the video. The more time a user spends on YouTube’s platform, the more revenue YouTube generates. What makes this market transfer unique is that users provide YouTube with only their screen time, rather than paying for use of the service.
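The feedback loop described above can be sketched in a few lines. This is a toy content-based recommender with hypothetical data, not YouTube’s real system: every watched video adds its tags to a user profile, and the suggestions are simply the unwatched videos sharing the most tags with that profile:

```python
# Toy sketch of the watch-history feedback loop: behavior (what you watched)
# becomes data (a tag profile), which shapes what you see next.
from collections import Counter

videos = {
    "guitar_lesson_1": {"music", "guitar", "tutorial"},
    "guitar_lesson_2": {"music", "guitar"},
    "cat_compilation": {"animals", "funny"},
    "drum_basics":     {"music", "tutorial"},
}

def suggest(watched):
    """Return unwatched videos ranked by tag overlap with the watch history."""
    # Build a profile: how often each tag appears in the user's history.
    profile = Counter(tag for v in watched for tag in videos[v])

    def overlap(v):
        return sum(profile[tag] for tag in videos[v])

    candidates = [v for v in videos if v not in watched]
    return sorted(candidates, key=overlap, reverse=True)

# Watching one guitar video pushes more music content to the top.
print(suggest(["guitar_lesson_1"]))
```

The commercial logic follows directly: the better the overlap scoring, the longer the session, and the more advertising the platform can sell against it.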

This interview inspired my research into the topic.

This now common market transfer is at the base of Professor Zuboff’s claim. Markets, through technological advancement, have now found many more ways to monetize consumer personal information and to predict future outcomes. For example, a smart watch is a technologically savvy timepiece that can measure your heart rate, track your fitness, respond to text messages and calls, and can show you directions. The information you input into this device may be shared with healthcare providers, marketing firms, and other third-party service providers. Have you ever searched for a product on a search engine and found that product advertised to you on a different platform? This is precisely the framework Professor Zuboff chastises.

The Internet of Things (IoT) refers to “everyday devices” that are capable of connecting to other devices “through the existing Internet infrastructure.” IoT devices and other technologically advanced products carry inherent risks. With cameras, microphones, and data collection in seemingly every new product, our world is susceptible to constant surveillance. Recently, the Federal Bureau of Investigation warned that “smart televisions” are able to monitor you. Similar stories have exposed Amazon’s Echo service listening in on consumers’ personal conversations. A New York Times article lays out how Target assigned unique codes, known as Guest ID numbers, to consumers to keep “tabs on everything they buy.” This score, in essence, became a sort of “social credit score” for Target. This risk was never contemplated at our nation’s founding, and today there are few laws governing the legality of such services. Government surveillance, by contrast, is governed by a much different set of laws.

Government surveillance became a very real phenomenon with the passage of the USA PATRIOT Act in 2001. Questions about the constitutionality of such surveillance tools linger, and revelations about the FBI’s abuse of the Foreign Intelligence Surveillance Act (FISA) process will only add to the skepticism. Additional questions surround Representative Adam Schiff’s subpoenaing of the phone records of journalist John Solomon, Congressman Devin Nunes, and two of the President’s lawyers (all classes of individuals long treated under “special rules”).

In essence, the FISA process allows the federal government to surveil, or in common parlance spy on, the communications of U.S. citizens suspected of acting as agents of a foreign power, upon demonstrating probable cause that a U.S. law has been violated. A whole host of programs are authorized under FISA, but the most relevant here is PRISM. PRISM utilizes the communications framework of the Internet infrastructure to search for specific key terms. Legislators intended to place sufficient safeguards, via regulations and standard operating procedures, to curb the possibility of non-implicated individuals being swept up in the Internet “dragnet.” As such, the Intelligence Community is tasked with putting minimization procedures in place.

The recent Justice Department Inspector General’s report has validated privacy advocates’ belief that “surveillance practices under the FISA law…lacked adequate oversight and transparency.” The New York Times lays out the implications of this report in an article titled, “We Just Got a Rare Look at National Security Surveillance. It Was Ugly.” Unfortunately, few parameters, if any, guide private sector surveillance.

A host of consumer privacy and advocacy groups (the ACLU, the Electronic Privacy Information Center, and the Electronic Frontier Foundation) exist to confront privacy abuses and unfairness. For example, the Electronic Privacy Information Center (EPIC) works to focus “public attention on emerging privacy and civil liberties issues and to protect privacy, freedom of expression, and democratic values in the information age.” These organizations often litigate against both the public and private sectors over suspected abuses; they are also active users of the Freedom of Information Act (FOIA) process, which provides for government transparency when appropriate.

Today, these consumer privacy groups and the Federal Trade Commission remain without a comprehensive framework for addressing surveillance by the private sector. While the U.S. Intelligence Community is limited by the parameters of the Foreign Intelligence Surveillance Act (FISA) and other relevant laws, the broader private sector is not as well constrained. Facial recognition technology is a particularly interesting example. The contours of our faces are highly personal and largely unique. If a company in possession of facial recognition data is breached, what can be done with that information? This is an extremely delicate but critical subject. Some municipalities are banning the technology outright, while others have contemplated following suit. This raises the question of whether facial recognition technology offers any societal benefits. Snapchat uses such technology to apply filters to a user’s face. Does this constitute the sort of “malicious” usage we intend to ban? Likely not.

China is an example of a digital authoritarian state.

If the Hong Kong protesters have taught us anything, it is to be wary of surveillance conducted in the name of the “public interest.” The Wall Street Journal writes, “the cameras feed government databases in real time and, with the assistance of sophisticated facial-recognition software, Beijing eventually expects to be able to identify everyone, everywhere within three seconds of anything happening.” These proposals are framed as being in the interest of deterring crime.

Public surveillance in the United States differs. The public broadly disapproves of invasions of privacy. Recently, the U.S. Supreme Court granted certiorari in Kansas v. Glover, a case in which EPIC has filed an amicus brief. EPIC warns that the decision has drastic downstream implications, notably “when combined with automated license plate readers.” For purposes of this article, the crux of the case is whether the officer had “reasonable suspicion” that the driver did not possess a valid driver’s license. The officer did not observe any traffic violations; he initiated the traffic stop on the assumption that the vehicle’s registered owner was driving. Privacy advocates fear that automated license plate readers may lead law enforcement to stop drivers without observing any traffic violation. In such cases, law enforcement surveillance tools may deter crime at the expense of liberty.

Similar problems exist with the concept of a “smart city.” A smart city, as described in Forbes, brings “together infrastructure and technology to improve the quality of life of citizens and enhance their interactions with the urban environment.” A smart city relies on the speed of its wireless network, which is why many privacy advocates are wary of 5G networks. Such proposals aim to improve public transportation, effectively monitor “real-time energy consumption data,” and “provide accurate traffic reports.”

Google has proposed building a real “smart city” in Toronto. Former Google executive Eric Schmidt spoke fondly of the idea, promoting the benefits of putting Google in charge of a city. Over time, problems have mounted as consultants and collaborators resigned. One collaborator referred to the project, known as Quayside, as a “Smart City of Surveillance” rather than a “Smart City of Privacy.” Her comments came in response to revelations that Quayside would allow third parties to “access identifiable information gathered at Quayside.”

Quayside, although located in Toronto, exemplifies the problems with “surveillance capitalism.” Unlike the European Union, the United States does not have a comprehensive legal framework governing the collection, disclosure, and dissemination of information, apart from a patchwork of sector-specific laws and California’s state privacy law. A surprising ally in the privacy debate has been the Trump Administration’s Justice Department. On December 10, 2019, The Washington Post reported that Attorney General William Barr is exploring “new legal tools to probe companies for their privacy abuses and the way they police content online.” It appears the Justice Department intends to scrutinize the major Silicon Valley companies; however, this does not address the root cause of the problem: companies’ information collection and sharing practices.

A lawful framework for governing data processing and control will be paramount as technology becomes ever more ingrained in society via the Internet of Things. One such proposal is to treat social media companies and information service providers as legally recognized fiduciaries, which “have a duty to exercise loyalty and care.” This is Yale Law professor Jack Balkin’s “information fiduciary” theory. The intent is to regulate specific companies as legally recognized fiduciaries that act in the interest of the consumer while retaining an incentive to produce a profit. This incentive structure leads us in the appropriate direction, but it would be best complemented by the fair information principles of consent, collection limitation, accuracy, and transparency. Privacy advocates fear that, without sufficient parameters in place, the American private sector could develop a private version of China’s social credit scoring (SCS) system.

With today’s growing surveillance apparatus in the private sector, and with examples of authoritarian abuses of such apparatus abroad, it is no wonder that many are worried about possible misuses of these tools. After all, the Nazi regime’s abuses of power gave rise to modern-day Germany’s intense focus on privacy. This threat is called “digital authoritarianism,” and it is growing as a consequence of budding populist movements worldwide.

In the United States, we are fortunate to have the 4th Amendment’s protection from government intrusion into our lives; however, that protection is only guaranteed so long as it is fought for. Privacy experts and civil libertarians must focus their efforts not only on government surveillance but also on the surveillance efforts of private companies. Service providers, state legislators, Congress, and experts must determine how to weigh the perceived benefits of these tools against their intrusiveness. A great starting place is a serious debate about information collection practices and user consent. After Edward Snowden’s revelations about the federal government’s practices, we must renew this debate to address the inequities of current practices.

Mitchell Nemeth holds a Master in the Study of Law from the University of Georgia School of Law. He also holds a BBA in Finance from the University of Georgia. His work has been featured at Foundation for Economic Education, Merion West, and The Red & Black. His favorite writers are Professor Jonathan Haidt, Matt Taibbi, and Thomas Sowell.