Image can be found at: https://www.law.com/legaltechnews/2019/08/06/risky-business-should-governments-be-reviewing-tech-companies-algorithms/?slreturn=20190719210403

How will Artificial Intelligence transform our society, and at what cost? We are in the early stages of a Fourth Industrial Revolution, as presidential candidate Andrew Yang has acknowledged. The inquiry has surfaced both in nostalgic language about the old manufacturing economy and in questions about the fairness of complex algorithms.

A Push for Algorithmic Transparency

An algorithm is a set of rules for solving a problem in a finite number of steps. What makes a computer algorithm potentially controversial is the opacity with which it operates. While companies like Google discuss their processes at length online, the inner workings of these programs remain obscure. This lack of transparency has allowed accusations of bias and censorship to persist. The Electronic Privacy Information Center (EPIC) is a notable advocate for algorithmic transparency. On July 16, EPIC sent a letter to Senator Ted Cruz, the chairman of a Senate committee, decrying the various dangers of Google’s search engine. EPIC claims that “content may be labelled and categorized according to a rating system designed by governments to enable censorship and block access to political opposition” and “the majority of users are unaware of how algorithmic filtering restricts their access to information and do not have an option to disable filters.”
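The definition above is worth making concrete. Euclid’s algorithm for the greatest common divisor is a classic example of a set of rules that solves a problem in a finite number of steps; the sketch below is purely illustrative and has nothing to do with any company’s proprietary systems.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite sequence of well-defined steps
    that computes the greatest common divisor of two integers."""
    while b != 0:
        # Each step replaces (a, b) with (b, a mod b); the remainder
        # strictly shrinks, so the loop is guaranteed to terminate.
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

Unlike this toy example, the ranking and filtering algorithms at issue in the article involve thousands of interacting signals, which is precisely why their behavior is so hard to inspect from the outside.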

The ability of online service providers to filter content is well known. This lack of transparency has consequences, whether the filtering is intentional or not. Recently, presidential candidate Tulsi Gabbard decided to sue Google over allegations of “election interference.” Google acknowledged suspending Gabbard’s advertising account following the first Democratic primary debate; however, Google claimed that its automated systems flag unusual activity on all advertiser accounts “in order to prevent fraud and protect our customers.” Google quickly reinstated her account. During a Senate Judiciary Committee hearing, Google’s Karan Bhatia, vice president of global government affairs and public policy, reassured Senator Ted Cruz that “we work hard to fix our mistakes. But these mistakes have affected both parties and are not a product of bias.”

A key detail missing from this discussion is that the business model of companies like Google relies on advertisers. “Advertisers and companies are risk-averse and seek to avoid controversy,” as I have previously written. Google’s open platforms have collided with its need to generate revenue. Some users seek an unfiltered platform, but Google must maintain some level of moderation so that advertisers continue to use it.

Challenging Big Tech

EPIC’s arguments squarely challenge the alleged bias of these platforms. Progressives and conservatives alike are also challenging Big Tech’s algorithms on antitrust grounds. While monopolies have historically been built through vertical integration, some of these companies may be using their algorithms to artificially promote their own products. EPIC alleges that “after Google acquired YouTube, EPIC’s search rankings fell. Google had substituted its own subjective, ‘relevance’ ranking in place of objective search criteria.” The result, EPIC argues, was that “Google’s subjective algorithm preferenced Google’s video content on YouTube.” Even the Supreme Court has allowed iPhone users to sue Apple over allegations of antitrust violations. The plaintiffs claim that by requiring “iPhone and iPad users to download apps only from its portal while taking a cut of some sales made through the store,” Apple has acted as a monopoly.

On July 23, the U.S. Department of Justice opened “a sweeping antitrust investigation of major technology companies.” ABC News notes that “current interpretations of U.S. antitrust law don’t obviously apply to companies offering inexpensive goods or free online services.” As such, the Department of Justice may find it difficult to prove that these companies are monopolistic; the European Union, on the other hand, fined Google a record $5 billion for breaking antitrust laws. These federal investigations echo the trust-busting of the late nineteenth and early twentieth centuries.

Big Tech is rightly worried about the government intervening in its operations. It would be infeasible to demand that each company be fully transparent about its algorithms, since this is akin to asking any other company to release its trade secrets. But companies like Google can be clearer about the intentions behind their programs and algorithms. For example, Google’s reCAPTCHA has been using human clicks to “train Google’s AI to be even smarter,” which the company claims applies “the human bandwidth to benefit people everywhere.” This is an ingenious way of training algorithms, but most users had no idea it was happening.

Algorithms are powerful; however, they are largely subject to the biases of their programmers. Programmers can exhibit either implicit or explicit bias, which is why more transparency would benefit the public, especially given the overwhelming volume of complaints.

With public trust in our institutions at a low point, it would benefit companies like Google and Facebook to bolster that confidence. At The Guardian, Dylan Curran writes about the extent to which Facebook and Google store your information, “without you even realising it.” Using complex algorithms, these companies piece together data points from your search history and application usage, and even build an advertising profile of you. In The New York Times, Jennifer Valentino-DeVries writes about how companies use and sell data to advertisers and “even hedge funds seeking insights into consumer behavior.”
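A minimal sketch can suggest how disparate data points might be merged into an advertising profile. Everything here — the data streams, the keyword-to-interest mapping, the weighting rules — is invented for illustration; real profiling systems are vastly more complex and proprietary.

```python
from collections import Counter

# Illustrative data streams a platform might hold about one user.
search_history = ["running shoes", "marathon training plan", "flight to denver"]
app_usage = {"fitness_tracker": 120, "travel_booking": 45}  # minutes per week

# Invented mapping from keywords to interest categories.
INTEREST_KEYWORDS = {"running": "fitness", "marathon": "fitness", "flight": "travel"}

def build_ad_profile(searches, apps):
    """Combine separate data sources into one ranked interest profile."""
    interests = Counter()
    for query in searches:
        for word in query.split():
            if word in INTEREST_KEYWORDS:
                interests[INTEREST_KEYWORDS[word]] += 1
    for app, minutes in apps.items():
        if "fitness" in app:
            interests["fitness"] += minutes // 60
        if "travel" in app:
            interests["travel"] += minutes // 60
    return [interest for interest, _ in interests.most_common()]

print(build_ad_profile(search_history, app_usage))  # ['fitness', 'travel']
```

Even this toy version shows why the practice is opaque to users: each individual data point looks innocuous, but the aggregation step, hidden from view, is where the profile emerges.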

At The Atlantic, Kaveh Waddell writes, “automated systems make decisions based on vast troves of personal information, often without revealing the kinds of information included in the calculus.” It will become even harder to determine which information plays a key role in outcomes as these algorithms grow more complex. Privacy advocates are also rightly concerned that a censored search engine, like Google’s Project Dragonfly in China, could emerge in the United States. Given the vast wealth of data held by these corporations, it is in our interest to ensure adequate protection and transparency. Should they act inadequately, governments seem ready to intervene.

Congressional Debate

Congress is debating which legal regime best applies to these online service providers. Since Big Data is the fuel that powers these Artificial Intelligence algorithms, these companies must continue to collect data both actively and passively. The question remains: where do the interests of consumers come into play?

This is precisely why law professors Jonathan Zittrain (Harvard Law School) and Jack Balkin (Yale Law School) have proposed the “information fiduciary” theory, which would require some of these companies to act as legally recognized fiduciaries. Other regimes include restricting the sale or transfer of data, removing Section 230 immunity, giving users the right to access data collected on them, banning addictive features, or forcing companies to submit their algorithms to external auditors.

Additional regulatory burdens act as indirect taxes on consumers. Moving forward, these companies ought to build more transparency into their algorithms’ decision-making, even though they will likely remain in the crosshairs. True or not, these allegations of censorship shine a light on the problems associated with Artificial Intelligence. To weather the populist backlash, Big Tech must do more to regain the public’s trust. It can start by acknowledging past mistakes and doing more to prevent future breaches of trust.