How competitive is our market economy? Not as much as it ought to be. And the growth of big data threatens to make things even worse. Antitrust regulators already struggle to keep markets competitive. How will they fare in a world where intelligent pricing algorithms subtly collude with one another?

Before we get to how pricing algorithms might collude, it’s worth reviewing the state of antitrust regulation in the U.S. The market failures and shortcomings of U.S. antitrust policy over the past 30 years are becoming increasingly apparent. In April 2016 the White House issued an executive order and report on the state of competition in the U.S. The report identified several disturbing signs of competition’s decline since the 1970s: competition appears to be decreasing in many economic sectors, the number of new businesses being started has fallen for decades, and workers are changing jobs at lower rates. At the same time, many industries have become more concentrated, with profits increasingly falling into the hands of fewer firms.

These concerns have been noticed by The Economist, The Atlantic, and Harvard Business School. The solution is more competition, which traditionally has meant more-robust antitrust enforcement. But ensuring competition today means looking at its next frontier: e-commerce. It means understanding the shift from competition as we know it to the era of big data and big analytics, which is radically changing our markets and competitive ecosystem.

Big data, sophisticated computer algorithms, and artificial intelligence are not inherently good or bad, but that doesn’t mean their effects on society are neutral. Their nature depends on how firms employ them, how markets are structured, and whether firms’ incentives are aligned with society’s interests. At times, big data and big analytics can promote competition and our welfare by making information more easily available and by providing access to markets.


However, we cannot uncritically assume that we will always benefit. At times, technology may be used to stifle competition. Take, for example, the evolution of collusion. Cartels are generally regarded in the antitrust world as no-brainers. The cartel agreement, even if unsuccessful, is typically condemned as illegal. If you fix prices, you have few if any legal defenses. In the United States, among other jurisdictions, the guilty executives are often thrown into jail.

So, what happens to cartels with the rise of pricing algorithms? Industries are migrating from the brick-and-mortar pricing environment (where store clerks once stamped prices on products) to dynamic, differential pricing where sophisticated computer algorithms rapidly calculate and update prices. Does that spell the end of cartels, or does it create new ways to collude?

Some argue the former. Cartels are often more durable than standard economic theory predicts. Why? Humans trust one another. “Collusion is more likely,” the U.S. Department of Justice noted, “if the competitors know each other well through social connections, trade associations, legitimate business contacts, or shifting employment from one company to another.” Computers do not exhibit trust. Instead, algorithms engage in cold, profit-maximizing calculations. If algorithms are less likely than humans to trust one another, maybe they’re less likely to collude, too.

However, there are other reasons to worry about algorithmic collusion. Pricing algorithms don’t have the capacity to trust. Nonetheless, by increasing the speed at which price changes are communicated, detecting any cheating or deviations, and punishing those deviations, algorithms can foster new forms of collusion that are achieved through subtler means, that do not amount to a hard-core cartel, and that are beyond the law’s reach.

We consider four scenarios in which computer algorithms may promote collusion:

The first scenario, messenger, concerns humans agreeing to collude and using computers to execute their will. One recent example involves posters sold through Amazon Marketplace. David Topkins and his coconspirators agreed to fix the prices for specific posters sold online. They adopted specific pricing algorithms that collected competitors’ pricing information. They also wrote computer code that instructed their algorithm-based software to set the posters’ prices in conformity with their illegal agreement. Under this scenario, humans collude. They use computers to assist in creating, monitoring, and policing a cartel. In the U.S. and elsewhere, they go to jail if caught.

Our second scenario, hub and spoke, is more challenging. Here we consider the use of a single pricing algorithm to determine the market price charged by numerous users. Uber illustrates this framework. Uber drivers don’t compete among themselves over price; some drivers might be willing to offer you a discount, but Uber’s algorithm determines your base fare and when, where, and for how long to impose a surcharge. This by itself is legal. But as the platform’s market power increases, this cluster of similar vertical agreements may beget a classic hub-and-spoke conspiracy, whereby the algorithm developer, as the hub, helps orchestrate industry-wide collusion, leading to higher prices.
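To make the hub-and-spoke structure concrete, here is a minimal sketch. The function name and surge formula are invented for illustration; this is not Uber’s actual algorithm, only a toy model of one pricing engine serving many sellers:

```python
# Illustrative sketch (invented formula): a single "hub" pricing function
# used by every driver (the "spokes"). Because all spokes price through
# the same hub, riders see identical fares and the spokes never compete
# on price with one another.

def hub_fare(base: float, demand: int, supply: int) -> float:
    """One algorithm prices for the whole platform: fares surge
    when demand outstrips supply, and never fall below base."""
    surge = max(1.0, demand / max(supply, 1))
    return round(base * surge, 2)

# Three different drivers quote the same ride at the same moment.
fares = [hub_fare(base=8.0, demand=120, supply=40) for _driver in range(3)]
print(fares)  # identical fares — no price competition among drivers
```

The point of the sketch is structural: each driver’s agreement with the platform is vertical and lawful on its own, but the shared hub means a single pricing decision propagates industry-wide.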

The third scenario, the predictable agent, is even more challenging. In this new world there is no agreement among competitors. Each firm unilaterally adopts its pricing algorithm, which sets its own price. So we shift from a world where executives expressly collude in smoke-filled hotel rooms to a world where pricing algorithms act as predictable agents and continually monitor and adjust to each other’s prices and market data. The result, we explore, is algorithm-enhanced conscious parallelism — or, as we call it, tacit collusion on steroids.
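A toy simulation, with invented prices, suggests how such predictable agents can sustain supra-competitive prices with no agreement at all. Each firm unilaterally adopts the same simple rule: observe and match the rival’s last price:

```python
# Toy model (not from any real system): two firms each unilaterally run
# a "match the rival's last observed price" rule. No one agrees to
# anything, yet prices never fall from the high starting level, because
# any unilateral cut would be matched immediately and yield no lasting
# advantage.

def match_rival(own_price: float, rival_price: float) -> float:
    """A 'predictable agent': never undercut, simply mirror the rival."""
    return rival_price

def simulate(rounds: int, start_a: float, start_b: float):
    a, b = start_a, start_b
    history = [(a, b)]
    for _ in range(rounds):
        # Each algorithm reacts to the rival's most recent price.
        a, b = match_rival(a, b), match_rival(b, a)
        history.append((a, b))
    return history

history = simulate(rounds=10, start_a=100.0, start_b=100.0)
print(history[-1])  # (100.0, 100.0) — neither algorithm ever undercuts
```

This is conscious parallelism in miniature: the outcome resembles a cartel’s, but there is no agreement for an enforcer to point to, only two independently chosen, mutually predictable rules.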

Finally, in the most challenging collusion scenario, digital eye, we consider how two technological advancements can amplify tacit collusion, creating a new level of stability and scope. The first advancement involves computers’ ability to process high volumes of data in real time to achieve a God-like view of the marketplace. The second advancement concerns the increasing sophistication of algorithms as they engage in autonomous decision making and learning through experience — that is, the use of artificial intelligence. These two technological advances enable a wider, more detailed view of the market, a faster reaction time in response to competitive initiatives, and dynamic strategies achieved by “learning by doing.” Thus they can expand tacit collusion beyond price, beyond oligopolistic markets, and beyond easy detection. With our other three scenarios, we may know when something is amiss. In our last scenario, the contagion spreads to markets less susceptible to tacit collusion under the brick-and-mortar economy and beyond pricing to other competitive initiatives. In the end, with digital eye we may think the markets, driven by these technologies, are competitive. We may believe that tacit collusion in these markets isn’t even possible. And yet we’re not benefiting from this virtual competition.

The latter two scenarios, from an antitrust perspective, are troubling. Unlike humans, computers do not fear detection, possible financial penalties, or incarceration, and they do not respond in anger. The stability needed for tacit collusion is enhanced because computer algorithms are unlikely to exhibit other human biases. Biases can, of course, be embedded in code. But to the extent that biases such as loss aversion, the sunk-cost fallacy, and framing effects are minimized, the algorithm will act more consistently and deliberately than humans in quantifying the profits likely achievable through tacit collusion.

With the industry-wide use of computer algorithms and the resulting greater transparency of the marketplace, computers can more easily track the behavior of numerous rivals and anticipate and react to competitive threats well before any pricing change. Each firm’s algorithm determines whether it can profit by undertaking a competitive initiative. Under our scenarios, the algorithm concludes not. This is because the rivals, possessing the same technology, can quickly identify the competitive initiative and emerging threat and know when and how to retaliate. By responding quickly, the rivals deprive any would-be mavericks of the benefits of launching competitive initiatives, and thereby diminish the incentives to undertake them.
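The deterrence logic can be sketched with hypothetical profit figures (all numbers are invented for illustration). What changes between the brick-and-mortar world and the algorithmic one is only the detection lag, yet that alone flips the calculation:

```python
# Hypothetical payoff arithmetic: an algorithm weighing whether to
# undercut. With slow (human-speed) detection the price cut pays off;
# with real-time algorithmic detection, the gain window shrinks to
# almost nothing and retaliation makes deviating a losing move.

def deviation_payoff(deviation_profit: float,
                     punishment_profit: float,
                     detection_lag: int,
                     horizon: int) -> float:
    """Total profit from undercutting: earn deviation_profit per period
    until detected, then punishment_profit (price-war level) after."""
    lag = min(detection_lag, horizon)
    return deviation_profit * lag + punishment_profit * (horizon - lag)

def stay_payoff(collusive_profit: float, horizon: int) -> float:
    """Total profit from maintaining the tacitly collusive price."""
    return collusive_profit * horizon

# Brick-and-mortar world: rivals need ~30 periods to notice a cut.
slow = deviation_payoff(25, 8, detection_lag=30, horizon=50)   # 910
# Algorithmic world: the cut is detected and punished within 1 period.
fast = deviation_payoff(25, 8, detection_lag=1, horizon=50)    # 417
loyal = stay_payoff(10, horizon=50)                            # 500

print(slow > loyal)   # True: deviation pays when detection is slow
print(fast < loyal)   # True: deviation loses when detection is instant
```

Under fast detection, every firm’s algorithm runs this arithmetic, concludes that undercutting cannot pay, and the supra-competitive price holds with no one ever agreeing to anything.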

The algorithms, if similarly programmed, may better predict a rival’s response. Moreover, if the computers coalesce around a dominant strategy, each firm can detect and appreciate the type of algorithm others are using. The computers can uniformly and swiftly punish a rival’s deviations. With each algorithm sharing a common interest (profits) and common inputs (similar data), the industry-wide use of algorithms may lead to durable tacit collusion among many competitors.

These collusion scenarios are part of several anticompetitive outcomes that necessitate recalibrating our enforcement strategies. As we explore in our book Virtual Competition, big data and big analytics can enable some online sellers to engage in behavioral discrimination. We will also see the rise of a new frenemy dynamic, whereby many companies become increasingly dependent on the beneficence of the dominant superplatforms.

The future of virtual competition isn’t necessarily bleak. The transformative innovations from machine learning and big data can lower our search costs (whether we’re finding a raincoat or a parking spot), lower entry barriers, create new channels for expansion and entry, and ultimately stimulate competition. But these welfare gains aren’t automatic. Much depends on how the companies employ these technologies and whether their incentives are aligned with their customers’ and society’s interests.

Data-driven online markets will not necessarily correct themselves, nor will the anticompetitive effects be obvious. Dominant firms can be a step ahead in developing sophisticated strategies and technologies that distort the perceived competitive environment. Even with evidence that the markets aren’t behaving competitively, antitrust law, while not obsolete, may at times prove unwieldy to apply. Without evidence of anticompetitive agreement or intent, an engaged competition agency will be hamstrung. So our current antitrust laws may not deter some of the collusion scenarios we identify.

Accordingly, businesses and competition authorities must better understand how the rise of sophisticated computer algorithms and the new market reality can significantly change our paradigm of competition — either for better or for worse. We should explore new legal safeguards to promote competition in this new competitive environment. Otherwise, we will likely experience durable forms of collusion that are beyond enforcers’ reach, sophisticated forms of price discrimination, and an array of abuses by data-driven monopolies that, by controlling key platforms like smartphone operating systems, can dictate your company’s future.