This piece is similar to last week’s papers in that it focuses on the challenges posed by algorithmic tacit collusion, but arguably goes further. In previous work, the authors outlined four scenarios where algorithms may be used to facilitate collusion. There is a consensus that their first two scenarios – Messenger, where algorithms help humans collude; and Hub and Spoke, where a common intermediary provides the algorithm and the pricing decision mechanism that could facilitate collusion – pose competition issues that should be addressed under existing rules. Their third and fourth scenarios have proved more controversial. Under the third scenario, called Tacit Collusion on Steroids – The Predictable Agent, companies could unilaterally use algorithms with the intent to facilitate conscious parallelism (also known as tacit collusion). Under the fourth scenario, called Artificial Intelligence, God View, and the Digital Eye, algorithms may arrive at this anticompetitive outcome on their own.
Tacit collusion is beyond the reach of the competition laws of many jurisdictions, including the US and the EU. Some economists have further questioned the likelihood of tacit collusion in either the brick-and-mortar or the digital economy. They question whether companies in concentrated industries ripe for tacit collusion would have the incentive and ability to develop pricing algorithms for that purpose. These economists further argue that tacit collusion with three or more rivals – whether by algorithms or humans – is unlikely, as coordination problems are hard to solve without communication even in simple static games. According to this view, since algorithms cannot communicate to resolve this coordination problem, they cannot tacitly collude.
This paper, which is available here, challenges this view. It addresses new evidence of algorithmic collusion, and the gap between law and economic theory in this area. Ultimately, the paper argues that algorithmic tacit collusion is not only possible, but actually warrants the increasing attention of competition enforcers. It does so as follows:
Part II outlines the theory of how pricing algorithms, in specific market conditions, may foster conscious parallelism.
Everyone agrees that tacit collusion is a challenging area for competition enforcement, as it leads to an anticompetitive outcome (namely higher prices, reduced output or market sharing) without any illegal agreement among competitors. Although there is great variance in how jurisdictions interpret the notion of agreement, they traditionally require some sort of proof of direct or indirect contact showing that firms have not acted independently from each other (the so-called ‘meeting of the minds’). With tacit collusion (conscious parallelism), there is no such contact and, therefore, no illegal agreement.
Tacit collusion has taken another dimension with the proliferation of pricing algorithms. Many competition authorities recognise the risk that algorithms can facilitate and enhance tacit collusion.
At the same time, algorithmic tacit collusion – that is, the use of algorithms to unilaterally and rationally react to market characteristics, leading to the interdependence of company behaviour – will not affect every (or even most) markets. First, algorithmic tacit collusion likely requires concentrated markets involving homogenous products where the algorithms can monitor to a sufficient degree competitors’ pricing, other key terms of sale, and any deviations from a current equilibrium. A second important requirement is that, once a deviation (e.g., discounting) is detected, a credible deterrent mechanism is in place. A third requirement is that “the reactions of outsiders, such as current and future competitors not participating in the coordination, as well as customers, should not be able to jeopardise the results expected from the coordination.” A fourth condition is that tacit collusion is more profitable than competition. The algorithm, when maximising profits, “would need to decide that it is a better course of action than competitive pricing, especially if competitive pricing leads to drastically larger sales volumes”.
This means that algorithmic tacit collusion will likely only arise in concentrated markets where buyers cannot exert buyer power (or entice sellers to defect), sales transactions tend to be “frequent, regular, and relatively small,” and the market is characterised by high entry barriers. To be clear, no bright line exists of when an industry becomes sufficiently concentrated for either express or tacit collusion. Indeed, competition agencies often struggle to predict when a merger may facilitate tacit collusion. Nonetheless, when the above conditions are present, the risk of tacit collusion is high. Importantly, the nature of electronic markets, the availability of data, the adoption of similar algorithms by key providers, and the stability and transparency they foster will likely push some markets that were just outside the realm of tacit collusion into interdependence.
Part III tackles the supposed instability of tacit collusion.
Some argue that, absent some communication, tacit collusion is inherently unsustainable even in markets with the characteristics described above. According to this view, tacit collusion will rarely occur in the real world without some supporting communication: absent prior communication, firms are unlikely to develop a mutual understanding over a collusive strategy. These assertions are often based on empirical observations under laboratory conditions, with perfect control and transparency over communications. Permitting communications in experiments, even briefly, increased participants’ ability to sustain coordination on higher prices; absent communications, tacit collusion was difficult, if not impossible, to sustain.
The authors argue that this view is inconsistent with the legal framework, has failed to persuade enforcers and courts with respect to tacit collusion in the brick-and-mortar economy, and is unlikely to gain traction in the digital economy.
In particular, when competition agencies or courts observe conscious parallelism that yields supra-competitive pricing, they do not assume that competitors must have communicated with each other to collude. In effect, the possibility of rational tacit collusion provides a defence against accusations of anticompetitive behaviour. The law recognises that, under certain market conditions, companies can rationally engage in parallel behaviour and behave as if they were colluding by adjusting to market characteristics without communicating, and that this will not infringe competition law.
The case law in the EU and the US provides many examples of conscious parallelism/tacit collusion where consumers were harmed but there was no evidence of communication among multiple competitors, and therefore parallel behaviour was not condemned as illegal. Instead, tacit collusion can only be attacked indirectly, by going after practices that facilitate tacit collusion or by targeting mergers that foster tacit collusion. In other words, there are many instances of judicial recognition that tacit collusion can arise without communication.
Part IV addresses doubts concerning the plausibility of algorithmic collusion.
Once one accepts the premise that conscious parallelism can occur without the communications that would expose firms to antitrust liability, one must then ask whether algorithms can facilitate tacit collusion, and do so more effectively than humans. Some have questioned the ability of algorithms to stabilise tacit collusion. In particular, it is argued that the potentially large number of collusive equilibria presented by algorithms will decrease the likelihood of alignment in a repeated game. In addressing this argument, the authors distinguish between simple and complex algorithms.
“Simple” adaptive algorithms are programmed to monitor and “react”. Humans may program algorithms to reflect the logic behind conscious parallelism – by punishing deviations and following price increases. The authors emphasise how algorithms may lead to increased transparency, identify laboratory studies that demonstrate that this may lead to higher prices, and provide examples of actual situations where increased transparency led to price rises (e.g. requirements to disclose petrol prices in Chile and Germany).
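The reaction logic the authors describe can be made concrete with a minimal sketch. The rule below is purely illustrative – the focal price, punishment price, and punishment length are my own assumptions, not the paper’s model – but it captures the two behaviours that sustain conscious parallelism: following the collusive price, and punishing any observed discount with a finite spell of competitive pricing.

```python
# Hypothetical sketch of a "simple" adaptive pricing rule that mirrors the
# logic of conscious parallelism: follow rivals' prices upward, punish
# discounts. All names and numbers are illustrative assumptions.

COLLUSIVE_PRICE = 10.0   # tacitly focal (supra-competitive) price
COMPETITIVE_PRICE = 6.0  # punishment price
PUNISH_PERIODS = 3       # length of the deterrent phase after a deviation

def next_price(rival_price, punish_left):
    """Return (our price this period, remaining punishment periods)."""
    if punish_left > 0:                      # mid-punishment: keep pricing low
        return COMPETITIVE_PRICE, punish_left - 1
    if rival_price < COLLUSIVE_PRICE:        # deviation detected: start punishing
        return COMPETITIVE_PRICE, PUNISH_PERIODS - 1
    return COLLUSIVE_PRICE, 0                # otherwise hold the focal price

# A rival that discounts once (period 3) triggers a finite price war,
# after which prices realign at the focal level.
rival = [10.0, 10.0, 8.0, 10.0, 10.0, 10.0, 10.0]
history, punish = [], 0
for p in rival:
    mine, punish = next_price(p, punish)
    history.append(mine)
```

The credible-deterrence condition from Part II is visible here: a single discount forfeits three periods of the focal price, so deviation is unprofitable whenever the one-period gain from undercutting is smaller than the punishment loss.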
The authors further note how the use of similar algorithms by different firms, and the ability to identify the strategy employed by other firms, may further stabilise parallel behaviour. For example, companies may rely on the same provider of algorithms, leading to something akin to an algorithmic hub-and-spoke arrangement. This ‘incidental’ hub-and-spoke algorithmic structure and similar unilateral strategies would, when executed carefully and absent illicit communication, not trigger antitrust intervention under current laws even if they were adopted with the goal of stabilising parallel behaviour.
Complex algorithms, including sophisticated self-learning algorithms, may rely on artificial intelligence to determine the optimal strategy autonomously. In this case, human attempts to stabilise parallel behaviour on the market would not occur. However, self-learning algorithms may nonetheless decide to adopt a strategy which may lead to price increases. The question here is whether in some future markets, tacit collusion could be sustained without human intervention. Interestingly, enterprising scholars, taking up suggestions to develop algorithmic tacit collusion incubators, are doing just that. The authors report some of their recent findings, focusing mainly on a paper I reviewed here last week. While still in the early stages of research, the findings suggest that competition authorities have reasonable grounds to be concerned about algorithmic tacit collusion.
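To give a flavour of what those incubator experiments involve, here is a toy version of the setup: two independent Q-learning agents repeatedly price in a duopoly, each observing only the rival’s last price. Everything here – the payoff function, the two-price grid, the learning parameters – is my own simplified assumption, not the experimental design of the paper under review; the point is only that neither agent is programmed to collude, yet each learns its strategy autonomously from profits alone.

```python
# Toy sketch of two self-learning pricing agents in a repeated duopoly.
# All parameters and the payoff function are illustrative assumptions.
import random

random.seed(0)

PRICES = [6.0, 10.0]  # competitive vs supra-competitive price (illustrative)

def profit(p_own, p_rival):
    # Toy payoff: the lower-priced firm takes the whole market; ties split it.
    if p_own < p_rival:
        return p_own
    if p_own > p_rival:
        return 0.0
    return p_own * 0.5

class QAgent:
    def __init__(self, alpha=0.3, gamma=0.95, epsilon=0.1):
        self.q = {}  # (rival's last price, own price) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:        # occasional exploration
            return random.choice(PRICES)
        return max(PRICES, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0) for a in PRICES)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

a, b = QAgent(), QAgent()
state_a = state_b = PRICES[0]          # each observes the rival's last price
for _ in range(5000):
    pa, pb = a.act(state_a), b.act(state_b)
    a.learn(state_a, pa, profit(pa, pb), pb)
    b.learn(state_b, pb, profit(pb, pa), pa)
    state_a, state_b = pb, pa
```

Whether the agents settle on the supra-competitive price depends on the parameters and run length; the incubator studies referenced by the authors report that, in richer versions of this game, such agents can indeed learn reward–punishment strategies that sustain supra-competitive prices without any human instruction to do so.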
Part V concludes.
Competition agencies must develop tools to assess (and deter) the risk of tacit coordination in some susceptible industries. This may require a distinction between how human and algorithmic tacit collusion are approached. This insight may also affect merger review. In markets where algorithms are present, competition agencies should consider lowering their intervention threshold and investigate the risk of coordinated effects not only in cases of 3 to 2 mergers, but potentially also in 4 to 3 or even in 5 to 4 mergers. Agencies may also have to reconsider their approach to conglomerate mergers when tacit collusion can be facilitated by multimarket contacts.
Enforcers and policymakers increasingly recognise that the current antitrust enforcement toolbox is limited in effectively deterring algorithmic tacit collusion. A refinement of the approach to signalling may be a good place to start. Restrictions on certain market manipulations (through bots that underscore parallelism) may be another. The issue should be approached in a measured manner, as part of the everlasting adjustment of competition enforcement to market and technological reality.
I really enjoyed this paper. At times, I wondered whether the outlining of the literature on the impossibility of tacit collusion was a bit of a straw man, but I am not familiar with it so I am unable to comment. Given the consensus in practice that tacit collusion can and does occur – which is very elegantly mapped out by the authors – I assume this literature is not exactly mainstream.
In any event, I think the paper provides a good overview of the conceptual challenges posed to competition law by algorithms that make it likelier that tacit collusion – and, hence, consumer harm – will arise.
I was slightly surprised by how conservative the proposed remedies were, however. As someone who is an incrementalist by temperament, I appreciate the caution with which the authors approach these matters. However, if the algorithmic developments outlined by the authors do take place, this would fundamentally challenge a number of the substantive underpinnings of competition law – one of which is the assumption that tacit collusion is relatively rare and can be addressed indirectly, by targeting conduct or changes to market structure that make collusion more sustainable. Given that, we may need to start thinking about whether more incisive approaches will be required should algorithmic collusion become prevalent.