This paper, available here, undertakes a critical review of the prospect that self-learning pricing algorithms will lead to widespread collusion independently of human intervention and participation. It reviews the arguments and evidence that such algorithms pose a new and significant threat to competition and antitrust enforcement, and argues that there is no concrete evidence, no real-world example, and no antitrust case showing that self-learning pricing algorithms have colluded, let alone increased the prospect of collusion across the economy.

Part I explains why algorithmic collusion may be a problem.

The debate over algorithmic pricing was initiated by academic lawyers who argued that it poses a real threat to competition which existing antitrust provisions cannot address. The principal worry of this literature is not that pricing algorithms can facilitate collusion by firms. Rather, the concern is with a class of machine-based algorithms that can collude without human involvement. Through self-learning and experimentation, these algorithms independently determine how to optimise profit, which may result in the machines colluding with one another to set artificially high prices.

These authors further claim that, by using self-learning algorithms, the management of firms can escape antitrust scrutiny. Illegal collusion is based on an ‘agreement’ or ‘concerted practice’ between the human representatives of competing firms to fix their prices or other terms of trade. If machines can use algorithms that require no human supervision or intervention, and there is no communication with other firms, then these pricing practices will escape the antitrust laws. The academic discussion suggests that self-learning algorithmic collusion will expand the grey area between illegal and lawful tacit collusion by making the latter more likely.

Part II describes algorithms.

Algorithms are the ‘workhorses’ of the digital sector. Whenever we use a computer or go online, we rely on algorithms; they are ubiquitous. They take many forms, such as the monitoring, search, ranking, pricing, comparison, and data-analysis algorithms routinely used by digital platforms and others offering services on the internet.

This article is principally concerned with the prospects and consequences of self-learning or reinforcement learning (RL) algorithms. In technical terms, these algorithms take actions in a known, partially known, or unknown dynamic environment and observe the outcomes of those actions under different scenarios, including the different prices set by competitors. Based on this feedback, the algorithm sequentially improves its estimate of the objective function (profits) and identifies the corresponding optimal actions (prices). The algorithm is not trained in advance on the optimal choices in different situations; instead, it learns by doing. These ‘black-box’ algorithms often rely on Q-learning, a technique popular among computer scientists.
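To make the mechanics concrete, here is a minimal sketch of the kind of Q-learning pricing experiment discussed in this literature. The paper itself contains no code, so everything below, including the price grid, the toy linear demand function, and the learning parameters, is an illustrative assumption rather than the author's model.

```python
# A minimal sketch of two Q-learning pricing agents, loosely in the spirit of
# the experimental studies discussed in this literature. The price grid, the
# demand function, and all parameters are illustrative assumptions.
import random

PRICES = [1.0, 1.25, 1.5, 1.75, 2.0]   # discrete price grid (assumed)
COST = 1.0                              # marginal cost (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1      # learning rate, discount factor, exploration rate

def profit(own, rival):
    """Toy linear demand: sales fall in the agent's own price and rise in the rival's."""
    demand = max(0.0, 2.0 - 2.0 * own + rival)
    return (own - COST) * demand

# Each agent's state is the rival's last price; Q maps (state, action) to a value estimate.
n = len(PRICES)
q = [{(s, a): 0.0 for s in range(n) for a in range(n)} for _ in range(2)]

def choose(agent, state):
    """Epsilon-greedy: try a random price with probability EPS, otherwise exploit."""
    if random.random() < EPS:
        return random.randrange(n)
    return max(range(n), key=lambda a: q[agent][(state, a)])

state = [0, 0]  # each agent initially believes the rival last charged PRICES[0]
for _ in range(50_000):
    actions = [choose(i, state[i]) for i in range(2)]
    for i in range(2):
        reward = profit(PRICES[actions[i]], PRICES[actions[1 - i]])
        next_state = actions[1 - i]          # the rival's new price defines the next state
        best_next = max(q[i][(next_state, a)] for a in range(n))
        # Standard Q-learning update: move the estimate toward reward plus discounted future value.
        q[i][(state[i], actions[i])] += ALPHA * (
            reward + GAMMA * best_next - q[i][(state[i], actions[i])])
    state = [actions[1], actions[0]]

# Inspect the greedy price each agent has learned to charge.
for i in range(2):
    best = max(range(n), key=lambda a: q[i][(state[i], a)])
    print(f"agent {i} greedy price: {PRICES[best]}")
```

In toy environments of this kind, simulated agents sometimes settle on prices above the competitive level. As discussed in Part III below, the paper's point is precisely that such results rest on restrictive simplifying assumptions like those built into this sketch.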

There is nothing inherently anticompetitive about a pricing algorithm. Such algorithms generate huge benefits and are essential to the operation of the digital economy. Nonetheless, pricing algorithms can be deployed to facilitate collusion in several settings: 1) to implement explicit collusion; or 2) by a common third party (hub-and-spoke) to ‘facilitate’ collusion. In addition, self-learning pricing algorithms may unilaterally ‘collude’ to set supra-competitive prices. However, the term collusion is not synonymous with antitrust liability, since it also covers tacit collusion and collusive outcomes absent communication amounting to an agreement or a concerted practice. In effect, the central issue in this debate is whether self-learning algorithms that have not been programmed to collude can learn to set collusive prices and, if so, whether this is legal or illegal under current antitrust laws.

Part III argues that there is no conclusive evidence of self-learning algorithms posing difficulties for antitrust.

Given the attention that the prospect of machine-based collusion has received, one would have thought it was backed by strong evidence. It is not. There is no evidence, there are no real-world examples, and only a few unpublished experimental studies suggest that machine-based algorithms can collude.

There is no evidence that machine-based algorithmic collusion is a real-world problem or a serious future threat. While competition authorities across the world have taken the threat of algorithmic price collusion seriously and addressed it in a large number of expert reports, these reports do not identify a concrete threat from machine-based algorithms; instead, they describe the risk as speculative. Nor have there been any cartel cases involving self-learning pricing algorithms, despite several cases in which pricing algorithms were used to facilitate collusion.

There are also good reasons to be sceptical of the claim that machine-based collusion is likely to generate mass collusion. The type of algorithm used in the models underpinning these claims is not in widespread commercial use. The models themselves rest on restrictive, simplifying assumptions that do not reflect the complexity of real-world industries. Even under such strictures, these models find price effects that, even when supra-competitive, are not typically at collusive or monopoly levels. Finally, algorithms are also commonly said to lead to price discrimination and personalised pricing, conditions that are hostile to successful collusion.

Part V asks whether there is a legal gap.

It has long been argued that antitrust laws are under-inclusive because they only catch collusion where there is an agreement or concerted practice between humans representing otherwise rival firms. Pricing algorithms are said to widen this ‘gap’: if machines can act autonomously to collude without human intervention, the human agreement is absent, and on this view self-learning algorithmic collusion will not infringe competition law.

However, the author thinks this view is mistaken: companies and directors are liable for their pricing decisions, including those taken by their algorithms. Further, as regards the tacit collusion ‘gap’, the law evolved to reduce the evidential burden on competition regulators, economise on resources, and increase the likelihood of successful prosecutions. To the extent that this approach leads to more prosecutions of effective cartels, the law is not under-inclusive in practice. Nonetheless, as the critics point out, it makes tacit and machine-based algorithmic collusion difficult to capture. The author believes that the concept of ‘concerted practices’ could be extended to encompass such situations.

Further, the author considers that it may be possible to bring ‘by effect’ cartel cases, in particular where there is indirect evidence to support a finding that parallel pricing is the result of collusion. While this could create evidentiary burdens for competition authorities, there are alternatives. For example, firms could be required to make their algorithmic code available to the competition authority in markets that are conducive to collusion and in which RL algorithms are used. Some have even suggested adopting black lists of algorithms likely to lead to supra-competitive prices.

Comment:

This paper develops an elegant argument against over-emphasising the risk of self-learning algorithms. The abstract and introduction adopt a critical tone that is not reflected in the argument that follows, which openly admits that algorithmic collusion is a theoretical possibility that can be taken seriously, even if it does not yet call for practical action. In the end, I think this piece is in line with the papers above and their call for additional research.

The one area where I disagree concerns the argument that there would be no ‘legal gap’ should self-learning algorithms pose a problem. Self-learning algorithms leading to collusive outcomes are likely to create a gap similar to that concerning tacit collusion. Given that these algorithms are often black boxes, I fail to see how companies or directors could be held liable under existing competition law absent evidence of underlying collusion. In effect, the fact that there may be a gap should self-learning algorithms pose a real problem for competition law and policy seems to be implicitly accepted in the author’s discussion of proposals that amount, in practice, to the ex ante regulation of some types of algorithms in some market circumstances.

Of course, the case law might expand the definition of ‘concerted practice’, but I do not think that would work well here. For example, the proposal to rely on indirect evidence and parallel pricing is very similar to the ‘plus factors’ or ‘facilitating practices’ doctrines in the US, or even to existing case law in the EU, so it does not really extend the law as far as I can tell. Further, since the concern is mainly about self-learning algorithms leading to supra-competitive prices absent human intervention, I am not sure that this proposal would cover such circumstances.


