This paper is available here.

The efficacy of a market system is rooted in competition. Nothing more fundamentally undermines this process than collusion, when firms agree not to compete with one another and consumers are harmed by higher prices. The increasing delegation of price setting to algorithms has the potential to open a back door through which firms could collude lawfully. Such algorithmic collusion can occur when artificial intelligence (AI) algorithms learn to adopt collusive pricing rules without human intervention, oversight, or even knowledge.

The paper's first section looks at human collusion.

Collusion among humans typically involves three stages. First, firm staff with price-setting authority communicate with the intent of agreeing on a collusive rule of conduct. Second, successful communication results in the mutual adoption of a collusive rule of conduct. A crucial component of this rule is retaliatory pricing: each firm raises its price and maintains that higher price under the threat of a “punishment,” such as a temporary price war, should it cheat and deviate from the higher price. Third, firms set the higher prices that are the consequence of having adopted those collusive pricing rules.

To determine whether firms are colluding, one could look for evidence concerning any of the three stages. However, evidence related to the last two stages—pricing rules and higher prices—is generally regarded as insufficient. Courts do not use the competitive price level as a benchmark to identify collusion. Likewise, it is difficult to assess whether firms’ rules of conduct are collusive, absent external evidence of a collusive arrangement. Furthermore, even if one could observe what looks like a price war, it would be difficult to rule out innocent explanations (such as a decrease in the firms’ costs or a fall in demand).

As a result, antitrust law and its enforcement have focused on the first stage: communications. Firms are found to be in violation of competition law when communications (perhaps supplemented by other evidence) are sufficient to establish that firms have a “meeting of minds,” a “concurrence of wills,” or a “conscious commitment” that they will not compete.

The second section looks at algorithmic collusion.

Concerns regarding algorithmic collusion have arisen recently for two reasons. First, pricing algorithms were once based on pricing rules set by programmers, but now often rely on AI systems that learn autonomously through active experimentation. After the programmer has set a goal, such as profit maximisation, algorithms are capable of autonomously learning rules of conduct that achieve that goal, possibly with no human intervention. Second, competitors' prices are available to a firm in real time in online markets. Sustaining supra-competitive prices requires the prospect of punishing a firm that deviates from the collusive agreement. The more quickly the punishment is meted out, the less the temptation to cheat. Thus, rapid detection of competitors' prices in online markets facilitates collusion, leading to the emergence and persistence of higher prices.
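The detection-speed point can be made precise with the textbook repeated-game condition for sustaining collusion. The sketch below, using purely hypothetical per-period profits, computes the smallest discount factor at which collusion is self-sustaining when a deviation goes unpunished for a given number of periods (a grim-trigger setup). A longer detection lag raises the threshold, so faster detection makes collusion easier to sustain, which is the mechanism the paragraph above describes.

```python
def critical_delta(pi_coll, pi_dev, pi_pun, lag):
    """Smallest discount factor d at which collusion is self-sustaining
    when a deviation escapes punishment for `lag` periods.

    Collusion is sustainable iff the collusive payoff stream weakly
    exceeds the deviation stream:
        pi_coll/(1-d) >= pi_dev*(1-d**lag)/(1-d) + pi_pun*d**lag/(1-d)
    which rearranges to d**lag >= (pi_dev - pi_coll)/(pi_dev - pi_pun).
    """
    return ((pi_dev - pi_coll) / (pi_dev - pi_pun)) ** (1.0 / lag)

# Hypothetical per-period profits: collusion 10, undercutting 15, price war 5.
thresholds = [critical_delta(10, 15, 5, lag) for lag in (1, 2, 4)]
```

With these numbers, instant detection (lag of one period) requires a discount factor of at least 0.5, while a four-period lag pushes the threshold above 0.84: slower detection shrinks the set of circumstances in which collusion can be sustained.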

These concerns have experimental and empirical support. On the experimental side, recent research has seen collusion emerge spontaneously in computer-simulated markets. In these studies, commonly used reinforcement-learning algorithms learned to initiate and sustain collusion in the context of well-accepted economic industry models. On the empirical side, a recent study has provided evidence of algorithmic collusion in Germany’s retail gasoline markets. The delegation of pricing to algorithms was found to be associated with a 20 to 30% increase in the mark-up of stations’ prices over cost.
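For readers unfamiliar with how such computer-simulated markets are set up, here is a minimal sketch in the same spirit: two Q-learning agents repeatedly set prices on a discrete grid, with last period's price pair as the state. Everything in it, the demand function, the price grid, the learning parameters, is illustrative rather than the specification used in the cited studies, and whether supra-competitive prices actually emerge depends on parameters and run length.

```python
import random

random.seed(0)

PRICES = [1.0, 1.1, 1.2, 1.3]        # discrete price grid (illustrative)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.05  # learning rate, discount, exploration
COST = 0.5

def profit(p_own, p_rival):
    # Differentiated-products demand (illustrative): undercutting the
    # rival raises own demand, so each firm is tempted to cheat.
    demand = max(0.0, 2.0 - 2.0 * p_own + p_rival)
    return (p_own - COST) * demand

states = [(a, b) for a in PRICES for b in PRICES]
# One Q-table per firm: state = last period's price pair, action = own price.
Q = [{s: [0.0] * len(PRICES) for s in states} for _ in range(2)]

state = (PRICES[0], PRICES[0])
for t in range(100_000):
    acts = []
    for i in range(2):
        if random.random() < EPS:
            acts.append(random.randrange(len(PRICES)))  # explore
        else:
            q = Q[i][state]
            acts.append(q.index(max(q)))                # exploit
    new_state = (PRICES[acts[0]], PRICES[acts[1]])
    for i in range(2):
        r = profit(new_state[i], new_state[1 - i])
        target = r + GAMMA * max(Q[i][new_state])       # one-step Q-update
        Q[i][state][acts[i]] += ALPHA * (target - Q[i][state][acts[i]])
    state = new_state
```

The point of the sketch is only to show how little the programmer specifies: a goal (profit), a feedback loop, and a state. Any collusive property of the resulting pricing rules is learned, not coded.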

The third section outlines a new policy approach.

Should algorithmic collusion emerge, society would lack an effective defence against it. Algorithmic collusion does not involve the type of communications that have been the route to proving unlawful collusion. Even if alternative evidentiary approaches were to arise, there would be no one to whom liability could attach.

There is an alternative path: to target the collusive pricing rules learned by the algorithms that result in higher prices. Algorithms can be audited and tested in controlled environments. One can then simulate all sorts of possible deviations from existing prices and observe the algorithms’ reaction in the absence of any confounding factor. In principle, the latent pricing rules can thus be identified precisely. This approach was successfully used by researchers to verify that pricing algorithms had learned the collusive property of reward (keeping prices high unless a price cut occurs) and punishment (through retaliatory price wars should a price cut occur). To show this, the researchers overrode the pricing algorithm of one firm, forcing it to set a lower price. As soon as the algorithms regained control of the pricing, they engaged in a temporary price war. Having learned that undercutting the other firm’s price brings forth a price war (with associated lower profits), the algorithms evolved to maintain high prices.
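The audit logic described above is easy to illustrate with a hand-coded, stylised reward-punishment rule. To be clear, this is not the learned algorithm from the study: the two price levels and the length of the price war are assumptions made for the example. Overriding firm 0's price for a single period triggers a temporary price war, after which both firms return to the high price, which is precisely the signature the audit looks for.

```python
HIGH, LOW = 1.3, 1.0   # collusive and punishment prices (illustrative)
PUNISH_LEN = 3         # length of the retaliatory price war (assumed)

def reward_punishment_rule(last_pair, counter):
    """Keep the high price; if anyone undercut, wage a temporary price war."""
    if counter > 1:
        return LOW, counter - 1        # price war in progress
    if counter == 1:
        return HIGH, 0                 # war over: forgive, restore high price
    if min(last_pair) < HIGH:
        return LOW, PUNISH_LEN         # deviation detected: start a price war
    return HIGH, 0                     # cooperative phase

# The audit: let both rules run, then override firm 0 for one period (t = 2).
prices, counters = [(HIGH, HIGH)], [0, 0]
for t in range(1, 10):
    p0, counters[0] = reward_punishment_rule(prices[-1], counters[0])
    p1, counters[1] = reward_punishment_rule(prices[-1], counters[1])
    if t == 2:
        p0 = LOW                       # forced deviation by the auditor
    prices.append((p0, p1))
```

Because the environment is controlled, the reaction is observed with no confounding factors: the price history shows high prices, the forced cut, a three-period war, and a return to the high price.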

Focusing on the collusive pricing rules is the key to identifying, preventing, and prosecuting algorithmic collusion. Policy cannot target the higher prices directly, nor can it target communications that do not occur. Retaliatory pricing rules, however, may now be observable, as firms’ pricing algorithms can be audited and tested. The authors therefore propose that antitrust policy shift its focus from communications (with humans) to rules of conduct (with algorithms). One route is to make certain pricing algorithms unlawful. Another path is to make firms legally responsible for the pricing rules that their learning algorithms adopt. Firms may then be incentivised to prevent collusion by routinely monitoring the output of their learning algorithms.

Comment:

Let us take a step back – this paper was published in Science! Yes, that ‘Science’, the one that usually publishes pieces on how to map the genome and on supersymmetry. Do not let anyone ever tell you that competition law and policy is not a hard science! I mean, we had to become a branch of computer science, but I will take it!

More seriously, this (short) note should be recommended reading for anyone with an interest in the topic. Building on another paper by its authors that I reviewed some months ago, it provides a very good overview of why algorithmic collusion is a problem and what should be done about it – without in any way sugar-coating how challenging the route ahead will be. Personally, I anticipate serious problems with establishing liability in these scenarios, if nothing else because of the quasi-criminal nature of antitrust enforcement against collusive practices and the related requirement for some type of mens rea. I would not be surprised if new rules were necessary, or even new regulatory approaches.

Further, these problems are not specific to competition law – after all, software (and algorithms) are eating the world. I expect quite a lot of cross-disciplinary fertilisation in this field. Even then, I would be surprised if the solution ever appears in the pages of ‘Science’ or ‘Nature’, but who knows…
