The Monopolies Commission is a permanent, independent expert committee that advises the German government and legislature on competition policy, competition law and regulation. The report chapter reviewed here is already one year old, and can be accessed here.

Monopolies Commission

In data-intensive sectors such as the digital economy, pricing algorithms can facilitate collusion by automating collusive behaviour. For example, algorithms can stabilise collusion by collecting information on competitors’ prices and sanctioning deviations from collusive market outcomes more quickly. The use of pricing algorithms can also render explicit anticompetitive agreements or concerted practices dispensable. As a result, the difficulty of determining whether a concerted practice is actually taking place will increase with the use of pricing algorithms.

The Monopolies Commission considers that the use of pricing algorithms makes it necessary to strengthen market monitoring through sector inquiries. Since consumer associations are the most likely to have indications of collusive overpricing, the Monopolies Commission recommends that consumer associations be given the right to initiate competition sector inquiries. It further recommends that, in the context of private claims, the burden of proof of damage be reversed where there are indications that the use of pricing algorithms has contributed considerably to collusive market outcomes and that the competition rules are inadequate to address this type of collusive overcharge. Lastly, the Monopolies Commission recommends that the rules governing the liability of third-party providers of collusive algorithms for competition infringements be reviewed.

The report is structured as follows:

Section 2 reviews the characteristics of pricing algorithms.

Algorithms, for our purposes, are sets of instructions that take the form of software. Static and, less frequently, dynamic algorithms are already commonly used for pricing. Algorithms can also evolve, depending on the leeway granted to them. Self-learning pricing algorithms in particular are able to uncover patterns in data and independently find solutions to problems, and have the capacity to evolve beyond simply following pre-programmed rules. The effective exclusion of certain rules and outcomes is therefore difficult and may even be impossible, depending on the algorithm. Software code can, in certain circumstances, evolve to such an extent that there is no practicable way to trace the original code. This can create difficulties when investigating collusion.
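To make the distinction concrete, here is a minimal Python sketch of a static and a dynamic pricing rule (all names, numbers and the undercutting logic are hypothetical illustrations, not taken from the report). A self-learning pricer cannot be written down this way, precisely because its rule is derived from data rather than pre-programmed:

```python
# Minimal, hypothetical sketch of the distinction drawn above.

def static_price(cost: float, margin: float = 0.2) -> float:
    """Static rule: a fixed markup that ignores market signals entirely."""
    return cost * (1 + margin)

def dynamic_price(competitor_price: float, floor: float) -> float:
    """Dynamic rule: reacts to an observed competitor price by slightly
    undercutting it, but never drops below a given price floor."""
    return max(floor, competitor_price * 0.99)

print(static_price(5.0))                                # 6.0
print(dynamic_price(competitor_price=7.0, floor=5.5))   # 6.93

# A self-learning pricer would instead adjust its own rule from data
# (e.g. via reinforcement learning); the mapping from inputs to prices is
# not fixed in advance, which is why tracing the "original code" of its
# behaviour can be difficult or impossible.
```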

Section 3 reviews the impact of algorithms on collusion.

In economic terms, collusion is typically understood as a market outcome in which competitors achieve higher profits by coordinating their behaviour than they would achieve by competing. This is detrimental to consumers, who pay an overcharge, and to society at large, since the decline in consumer surplus is typically larger than the additional profits that companies derive from overcharges. Importantly for present purposes, the effects of collusive practices are identical regardless of whether collusion is express, where the parties communicate with one another, or tacit, where the participants align their behaviour without direct communication.

In practice, successful collusion must overcome a number of coordination problems: (i) competitors must have a uniform understanding of the terms (e.g. prices and quantities) on which they want to offer their goods on the market; (ii) potential deviations from the agreed collusive target must be observable, since otherwise individual companies would have an incentive to deviate from the collusive outcome in order to maximise their own profit; (iii) there needs to be a credible threat of retaliation against those who deviate. Collusion is more likely to occur in markets where these problems can be addressed – either with or without communication between the parties.
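Points (ii) and (iii) are commonly formalised in the economic literature as a repeated-game incentive constraint. The following is a standard textbook condition, not taken from the report: a firm sticks to collusion when the discounted stream of collusive profits beats a one-off deviation gain followed by punishment.

```latex
% Standard textbook incentive constraint (not from the report):
\[
\underbrace{\frac{\pi_C}{1-\delta}}_{\text{collude forever}}
\;\ge\;
\underbrace{\pi_D + \frac{\delta\,\pi_N}{1-\delta}}_{\text{deviate once, then punishment}}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{\pi_D - \pi_C}{\pi_D - \pi_N}
\]
% with per-period profits \pi_D (deviation) > \pi_C (collusion) > \pi_N
% (punishment), and discount factor \delta between 0 and 1.
```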

The economic literature has identified a number of factors that can influence the possibility of collusion. These factors can be roughly divided into three categories: (i) market structure (e.g. number of competitors, barriers to market entry, frequency of interactions and market transparency); (ii) demand-side characteristics (development of demand, fluctuations and cycles in demand); and (iii) supply-side characteristics (degree of product differentiation, symmetry of companies with regard to cost structures and production capacities, frequency of innovation, etc.).

The question here is whether algorithms can significantly impact these factors and facilitate collusion. The report identifies a number of elements that point towards the use of algorithms facilitating collusion.

A first element is the frequency of price adjustments: collusion is easier to maintain in markets with more frequent interactions, because deviations from the collusive outcome can be detected and sanctioned more quickly. Algorithms can be used to adjust prices dynamically and very quickly. This enables companies to react to possible deviations in real time, or even to anticipate them. The speed with which deviations can be identified and punished in algorithmic markets renders deviations unprofitable and makes collusion more stable.
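This speed effect can be made precise in the same textbook notation as above (again an illustration, not the report's own analysis): if a deviation goes unpunished for k periods, the deviator earns the deviation profit over those periods before punishment starts, and collusion is sustainable when:

```latex
% Same notation as before; k = number of periods before a deviation is
% detected and punished (standard extension, not from the report):
\[
\frac{\pi_C}{1-\delta}
\;\ge\;
\frac{(1-\delta^{k})\,\pi_D + \delta^{k}\,\pi_N}{1-\delta}
\quad\Longleftrightarrow\quad
\delta^{k} \;\ge\; \frac{\pi_D - \pi_C}{\pi_D - \pi_N}
\]
% Because \delta < 1, \delta^k shrinks as k grows. Algorithms that cut the
% detection lag k relax the constraint, which is exactly why near-instant
% retaliation stabilises collusion.
```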

A second element concerns market transparency: high degrees of transparency can increase the likelihood of collusion. The use of algorithms can further increase market transparency, particularly in big data contexts. Algorithms also make it possible to distinguish more reliably between intentional deviations that should be sanctioned by the cartel, and market-related adjustments (e.g. to fluctuations in demand) that should not lead to sanctions from the other parties to collusion.
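The following is a hypothetical Python sketch of that monitoring logic (the demand index, thresholds and names are all invented for illustration): a rival's price cut is treated as demand-related if it tracks a demand-adjusted benchmark, and as a sanctionable deviation otherwise.

```python
# Hypothetical sketch of the monitoring idea described above.

def classify_price_move(observed_price: float,
                        collusive_price: float,
                        demand_index: float,
                        tolerance: float = 0.02) -> str:
    """demand_index = current demand / reference demand (1.0 = unchanged)."""
    benchmark = collusive_price * demand_index   # price expected given demand
    if observed_price >= benchmark * (1 - tolerance):
        return "demand-related adjustment"       # no retaliation warranted
    return "intentional deviation"               # candidate for sanctioning

print(classify_price_move(9.0, 10.0, demand_index=0.9))  # demand-related adjustment
print(classify_price_move(8.0, 10.0, demand_index=1.0))  # intentional deviation
```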

A third element concerns the number of market participants. A larger number of companies in a market makes collusion less likely, because it becomes more difficult to reach an agreement between all companies and to monitor their compliance with it. However, the ability to analyse large amounts of data quickly using algorithms makes it easier to coordinate and monitor the behaviour of a large number of companies, making collusion possible in less concentrated markets.

Section 3 further tries to map out the different roles that algorithms can play in bringing about collusion.

A first role that algorithms can play is that of an instrument with which collusive agreements are implemented. This includes the implementation of agreed price adjustments as well as the monitoring of compliance with agreements. The use of algorithms in this manner has already been sanctioned in jurisdictions such as the US and the UK.

A related role that algorithms can play is that of monitoring price movements outside the scope of express collusion, e.g. where the pricing software is configured to automatically undercut any rival price that falls below a certain level. This practice has effects akin to algorithmic signalling. Since algorithms allow this process to run much faster than before, the costs of signalling in the form of lost sales can be significantly reduced. The use of monitoring algorithms should therefore lead to more signalling of a willingness to collude, and ultimately to more collusion.
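A hypothetical sketch of such a configuration (names and numbers invented): the software leaves its own price untouched unless a rival's price falls below a threshold, in which case it undercuts immediately, so the signal of retaliatory intent costs almost nothing in lost sales.

```python
# Hypothetical sketch of the reactive configuration described above.

def reactive_price(own_price: float, rival_price: float,
                   threshold: float, undercut: float = 0.01) -> float:
    if rival_price < threshold:                # aggressive pricing detected
        return rival_price * (1 - undercut)    # retaliate instantly
    return own_price                           # otherwise hold the line

print(reactive_price(10.0, 10.0, threshold=9.5))  # 10.0  (no reaction)
print(reactive_price(10.0, 9.0, threshold=9.5))   # 8.91  (instant undercut)
```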

A second role that algorithms can play in collusion is to embody the collusive agreement itself. By using the same algorithm, companies could coordinate their behaviour for the purpose of collusion and automatically adapt their (collusive) behaviour to changing market conditions.

A variant of this scenario is the use of algorithms in so-called star or hub-and-spoke cartels. These are characterised by the fact that competitors at the same level of the value chain do not (only) align their behaviour horizontally, but have this behaviour organised by a central actor (the hub) with links to these companies (the spokes). In such a constellation, the same or similar algorithms can be used to ensure that the participants align their behaviour.

Despite the risks of algorithmic collusion outlined above, the Monopolies Commission concludes that the influence of algorithms on collusive market outcomes will ultimately depend on the structural characteristics of a market, as well as on other supply- and demand-side considerations. It is only by taking these factors into account that one may determine whether algorithms promote collusion in individual cases.

Section 4 looks at whether the legal framework should be adjusted to address the risks of algorithmic collusion.

Competition rules do not prohibit collusive outcomes as such. Instead, they assign liability only for market behaviour that restricts competition and is under the control of market participants. In this respect, and while the use of algorithms does not exempt collusive practices from competition law scrutiny, two challenges complicate the application of competition law to pricing algorithms. First, creating a collusive algorithm may require no more than the definition of a goal function (e.g. profit maximisation), which makes it hard to identify any anticompetitive conduct. Second, collusive market outcomes can come about not only because of anticompetitive behaviour, but also because of rational parallel behaviour enabled by these algorithms.

So far, competition enforcement has mainly been concerned with relatively simple static pricing algorithms, which market participants have used to pursue strategies that would have been anticompetitive irrespective of the algorithm. However, the characteristics of pricing algorithms may make it more difficult to apply competition rules in the future. This raises the question of whether the competition rules, as they stand, are fit for purpose.

The use of pricing algorithms may lead to collusive market outcomes occurring more often than before. However, while explicit collusion is prohibited by existing competition rules, tacit collusion is caught by the prohibition of abuse of dominance only under strict conditions related to joint dominance. This is reflected in the fact that practices involving algorithms have so far been sanctioned only where the algorithm reflects a pre-existing joint intention to coordinate market prices. Yet the increasing use of pricing algorithms makes collusion-related consumer damage more likely in the future. At the same time, algorithms may dispense with the need for agreements or concerted practices restricting competition. It is therefore conceivable that algorithms will be used in the future to achieve the effects of a cartel without requiring the parties to enter into restrictive agreements or concerted practices.

The Monopolies Commission thus recommends the adoption of rules to neutralise as far as possible the potential contribution of pricing algorithms to collusive market outcomes. A first suggestion is to allow consumer associations to request sector inquiries where collusive algorithmic pricing is suspected. The task of competition authorities is limited to enforcing the competition rules, which do not prohibit collusion-level prices as such, even though such price levels can lead to consumer damage. Consideration should therefore be given to allowing consumer associations to request antitrust investigations into certain sectors when there is a suspicion of consumer-damaging collusion. The competition authority would be able to reject the application in a number of circumstances, including where the costs of pursuing a sector inquiry would be disproportionate.

A second possibility would be to impose a burden on parties investigated for algorithmic collusion to prove that the use of an algorithm has not contributed to a competition infringement when there is prima facie evidence of anticompetitive behaviour. Such an option should only be adopted, however, if concrete evidence emerges that the use of pricing algorithms significantly furthers collusive market outcomes, and that the enforcement of competition rules is inadequate to protect consumers from such damage.

A related option would be to create a rebuttable presumption of cartel damage in all cases where pricing algorithms are used for infringements of competition law. This reversal of the burden of proof means that the defendant would have to prove that its pricing algorithm did not contribute to the claimed damage. In cases of joint dominance, the reversal would additionally reduce the requirements for plaintiffs to prove that an abuse led to price increases. It should be noted that this reversal of the burden of proof would apply only to private claims, and not to public proceedings. However, there is a risk that such a reversal would amount to a form of strict liability in the context of complex pricing algorithms (especially self-learning ones), where users do not control the price-setting behaviour of the algorithm. This risk could in theory be reduced through appropriate algorithmic selection or design (compliance by design). However, such requirements would impose algorithmic pricing standards, impair the ability of companies to adjust their pricing to market conditions, and create market entry barriers through regulatory costs that do not seem justified at present.

A last proposal concerns the liability of providers of pricing algorithms. While such third-party providers may be caught by competition law in some circumstances – particularly when they knowingly assist in the implementation of a collusive arrangement – they may escape liability in others, e.g. when the IT service provider knowingly or incidentally brings about a collusive market outcome without the approval of the parties involved. The Monopolies Commission recommends that the rules on third-party liability be reviewed, so that liability need not depend on the behaviour of the algorithm’s users alone, but can also be established solely on the basis of the behaviour of the IT service provider itself. This would induce IT service providers to refrain from designing collusion-promoting algorithms, and to modify algorithms which, during their use, give rise to competition issues.


Comment:

This is an official document, so I am limited in what I can say. I think it provides a good overview of European rules on anticompetitive collusion, and of the ways in which algorithms may influence the enforcement of those rules.

The recommendations are cautious, to the point of not addressing the elephant in the room – whether the requirement of a ‘meeting of minds’ makes sense in an algorithmic world.

I was also surprised by how the proposals focus on private enforcement. While I think that the impact of procedural and evidentiary rules on substantive competition law is seriously underestimated, I do not see how the Commission’s proposals concerning private enforcement would deter algorithmic collusion without a change to the substantive rules, which must be applied in private and public enforcement alike. In particular, I fail to see how a presumption of damage can assist in preventing anticompetitive outcomes, since the main challenge is demonstrating that algorithmic collusion infringes competition law in the first place. Without such a finding that an infringement occurred, there would be no ground for a claim for damages and, as a result, the presumption of damages would not operate.


