Competition authorities have begun to scrutinise algorithmic market conduct and to intervene where algorithms risk distorting competition. Indeed, the collusive potential of algorithms and algorithm-driven resale pricing have already been the subject of enforcement. However, it remains unclear whether competition law, in its present form, has the necessary tools and techniques to control algorithms adequately.
This article, available here, looks at what other areas of the law, which are more advanced in this respect, can teach competition law.
Its second section looks at how financial markets regulation and data protection law deal with algorithm-based market activity.
Financial markets were among the first to deploy algorithms broadly and intensely. As a result, financial market regulation developed a comparatively detailed set of rules on algorithmic trading early on. European data protection law is another area that already has in place certain elements of a legal framework for algorithmic (market) activity. This includes the General Data Protection Regulation (GDPR) and the ePrivacy Regulation.
These two regulatory regimes share a number of commonalities as regards their treatment of algorithms.
– One can identify provisions aiming at documenting and ensuring transparency in the use of algorithms. This includes notifying authorities or clients/users that a company is engaged in algorithmic activity. The notified entities may, in turn, request information on this activity.
– Prevention and deterrence of adverse effects generated by algorithmic market activity are the subject of another set of rules. For example, data protection rules not only protect the data subject from decisions based solely on automated processing, but also require ex ante impact assessments for the processing of data that poses high risks to a natural person’s rights and freedoms. Ex ante testing and impact assessment are also extensively deployed by financial market regulation. For example, regulatory standards typically require monitoring exercises to be carried out by investment firms engaged in algorithmic trading and by trading venues.
– Where prevention has failed, ex post intervention can become necessary. Financial market regulations require constant monitoring of algorithmic transactions, the deployment of ‘kill functionalities’ permitting the cancellation of such transactions as an emergency measure, and the implementation of business continuity arrangements preventing the breakdown of business when disruptive events occur. Data protection rules also provide for enforcement and for significant penalties for their infringement, in addition to containing avenues for victims to obtain redress.
The third section sketches three prototypical competitive concerns as regards algorithms.
A first concern focuses on resale price maintenance, which amounts to a ‘restriction of competition by object’ in Europe. In recent decisions, the European Commission dealt with algorithm-driven resale price maintenance. In some cases, manufacturers used algorithms to monitor the resale prices of online retailers, imposing or threatening sanctions if the retailers diverged from the agreed resale prices. In other cases, retailers used pricing algorithms themselves to monitor their competitors. While the former practice is often deemed anticompetitive, the latter is typically pro-competitive inasmuch as it increases the intensity of price competition in the market.
A second concern is with algorithmic collusion. This can take multiple forms. The use of identical pricing algorithms by competitors is not unlawful as such, but it can help competitors to align their prices unlawfully as part of a joint strategy for reducing competitive pressure. Competitors can also jointly implement a ‘hub and spoke’ cartel, for instance by delegating the setting of prices (and potentially other conditions) to a central, algorithmic agent. Signalling strategies employ algorithms to exchange concealed information about (planned) market behaviour, e.g. by exchanging pricing data which is being registered and possibly agreed upon. In practice, all these practices amount to old-fashioned cartels, and the main challenges raised by algorithms relate to the difficulty of assessing the likelihood of collusion, detecting it in specific cases, and assigning liability. A particular challenge in this regard lies in deciding whether to assign liability to the controllers of ‘sophisticated’ algorithms, who may never themselves have decided to collude. Here, degrees of control over, and knowledge of, potential market outcomes could matter for determining whether those controllers infringed competition law.
A related concern in this area relates to tacit collusion. So far, competition law has broadly tolerated the detrimental economic effects of tacit collusion. This is at least partly because the market conditions under which it can occur are rather demanding, and thus the negative effects on consumer welfare are limited. However, the increased use of algorithms may change this assessment in several ways. Algorithms can rapidly analyse large amounts of data and, consequently, make it easier to coordinate the behaviour of several market players. Despite their potential to increase competition on other dimensions, algorithms can be a catalyst for the increased occurrence of collusion. If, in addition, (deep learning) algorithms were to make tacit collusion more likely as well, competition law’s present approach towards tacit collusion may have to be reassessed.
A last concern relates to the use of algorithms by dominant market players. This has led to enforcement in cases such as the Bundeskartellamt’s Lufthansa investigation or the European Commission’s Google Shopping decision.
The article’s last section discusses whether competition law needs to be updated to deal with algorithms.
Learning from other areas of the law will help competition law to monitor algorithmic market activity to ensure that it leads to beneficial, dynamically efficient outcomes. The authors suggest a number of steps that competition law can take to this end.
First, competition authorities should, in line with what financial authorities have done, improve the factual and analytical foundation on which they base their decisions and policies. This could include pursuing market studies or inquiries, carrying out testing exercises regarding algorithms or AI systems, and generally building up technical expertise in this area.
A second step would be to require, in line with recent proposals concerning data protection, that undertakings structure their digital tools so that they operate in a pro-competitive manner (pro-competitiveness by design). Furthermore, the pro-competitive configuration of an algorithm should form the pre-installed standard configuration (pro-competitiveness by default). The real challenge will be, of course, to identify what the design and the default should be in complex scenarios, particularly concerning deep learning algorithms. The setting of standards for algorithm design by stakeholder-based organisations, potentially including certification to demonstrate compliance by design and default, could be a promising mechanism to address this difficulty.
Some have suggested that firms should be required to submit their algorithms to the respective competition authority for ex ante analysis and clearance. An alternative, less intrusive ‘results-based approach’ would entitle authorities to intervene where they detect anticompetitive market outcomes, even if they do not (immediately) manage to prove flaws in algorithmic design or an intent to provoke anticompetitive effects. The risk here is that, in the absence of limiting concepts, this may generate excessively strict liability.
Finally, the development of retail algorithms could come at the price of reduced consumer autonomy. Balancing such risks may require specific customer information rights and company transparency duties similar to those stipulated by data protection regulations. A core element of such duties could be the obligation to explain the workings of an algorithm thoroughly regarding its impact on the customer, especially where it is designed to replace customer choice.
Instinctively, I like the idea of looking at other areas of economic regulation to derive insights for how competition law should deal with algorithmic practices. I think the authors do an interesting job in this regard, but I would have enjoyed a clearer articulation of the principles they derive from existing regulation and of how they would apply to competition law.
There are a number of topics where I would quibble with the authors’ assumptions. For example, I am not sure that the reasons why competition law does not sanction tacit collusion relate solely to the limited set of circumstances in which it can occur. I always thought it was mainly because tacit collusion is a natural market equilibrium – and virtually indistinguishable from perfect competition in terms of price parallelism – and that the challenge was to distinguish such an equilibrium from a collusive outcome. The authors’ assumption leads them to look solely at whether tacitly collusive outcomes are likely to be detrimental to consumer welfare – which of course they are – when I believe the more appropriate questions are whether we should look for new indicators of collusion or change the rules on the allocation of liability when dealing with algorithm-induced market equilibria.
I was also puzzled by the authors’ continued reliance on Google Shopping as an instance where a competition authority dealt with algorithms by focusing solely on their outcomes. I think it is perfectly arguable that, if algorithms lead to behaviour with anticompetitive effects, this may be deemed anticompetitive; but the anticompetitive practice identified by the Commission was that Google had exempted its own service from its algorithm, and it was the outcome of this – not the design of the algorithm or its effects – that was anticompetitive.
In any event, this is an interesting paper. While clearly Euro-centric, it contains a number of insights that can also apply to other jurisdictions.