In the last decade, antitrust agencies have shifted their focus to deal with issues arising in the digital economy. While there are passionate discussions about the competitive effects of business practices implemented by digital players, the use of technological tools to address such practices remains very little debated. This disconnect between diagnosis and treatment is becoming problematic, as antitrust agencies struggle to remedy anticompetitive practices in increasingly complex and fast-paced markets.
The present article pursues three goals. First, it introduces computational antitrust, a new domain of legal informatics that seeks to develop computational methods for the automation of antitrust procedures and the improvement of antitrust analysis. Second, it explores how agencies, policymakers, and market participants can benefit from computational antitrust. Lastly, it sets out a research agenda for the years ahead.
Section I explains what computational antitrust is.
Computational law is a “branch of legal informatics concerned with the mechanisation of legal analysis (whether done by humans or machines)”. The main challenge of such an endeavour is that law, being the product of natural languages, cannot be fully codified. Despite this, multiple computational tools are currently deployed in legal fields – such as data mining, machine learning, deep learning simulations, natural language techniques, social epidemiology, document management, legal text analytics, computational game theory, network analysis, and information visualisation.
These tools and methods are still rarely used in antitrust, and competition agencies are only now acquiring the expertise needed to deploy them. One would nonetheless expect computational tools to be widely adopted, as they allow competition enforcement to integrate insights from economic theory, business and management science, computer science, statistics, and behavioural science. Accordingly, one should explore where and how to develop computational antitrust, a specialist field of computational law that seeks to improve antitrust analysis and procedures with the assistance of legal informatics.
Section II looks at the potential of computational antitrust.
Since new technologies—such as powerful AI systems and blockchain—can help market players implement and sustain anticompetitive practices, the use of computational tools as a proactive response is becoming necessary. Computational tools will enable agencies to process data more efficiently and to better understand anticompetitive practices, thereby enhancing agencies’ analytic capacities; allow large data sets to be compared across different periods and industries; and provide companies with the means to conduct more effective internal audits. In particular, computational antitrust could support new forensics capabilities and enhance the ability of agencies to detect anticompetitive infringements. The development of new market screening tools could help identify anticompetitive patterns and behaviours, while natural language techniques could automate the identification of illegal practices and intentions when analysing companies’ internal documents.
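As an illustration of the kind of natural-language screening described above, here is a minimal sketch that flags internal documents containing suggestive phrases. The patterns and e-mails are invented for illustration; real enforcement tools would rely on trained language models rather than a fixed keyword list.

```python
import re

# Hypothetical phrases an agency might treat as suggestive of collusive
# intent. Purely illustrative: a production screening tool would use
# trained NLP classifiers, not hand-written patterns.
SUSPECT_PATTERNS = [
    r"fix(?:ing)?\s+prices?",
    r"divide\s+(?:the\s+)?market",
    r"not\s+to\s+undercut",
]

def screen_document(text: str) -> list[str]:
    """Return the suspect patterns found in an internal document."""
    return [
        pattern
        for pattern in SUSPECT_PATTERNS
        if re.search(pattern, text, flags=re.IGNORECASE)
    ]

# Invented sample e-mails for illustration.
emails = [
    "Let's agree not to undercut each other in Q3.",
    "Quarterly report attached for review.",
    "We should divide the market by region, as discussed.",
]
flagged = [e for e in emails if screen_document(e)]
```

Even this crude approach shows why automated review scales where manual review does not: the same function runs unchanged over two e-mails or two million.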
Computational antitrust can also play a role in merger control. Computational tools can assist in the timely processing of the enormous amounts of data often submitted by the merging parties; the European Commission, for instance, examined over 2.7 million documents in the Bayer/Monsanto merger. Such tools can also help address asymmetries flowing from the fact that the merging parties control the data, and provide clarity about how the agency assesses that data. One could imagine a systematised communication channel between companies and antitrust agencies that ensures companies send all information in specified databases to agencies in real time. One could also use blockchain to ensure database integrity.
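The database-integrity idea can be sketched with a simple hash chain, the core mechanism behind blockchains: each submission's hash incorporates the previous one, so altering any earlier record invalidates every later hash. The records below are invented for illustration.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous hash, blockchain-style."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list[dict]) -> list[str]:
    """Chain the records: each hash depends on all earlier records."""
    hashes, prev = [], "0" * 64  # arbitrary genesis value
    for rec in records:
        prev = record_hash(rec, prev)
        hashes.append(prev)
    return hashes

def verify_chain(records: list[dict], hashes: list[str]) -> bool:
    """Recompute the chain and compare against the stored hashes."""
    return build_chain(records) == hashes

# Hypothetical merger filings submitted to an agency.
submissions = [
    {"doc_id": 1, "filed": "2021-03-01"},
    {"doc_id": 2, "filed": "2021-03-02"},
]
chain = build_chain(submissions)

# Retroactively editing an earlier filing breaks verification.
tampered = [{"doc_id": 1, "filed": "2021-02-28"}, submissions[1]]
```

A design like this would let an agency detect after-the-fact tampering with submitted data without having to trust the submitting party's own systems.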
Finally, computational antitrust can improve antitrust policy in general. Such tools will improve retrospectives of antitrust investigations, merger control decisions, and public policy initiatives. Further, computational antitrust may enable more effective monitoring of merger remedies and antitrust sanctions, and facilitate estimates of the impact of competition intervention on consumers.
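One common way to estimate the impact of an intervention, sketched here with invented price data, is a difference-in-differences comparison: the price change in a market subject to intervention is compared against the change in a comparable untouched market.

```python
def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

def did_estimate(
    treated_before: list[float],
    treated_after: list[float],
    control_before: list[float],
    control_after: list[float],
) -> float:
    """Difference-in-differences: (treated change) minus (control change)."""
    return (mean(treated_after) - mean(treated_before)) - (
        mean(control_after) - mean(control_before)
    )

# Hypothetical average prices before and after an antitrust intervention.
effect = did_estimate(
    treated_before=[10.0, 10.2, 9.8],
    treated_after=[9.0, 9.1, 8.9],
    control_before=[10.0, 10.1, 9.9],
    control_after=[10.0, 10.0, 10.0],
)
# Here the intervention is associated with a price drop of about 1.0.
```

Real retrospectives involve far richer econometrics, but the logic of netting out market-wide trends against a control group is the same.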
Section III considers the challenges facing computational antitrust.
Some challenges are common to (the use of) computational methods in all legal fields, particularly those regarding opacity in decision-making and the potential for computational resources to tilt the playing field. Transparency could ensure that computational methods do not undermine existing legal guarantees and due process. Further, procedural fairness ought to be maintained when sophisticated tools are available to only one of the parties to a dispute.
Other challenges are specific to antitrust. One such challenge concerns the development of the right tools. Coding antitrust laws and rulings to create efficient methods for (consistently) assessing antitrust compliance, and helping agencies automate enforcement and merger control procedures, are obvious goals for computational antitrust. Another set of challenges concerns the use of these computational antitrust tools. For example, one will need to identify the data (and data structures) that companies and agencies will need to feed into enforcement tools. Further, there will be difficulties concerning the use of ‘predictive’ computational tools, which are bound to be reflected in debates about the role that computational tools should play in decision-making. One will want to discuss the extent to which these tools will be able to adopt or justify decisions, including their probative force in anticompetitive investigations and mergers.
Eventually, technical questions will get technical answers. The most critical challenge for developing computational antitrust concerns the interaction between our legal systems and technical tools. This is a human challenge. This collaboration requires coders (computer scientists, data scientists, developers) and the antitrust community (companies, policymakers, regulators) to prepare their respective fields so computational antitrust can thrive.
This is a nice issues paper that sets the scene for a workstream at Stanford on the development of antitrust tools that incorporate insights from computer science. The subject is undoubtedly topical – indeed, as I read the paper I found myself thinking that a number of these proposals are already being adopted by competition authorities around the world. Of course, the paper merely sets the stage for future work and research on how to develop and implement these tools in the future, and some of the paper’s proposals are beyond anything contemplated today. I look forward to seeing how this project evolves in the next few years.