The first European merger control regime came into force in 1990, and merger control has evolved significantly since then. This paper, available here, employs a new dataset comprising all merger cases up to 2014 that led to a decision by DG Comp (more than 5,000 individual decisions). The goal of the paper is to evaluate the time dynamics of the European Commission’s decision practice. Specifically, the paper assesses how consistently different arguments related to so-called structural market parameters – market shares, concentration, likelihood of entry and foreclosure – were deployed by the Commission over time.

The paper first estimates the probability of intervention as a function of merger characteristics. It finds that the existence of barriers to entry, increases in concentration and, in particular, the share of product markets with competitive concerns are positively associated with intervention by the Commission. After the reform of 2004, an effects-based approach centred on a clearly stated theory of harm became a cornerstone of EU merger control. Under such an approach, the reliance on structural parameters was expected to decrease, leaving space for the use of counterfactual analysis. In line with this, the paper finds that the importance of market share and concentration measures has declined over time, while the importance of barriers to entry and the risk of foreclosure has increased in DG Comp’s decision-making. However, the paper also finds that structural factors remain relevant in merger analysis.

The paper is structured as follows:

Section 2 reviews European merger control and recent studies on the empirical determinants of merger intervention.

Since 1990, mergers that meet certain turnover thresholds have been subject to mandatory ex ante review by the European Commission. Significant changes to European merger control were introduced in 2004 with the aim of bringing merger control practice closer to economic principles. The reforms included the introduction of an efficiency defence, the appointment of a chief economist, the issuing of horizontal merger guidelines and, importantly, the replacement of the substantive test for prohibiting a merger. Pre-2004, a dominance test made the creation or strengthening of a dominant position a necessary condition for the prohibition of a merger. This test was said to permit anticompetitive mergers leading to collective dominance and tacit collusion. The “substantial lessening of competition” test employed by the United States’ antitrust agencies, which addressed these concerns, was used as a template, and led to the adoption of the significant impediment to effective competition (SIEC) test.

Studies of the determinants of EU merger control consistently find that market shares, the Herfindahl-Hirschman Index (HHI) and entry barriers matter for intervention decisions. Some studies comparing the EU and the US found that, at least initially, the EU was on average tougher than the US as regards dominance, while the US seemed to be more aggressive against coordinated interaction and non-dominance unilateral-effects cases. Following the EU’s 2004 reform, EU and US practice appear to have converged.

Section 3 describes the data set, while section 4 presents the linear model and estimation results.

The data cover almost the entire population of DG Comp’s merger decisions up to 2014 – 5,109 decisions in total – obtained from the publicly accessible cases published by DG Comp on the Commission’s webpage. Most notified mergers were cleared in Phase 1. The number of mergers cleared subject to remedies increased dramatically after 1996, and oscillates between 10 and 25 per year in more recent years. The number of prohibitions varies between zero and three per year.

To allow for a more fine-grained analysis, the smallest unit of observation is not the individual merger but the particular product and geographic market combination affected by a merger (the ‘concerned market’). A merger touches on average six geographic/product markets, but the variation across individual mergers is large, ranging from one to 245 concerned markets. While the dataset refers to 5,109 merger decisions, it contains 30,995 market-level observations, one for each concerned product/geographic market. In about 58% of concerned markets, the geographic market is defined as national; in about 20%, it is considered to be EU-wide; and in only 10% it is defined as worldwide. In about 12% of cases, the geographic market definition is left open.

The authors then employ a linear probability model to estimate the relationship between merger characteristics and the European Commission’s decisions to intervene – understood as a prohibition (or the parties abandoning the transaction during Phase 2) or the imposition of remedies. DG Comp intervened in 367 out of the 5,109 merger cases in the dataset (i.e. 7% of mergers). Neither merger characteristics (full mergers and joint ventures) nor the variables indicating alternative theories of harm (foreclosure concerns, vertical mergers, conglomerate mergers) significantly affect the Commission’s decisions. Interestingly, the size of the concerned markets (national, EU-wide, worldwide) also has no effect on decisions to intervene. Instead, intervention increases with changes in market concentration, joint market shares, the existence of barriers to entry and, to a lesser extent, the possibility of foreclosure.
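For readers unfamiliar with the method, a linear probability model is simply an OLS regression of a 0/1 intervention indicator on merger and market characteristics, usually with heteroskedasticity-robust standard errors because the outcome is binary. The sketch below is only an illustration of the idea on a hypothetical dataset: the file name and variable names are mine, not the authors’, and it is not a reproduction of the paper’s actual specification.

```python
# Minimal linear probability model (LPM) sketch: OLS on a binary outcome.
# All names below (file, columns) are hypothetical, for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

# One row per concerned market; 'intervention' = 1 if the merger was
# prohibited, abandoned in Phase 2, or cleared only subject to remedies.
df = pd.read_csv("concerned_markets.csv")  # hypothetical dataset

model = smf.ols(
    "intervention ~ delta_hhi + joint_market_share + entry_barriers"
    " + foreclosure_risk + joint_venture + vertical + conglomerate",
    data=df,
)
# Robust (HC1) standard errors are the usual choice for an LPM, since the
# error variance mechanically depends on the fitted probability.
results = model.fit(cov_type="HC1")
print(results.summary())
```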

Section 5 presents the non-linear model used to explore correlations between merger characteristics and intervention.

Section 4 above explored, parametrically, the association of concentration, market shares, entry barriers and the risk of foreclosure with DG Comp’s intervention decisions. However, the correlation between these variables might differ for different types of mergers. To explore these effects, the authors employ machine-learning techniques (in particular, causal forest algorithms). The main goal of the analysis is to understand how the effect of one explanatory variable (i.e. concentration, market shares, entry barriers, or the risk of foreclosure) on an outcome variable (i.e. the competitive concerns raised by DG Comp) varies with the nature of the merger – where the nature of the merger is described by all other merger and market characteristics included in the dataset.
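The paper does not (as summarised here) spell out the software used; causal forests are commonly implemented in R’s grf package, with econml offering a Python counterpart. The sketch below uses econml’s CausalForestDML on synthetic data purely to convey the idea: estimate how the effect of a “treatment” (say, a high-concentration flag) on an outcome (competitive concerns) varies with the other merger characteristics. Every variable and number here is invented for illustration and should not be read as the authors’ setup.

```python
# Illustrative causal forest sketch (econml's CausalForestDML) on synthetic
# data; the authors' actual toolchain and variables are not known here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from econml.dml import CausalForestDML

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))            # merger/market characteristics (e.g. year, type)
T = rng.binomial(1, 0.3, size=n)       # "treatment": e.g. high-concentration indicator
# Outcome: competitive concerns, with an effect of T that depends on X[:, 0]
Y = (0.5 * T * (X[:, 0] > 0) + rng.normal(scale=0.5, size=n) > 0.4).astype(float)

est = CausalForestDML(
    model_y=RandomForestRegressor(min_samples_leaf=20),
    model_t=RandomForestClassifier(min_samples_leaf=20),
    discrete_treatment=True,
    n_estimators=500,
    random_state=0,
)
est.fit(Y, T, X=X)

# Heterogeneous effect of the structural indicator on the probability of
# concerns, as a function of the other characteristics, with 95% intervals.
cate = est.effect(X)
lb, ub = est.effect_interval(X, alpha=0.05)
```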

The model finds that the relevance of high concentration for intervention appears to follow a downward trend over the years. The correlation between concentration and competition concerns is positive and mostly significant up to 2001; it declines thereafter and becomes insignificant after 2011. Similar results arise for market shares: while the correlation between high market shares and competition concerns is positive and significant up until 2010, market shares appear to become a less important intervention criterion from the early 2000s onwards, and insignificant as of 2011. The opposite trend can be observed for barriers to entry: while the correlation between barriers to entry and competitive concerns was essentially zero up to 1997, it becomes positive, significant and increasingly important from 1998 onwards. A similar trend can be observed for the risk of foreclosure, even though the sample here is very limited, since DG Comp considered a risk of foreclosure to exist in only about 3% of concerned markets. These developments highlight the shift away from evaluating mergers on the basis of structural indicators towards a more economics-based approach.

Comment:

I have no idea what a causal forest algorithm is, which is just one of the ways in which I cannot possibly comment on this piece from a methodological standpoint. Since the paper is not yet published, and given what I know of academic reviews in Economics, I cannot assure you that there are not some concerns on this front.

In any event, I can say that this is an illuminating piece of work regarding the main drivers of intervention in merger control. The amount of work put into creating a dataset with not only all EU merger decisions but also all product/geographic markets is staggering. For such a technical paper, it is surprisingly easy to read. My only comment is that it is not clear why the database stops in 2014, and it would be very nice if the authors could extend the analysis closer to present day.
