Concentration measures play a central role in merger analysis. Existing guidelines identify various presumptions – both safe harbours and presumptions of anticompetitive effects – based on the level of the post-merger Herfindahl index and of the change that the merger induces in that index. These presumptions have a significant impact on agency decisions, especially in screening mergers for further review. However, the basis for these screens, in both form and level, remains unclear.
The authors of this paper, available here, show that there is both a theoretical and an empirical basis for focusing solely on changes in the Herfindahl index, and ignoring its level, in screening mergers for whether their unilateral effects will harm consumers. The authors also argue that the levels at which the presumptions currently are set may allow mergers to proceed that cause consumer harm.
Section 2 reviews concentration screens in various versions of the US Horizontal Merger Guidelines.
The first version of the Merger Guidelines – issued solely by the Department of Justice – appeared in 1968. Their focus was solely on preventing increases in concentration, and they proposed concentration thresholds, largely dependent on the shares of the two merging firms, that were markedly more stringent than today's. As a rule, a horizontal merger between two companies with 5% market shares each would be blocked. The DOJ's 1982 Guidelines represented a marked change: they introduced the Herfindahl index (HHI), gave much more weight to the level of market concentration and adopted much more lenient standards. For example, a merger between two firms with 5% market shares, which leads to a 50-point increase in the HHI, went from being challenged to being presumptively legal. More specifically, mergers in “unconcentrated” markets with a post-merger HHI below 1000 became unlikely to be challenged. The 1992 Horizontal Merger Guidelines, issued for the first time jointly by the DOJ and FTC, maintained these presumptions. Most recently, the 2010 revision of the Horizontal Merger Guidelines further relaxed these standards, raising the safe-harbour level of the HHI from 1000 to 1500, the threshold for considering a market highly concentrated from 1800 to 2500, and the critical levels of ΔHHI in highly concentrated markets from 50 to 100 for the safe harbour and from 100 to 200 for the presumption of harm.
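The arithmetic behind these screens is simple enough to sketch. Below is a minimal illustration (not taken from the paper) of the HHI, the "naive" merger-induced change in the HHI – naive because it holds all pre-merger shares fixed, so it equals 2·s1·s2 – and a rough classification under the 2010 thresholds described above; the function names are my own, and actual agency practice of course involves many more factors.

```python
def hhi(shares):
    """Herfindahl index: sum of squared percentage market shares (max 10000)."""
    return sum(s * s for s in shares)

def naive_delta_hhi(s1, s2):
    """Naive merger-induced change in the HHI: (s1+s2)^2 - s1^2 - s2^2 = 2*s1*s2,
    computed holding all market shares at their pre-merger levels."""
    return 2 * s1 * s2

# The example from the text: two merging firms with 5% shares each.
print(naive_delta_hhi(5, 5))  # → 50

def screen_2010(post_hhi, delta):
    """Rough classification under the 2010 Guidelines thresholds
    described above (illustrative only)."""
    if post_hhi < 1500 or delta < 100:
        return "unlikely to be challenged"
    if post_hhi > 2500 and delta > 200:
        return "presumed likely to enhance market power"
    return "potentially raises significant concerns"
```

Under this sketch, a merger adding 300 points in a market with a post-merger HHI of 1200 still lands in the safe harbour, which is precisely the kind of case the paper scrutinises.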
While the theoretical and empirical bases for the 1968 Guidelines and the 1982 changes were never clearly laid out by the agencies, the reason for the change in 2010 was made explicit: the aim was to enhance transparency by making the thresholds conform more closely to actual agency practice. In other words, no explicit economic rationale was ever offered for these thresholds.
Section 3 looks at the relationship between equilibrium concentration measures and the effect of a merger on consumer welfare.
Analysis of horizontal mergers focuses on weighing the risk of anticompetitive reductions in competition against the prospect of merger-related efficiencies. Concentration screens for mergers must therefore aim to capture, based on firms’ market shares, the likely balance of these two effects for the “typical” merger. Since, absent any efficiency gains, a horizontal merger will generally (weakly) increase prices, any merger screen aimed at preventing consumer harm that allows some mergers and blocks others must implicitly rely on some presumption of the efficiency gain that, on average, should be credited to a typical merger.
Analysing three canonical models of competition – the Cournot model of output/capacity competition in homogeneous-good industries, and the multinomial logit and constant elasticity of substitution models of differentiated-product price competition – the authors find that this critical level of merger-induced efficiencies depends on the merging firms’ market shares, but not on the market shares of non-merging firms. In fact, for mergers between symmetric firms in the Cournot model, the required synergy depends solely on the (naively computed) change in the Herfindahl index, and not at all on its post-merger level.
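One piece of this result can be seen directly from the definition of the naive ΔHHI: since it equals 2·s1·s2, it depends only on the merging firms' shares, whereas the post-merger level depends on the whole market structure. The toy comparison below (my own construction, with made-up market shares) makes the point: two markets in which the merging firms hold 10% each produce the same 200-point naive change but very different post-merger levels.

```python
def hhi(shares):
    # Herfindahl index: sum of squared percentage shares.
    return sum(s * s for s in shares)

def naive_delta_hhi(s1, s2):
    # Naive merger-induced change, holding all pre-merger shares fixed:
    # (s1 + s2)**2 - s1**2 - s2**2 == 2 * s1 * s2.
    return 2 * s1 * s2

# Two hypothetical markets: the merging firms hold 10% each in both,
# but the rest of the market is fragmented in one, concentrated in the other.
fragmented = [10, 10] + [8] * 10     # ten rivals with 8% each
concentrated = [10, 10, 60, 20]      # two large rivals

for market in (fragmented, concentrated):
    post = hhi([market[0] + market[1]] + market[2:])  # merged firm holds 20%
    print(naive_delta_hhi(market[0], market[1]), post)
# → 200 1040
# → 200 4400
```

So a screen keyed to the naive ΔHHI treats these two mergers identically, while a screen keyed to the post-merger level treats them very differently; the paper's theoretical claim is that, for unilateral effects, the former is the right focus.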
The authors also examine how the levels of required synergies depend on the merging firms’ market shares. In the Cournot model, with synergies of 3% and common levels of market demand elasticity, consumer harm occurs at merging-firm shares much like the 1968 Guidelines’ thresholds, i.e. around 5% market share each. The threshold levels of merger-induced change in the Herfindahl index are more lenient, but still restrictive, in the multinomial logit and constant elasticity of substitution models of price competition. In other words, even mergers among small firms would require substantial synergies not to harm consumers.
Section 4 investigates how mergers’ effects on consumers relate to concentration measures in one industry – brewing.
This section looks, empirically, at how the synergies required to prevent consumer harm relate to the level of, and merger-induced change in, the Herfindahl index (both naively computed) for various hypothetical (local) mergers in the U.S. brewing industry. For each such merger, the authors compute the efficiency improvement that would be required to prevent consumer harm. The results show that, as in the models of Section 3, the required efficiency gain is strongly related to the (naively computed) change in the Herfindahl index, and only weakly related to the level of the post-merger Herfindahl. The levels of the merger-induced change in the Herfindahl necessary to prevent consumer harm in these markets generally fall within the range derived in the theoretical models of Section 3. They indicate that, if the typical merger in these markets resulted in a 3% efficiency gain, many of the hypothetical mergers falling into the current safe harbour – in particular those with post-merger Herfindahl levels below 1500 – would likely harm consumers. Further, for such an efficiency gain, mergers whose post-merger Herfindahl levels lie between 1500 and 2500 and that change the Herfindahl by more than 200 would often harm consumers.
Section 5 discusses these results and how they reflect on current US Horizontal Guidelines.
The theoretical and empirical results above indicate that, when screening mergers for whether their unilateral effects will harm consumers, the merger-induced change in the (naively computed) Herfindahl index should play a much more prominent role than the index’s level. One possibility, of course, is that the prominent role ascribed to the level of concentration reflects concerns not over unilateral effects, but over coordinated effects, the likelihood of entry and/or repositioning, or other factors. Focusing on unilateral effects, another possibility is that current horizontal merger screens aim not so much to prevent consumer harm as to prevent significant consumer harm. A related possibility is that current practice reflects the need to protect consumers given a limited enforcement budget, in which case the agencies would want to focus on the mergers that are worst for consumers.
This is an interesting paper on the role of market shares – and variations in market shares – in screening horizontal mergers. I have no idea how solid the results are, but the argument sounds important. I was nonetheless disappointed by the absence of any discussion on possible measures to address the problems identified by the authors – or of why the screens have become increasingly lenient over time.
At a more fundamental level, the authors seem to assume that 3% synergies are a valid presumption of efficiencies for horizontal mergers. That may well be the case, but I was left wondering where this number comes from. After all, if the argument for allowing horizontal mergers were that they typically lead to efficiencies, one would have thought that this would be an area to which significant study has been devoted – but, as far as I can tell, that is not the case. The implication would seem to be that our tools are not as effective or as grounded in evidence as we would like to think – which, I suppose, is part of the authors’ argument in this paper.