This paper, available here, combines machine-learning techniques with statistical screens computed from the distribution of bids in tenders within the Swiss construction sector to predict collusion through bid-rigging cartels. The approach correctly classifies 84% of 584 bidding processes as collusive or non-collusive. The paper is structured as follows:

Section 2 reviews the data set.

The data includes four bid-rigging cartels, comprising 584 tenders in the Swiss road construction sector with 3,799 bids and a market volume of roughly 370 million Swiss francs. For each cartel, the dataset allows one to observe collusive cartel periods as well as competitive post-cartel periods. As such, the dataset is ideal for detecting bid-rigging cartels, in that it reflects both periods of undisputed collusion and periods of undisputed competition.

Collusive agreements were comparable across the four bid-rigging cartels. In all four, the procurement procedure was a first-price sealed-bid auction. Collusion required two steps: first, designating the winner of the tender and, second, determining the price of the designated winning bid.

Section 2 also reviews a number of cartel screens.

A screen is a statistical tool to verify whether collusion likely exists in a particular market and to flag unlawful behaviour through economic and statistical analysis. Using a broader definition, screens comprise all methods designed to detect markets, industries, or firms where there is increased likelihood of collusion.

The literature typically distinguishes between behavioural and structural screens. Behavioural screens aim to detect abnormal firm behaviour whereas structural screens investigate the characteristics of entire markets that may favour collusion. Behavioural screens are divided into complex and simple methods. Simple screens analyse strategic variables such as prices and market shares to determine whether firms depart from competitive behaviour. While there are many applications of simple screens to various markets, their application to bid-rigging cases is rather rare. Complex methods, more commonly deployed for bid rigging, generally use econometric tools or structural estimation models to detect suspicious outcomes.

This paper proposes the application of simple screens combined with machine learning to detect bid-rigging cartels. It considers several statistical screens constructed from the distribution of bids in each tender to distinguish between competition and collusion. Because each screen captures a different aspect of the distribution of bids, the combined use of different screens potentially allows one to identify different types of bid manipulation.
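The paper's exact screen definitions are not reproduced here, but two screens commonly constructed from the distribution of bids within a single tender – the coefficient of variation (a dispersion screen) and the relative distance between the two lowest bids (a cover-bidding screen) – can be sketched as follows; the bid values are hypothetical:

```python
import numpy as np

def coefficient_of_variation(bids):
    """Dispersion screen: sample std of bids divided by their mean.
    Collusive tenders often show unusually low bid dispersion."""
    bids = np.asarray(bids, dtype=float)
    return bids.std(ddof=1) / bids.mean()

def relative_distance(bids):
    """Cover-bidding screen: gap between the two lowest bids,
    scaled by the std of the remaining (losing) bids.
    Bid rigging tends to inflate this gap."""
    b = np.sort(np.asarray(bids, dtype=float))
    losing_std = b[1:].std(ddof=1)
    return (b[1] - b[0]) / losing_std

# One tender with five bids (hypothetical values, in CHF)
tender = [520_000, 548_000, 551_000, 553_000, 556_000]
print(f"CV: {coefficient_of_variation(tender):.4f}")
print(f"RD: {relative_distance(tender):.4f}")
```

Because each screen captures a different feature of the bid distribution, they are natural candidates for the explanatory variables fed into the machine-learning stage.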

Section 3 presents machine-learning techniques along with the empirical results.

This section discusses machine-learning methods to train and test models for predicting bid-rigging cartels based on the screens presented in the previous section. Specifically, two approaches are considered: lasso logit regressions and ensemble methods. For the lasso regression, the authors create a training sample containing 75% of the observations in the dataset, which is used to estimate the model's parameters. The remaining 25% of observations provide a test sample used to evaluate how well the logit regression predicts out of sample. The ensemble method uses the same data structure but, instead of relying on a logit regression, produces estimates on the basis of a weighted average of three machine-learning algorithms: bagged classification trees, random forests, and neural networks. The first two algorithms rely on tree methods, i.e. recursively splitting the data into subsamples in a way that minimises the sum of squared differences between actual incidences of collusion and the collusion probabilities within the subsamples. Both methods estimate the trees in a large number of samples repeatedly drawn from the original data and obtain predictions of collusion by averaging over the tree (or splitting) structure across samples. One difference, however, is that bagging considers all explanatory variables as candidates for further data splitting at each step, while random forests use only a random subset of the variables to prevent correlation of trees across samples. Finally, neural networks aim at fitting a system of functions that flexibly and accurately models the influence of the explanatory variables on collusion. Cross-validation in the training sample determines the optimal weight each of the three machine-learning algorithms receives in the ensemble method.
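As a minimal sketch of the lasso logit step (not the authors' exact specification), the following trains an L1-penalised logistic regression on synthetic screen data with the paper's 75/25 train/test split; the feature names and the data-generating process are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's data: 584 tenders, a few
# bid-distribution screens as features, a binary collusion label.
n = 584
X = rng.normal(size=(n, 4))        # e.g. CV, kurtosis, rel. distance, skewness
signal = 1.5 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.5, size=n)
y = (signal > 0).astype(int)       # 1 = collusive, 0 = competitive

# 75% training sample / 25% test sample, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# L1-penalised ("lasso") logit; C controls the penalty strength
lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
lasso_logit.fit(X_tr, y_tr)

accuracy = lasso_logit.score(X_te, y_te)
print(f"correct classification rate: {accuracy:.2f}")
```

The ensemble step would replace the single logit with a weighted average of bagged trees, random forests, and a neural network, with the weights chosen by cross-validation on the training sample.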

The lasso regression exhibits a correct classification rate of 84%, almost identical to that of the ensemble method; the overall share of incorrect predictions therefore amounts to 16% for either method. When cartel and competitive periods are considered separately, lasso performs slightly better at correctly classifying collusive tenders (86%) than the ensemble method (83%). The latter, however, is moderately better at classifying competitive tenders (85%) than the lasso regression (82%). Put differently, the lasso regression produces 18% false positives and 14% false negatives, whereas the ensemble method produces 15% false positives and 17% false negatives.
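The false positive and false negative shares quoted here follow directly from the per-class correct classification rates; a quick check:

```python
# Reported per-class correct classification rates (lasso / ensemble)
rates = {
    "lasso":    {"collusive": 0.86, "competitive": 0.82},
    "ensemble": {"collusive": 0.83, "competitive": 0.85},
}

for method, r in rates.items():
    false_negative = 1 - r["collusive"]    # collusive tenders missed
    false_positive = 1 - r["competitive"]  # competitive tenders flagged
    print(f"{method}: FP {false_positive:.0%}, FN {false_negative:.0%}")
```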

Section 4 discusses several policy recommendations.

The paper demonstrates the usefulness of simple screens combined with machine learning. This method has several advantages. First, data requirements are comparably low, which allows for an efficient use of resources. Second, the method allows for the identification of potential cartels without requiring firm-level data, and hence without risking the secrecy under which competition agencies must operate when trying to detect collusion. Third, the combination of screening and machine learning allows one to make better use of past cartel data to detect collusion in future data. Because the detection method based on simple screens is inductive, it needs to be verified in the empirical context at hand. As competition agencies typically have access to data from former cases, they could easily apply the suggested screens to check their appropriateness in different industries or countries. Moreover, if the data is large enough, an agency might directly estimate its own predictive model based on screening and machine learning to identify suspicious tenders. The trained predictive models obtained by machine learning could then be applied to newly collected data in order to screen tenders for bid rigging ex ante.

Once suspicious tenders have been identified, there appear to be two options for next steps. The first consists of immediately opening an investigation: if the detection method classifies a large share of tenders as collusive, competition agencies might want to initiate a deeper investigation immediately. A second option would be to substantiate the initial suspicion before opening a formal investigation. This could be done in a number of ways. First, the firms participating in the suspicious tenders could be more closely examined to see whether there is a specific group logic. Second, geographical analysis may help to identify bid-rigging cartels operating in particular regions. Third, a competition authority can deploy a bid rotation screen. If one can identify a specific group of firms that regularly participate in suspicious tenders (e.g. in some region) and find that contract placement among them follows a rotational scheme, this will support the initial suspicion.
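A bid rotation screen of the kind mentioned here could, in a very simplified form, look as follows; the firm names and win sequence are hypothetical:

```python
from collections import Counter

def rotation_signs(winners):
    """Crude rotation screen over a chronological list of tender
    winners within a suspected group of firms. Flags (a) evenly
    spread win shares and (b) few back-to-back repeat wins --
    both consistent with a rotational contract-placement scheme."""
    shares = Counter(winners)
    even_shares = max(shares.values()) - min(shares.values()) <= 1
    repeats = sum(1 for a, b in zip(winners, winners[1:]) if a == b)
    return {"even_win_shares": even_shares,
            "consecutive_repeat_wins": repeats,
            "firms": len(shares)}

# Hypothetical sequence of winners in a suspicious region
winners = ["A", "B", "C", "A", "B", "C", "A", "B"]
print(rotation_signs(winners))
```

A real screen would also weight wins by contract value, since rotation in volume rather than in the count of contracts is equally plausible.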

From a policy perspective, incorrectly classifying cases of non-collusion as collusion (false positives) and unnecessarily starting an investigation might be relatively more harmful than incorrectly classifying cases of collusion as non-collusive (false negatives), and thus not detecting a subset of bid-rigging cartels. This risk can be minimised by adjusting the probability threshold applied to the screen, but doing so comes at the cost of reducing the likelihood of detecting actual cartels. Competition agencies therefore need to trade off the likelihood of false positives and false negatives appropriately to derive an optimal rule for the probability threshold deployed in the screen.
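The trade-off described here can be illustrated by varying the probability threshold applied to a model's predicted collusion probabilities (all data hypothetical):

```python
import numpy as np

# Hypothetical predicted collusion probabilities and true labels;
# any trained model from the screening stage would supply these.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)
p_hat = np.clip(y_true * 0.6 + rng.normal(0.2, 0.2, size=200), 0, 1)

def error_rates(y_true, p_hat, threshold):
    """False positive / false negative rates at a given threshold."""
    y_pred = (p_hat >= threshold).astype(int)
    fp = ((y_pred == 1) & (y_true == 0)).sum() / max((y_true == 0).sum(), 1)
    fn = ((y_pred == 0) & (y_true == 1)).sum() / max((y_true == 1).sum(), 1)
    return fp, fn

# Raising the threshold trades false positives for false negatives
for t in (0.3, 0.5, 0.7):
    fp, fn = error_rates(y_true, p_hat, t)
    print(f"threshold {t:.1f}: FP {fp:.2f}, FN {fn:.2f}")
```

A stricter (higher) threshold flags fewer competitive tenders by mistake but misses more genuine cartels, which is exactly the policy trade-off the authors highlight.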


This paper is well beyond my ability to comment on – not only am I not an economist, I am also only superficially acquainted with machine-learning algorithms. However, even from the depths of my ignorance, I believe that I could have benefited from a more detailed discussion of the various screening techniques deployed in the paper. In any event, I really enjoy how the authors draw practical recommendations from the very theoretical work they pursue. As such, I hope that this paper is read – and, if useful, adopted – by competition enforcers.


