
Algorithmic Collusion and Liability: Reassessing Antitrust Enforcement in Autonomous Markets

Authored By: Netti Venkata Darshika

Damodaram Sanjivayya National Law University

Abstract

The spread of autonomous pricing algorithms is rapidly transforming digital markets, delivering unprecedented efficiency while posing novel challenges to antitrust enforcement. This paper examines the severe gap between conventional antitrust theory and the reality of algorithmic collusion. Traditional U.S. and E.U. competition law, which rests on identifying a "meeting of the minds" or an explicit agreement, is structurally ill-equipped to regulate collusion that takes the form of either tacit coordination through parallel learning or wholly emergent, autonomous convergence on supracompetitive equilibria without human intent.

This article argues for a paradigm shift in enforcement philosophy. We suggest moving beyond the strict emphasis on intent and agreement toward an outcomes-based liability model that holds firms liable for anti-competitive market outcomes, regardless of any subjective malice on the part of the programmer. We further examine alternative liability models, including corporate facilitation liability and algorithmic product liability, under which the party that deploys the pricing logic bears responsibility in market structures that permit a collusive equilibrium. The article also explains the processes by which ostensibly independent algorithms, each optimising its own profits, may arrive at highly coordinated, non-competitive market outcomes reminiscent of a traditional cartel, yet without the explicit "smoking gun" of human collusion.[1]

II. Nature of Algorithmic Collusion 

In this part I outline the range of collusive behaviours that can develop when autonomous pricing algorithms are deployed, and argue that the danger to competition extends well beyond the customary notion of an explicit agreement. Algorithmic collusion is not a single phenomenon: it appears along a spectrum characterised by the degree of human intent and the coordination mechanism involved, thereby undermining the basic assumptions of antitrust law.
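To make the emergent end of this spectrum concrete, the following is a minimal, purely illustrative sketch (not drawn from the article or from any enforcement case; all parameters are invented) of how two independent reinforcement-learning pricing agents can interact in a repeated duopoly. Each agent observes only last period's public prices and maximises its own profit; there is no communication channel between them. Whether prices settle above the competitive benchmark depends on the parameters, which is precisely the evidentiary difficulty the article describes.

```python
import random

PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]   # hypothetical price grid; marginal cost 1.0, so 1.0 is competitive
N = len(PRICES)
ALPHA, GAMMA, EPS = 0.15, 0.95, 0.05  # learning rate, discount factor, exploration rate

def profit(p_own, p_rival, cost=1.0):
    """Winner-takes-all duopoly: the cheaper firm serves the market, ties split it."""
    if p_own < p_rival:
        return p_own - cost
    if p_own == p_rival:
        return (p_own - cost) / 2
    return 0.0

def simulate(periods=100_000, seed=0):
    rng = random.Random(seed)
    # Q[i][own_last][rival_last][action]: each agent's private value table
    Q = [[[[0.0] * N for _ in range(N)] for _ in range(N)] for _ in range(2)]
    last = [0, 0]          # last period's price indices
    tail = []              # record of late-stage prices
    for t in range(periods):
        acts = []
        for i in (0, 1):
            if rng.random() < EPS:                      # occasional experimentation
                acts.append(rng.randrange(N))
            else:                                       # otherwise act greedily
                row = Q[i][last[i]][last[1 - i]]
                acts.append(row.index(max(row)))
        for i in (0, 1):
            a, a_riv = acts[i], acts[1 - i]
            r = profit(PRICES[a], PRICES[a_riv])
            best_next = max(Q[i][a][a_riv])             # value of the new joint-price state
            s_own, s_riv = last[i], last[1 - i]
            Q[i][s_own][s_riv][a] += ALPHA * (r + GAMMA * best_next - Q[i][s_own][s_riv][a])
        last = acts
        if t >= periods - 1000:
            tail.append((PRICES[acts[0]], PRICES[acts[1]]))
    return sum(p0 + p1 for p0, p1 in tail) / (2 * len(tail))

avg_price = simulate()
print(f"average price over final 1,000 periods: {avg_price:.2f} (competitive benchmark: 1.00)")
```

Any average materially above 1.00 would be "supracompetitive", yet nothing in either agent's code or objective function mentions the rival, which is why intent-based doctrines struggle with this fact pattern.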

III. Weakness of Existing Antitrust Law 

The main argument of this article rests on the fact that existing antitrust models were developed for a traditional industrial economy, in which collusion presupposes human actors. The autonomous, digital character of algorithmic collusion, however, undermines several foundational doctrines, creating a significant enforcement gap, particularly under the U.S. Sherman Act and Article 101 TFEU in the EU.

3.1. The Erosion of the ‘Agreement’

Modern cartel enforcement turns on the requirement of an agreement or concerted practice among independent undertakings.[2] Section 1 of the Sherman Act in the United States requires a contract, combination, or conspiracy in restraint of trade. This standard is readily satisfied in Type 1 (Express Collusion), where human communication, such as emails or chat logs, furnishes the necessary evidence of a meeting of the minds.

But the legal fiction of "agreement" breaks down when confronted with Type 2 (Tacit) and Type 3 (Emergent) collusion. This is especially problematic when algorithms converge on a stable, supracompetitive outcome independently and autonomously: firms deploying such algorithms can plausibly claim that their managers exercised independent discretion in specifying the algorithm's objective function, and that the resulting coordination is an unintended consequence of market structure and the algorithm's rational profit maximisation. To establish collusion, an enforcer must prove that the programmer intended the collusive effect, rather than merely the general goal of optimisation, which is virtually impossible without the complete source code and training data. Similarly, in the European Union, the notion of a "concerted practice" under Article 101 TFEU is broader than a formal agreement, but it still requires direct or indirect contact between undertakings aimed at influencing market behaviour. In purely automated parallelism, that contact is limited to the exchange of observable price signals in the marketplace, which courts have traditionally been reluctant to categorise as concerted action without additional, non-market-based evidence.
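A hypothetical sketch (all numbers invented for illustration) shows how purely unilateral rules can produce stable parallel pricing. Suppose each firm independently adopts a "match downward, never undercut first" rule, observing nothing but the rival's public price:

```python
def matcher(own_price, observed_rival_price):
    """Unilateral rule: match the rival if they are cheaper; otherwise hold price."""
    return min(own_price, observed_rival_price)

# Both firms had listed 10.0; firm A then tries a one-off undercut to 9.0.
p_a, p_b = 9.0, 10.0
for day in range(5):
    # each firm observes yesterday's rival price and applies its own rule
    p_a, p_b = matcher(p_a, p_b), matcher(p_b, p_a)

print(p_a, p_b)  # -> 9.0 9.0: the cut is matched within one period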

3.3. Barriers to Investigation and Evidence Gathering 

Proprietary Protection: Companies strongly resist disclosure of source code and training data on trade-secret and intellectual-property grounds. Without the ability to sandbox or recreate the algorithm's learning environment, regulators cannot prove that the algorithm was either designed as, or evolved into, a collusive agent. Meeting the evidentiary burden is thus practically obstructed by legitimate commercial-secrecy concerns.[3]

Overall, existing legal precedent requires human intent while the technology replaces it with autonomous learning, giving companies an almost unassailable defence against conventional cartel charges.

IV. Reevaluating Liability: Who is to Blame?

Current antitrust doctrine is insufficient to capture these new types of algorithmic collusion, forcing a reevaluation of liability frameworks. Rather than persisting with the traditional, human-centred intent requirement, an outcome-based and design-based approach offers a more appropriate analytical lens. This shift requires new legal principles that hold the corporate entity accountable for the conduct of its autonomous digital agents.

4.1. Corporate/Principal Liability  

The most practical option is to attribute the algorithm's behaviour to the corporation itself, treating the software not as a separate entity but as part of the business. This position rests on the long-established principle of vicarious liability, under which a company is responsible for the acts of its employees or agents. In this context, the algorithm is treated as a sophisticated "employee" executing the firm's commercial policy under the company's direction or control.

This approach is most useful for Type 2 (Tacit/Coordination) collusion, where the programmer defines the objective function (e.g. profit maximisation) and the firm retains final control over the algorithm's deployment and parameters. The argument is that, by deploying a pricing agent known to operate in a concentrated and highly transparent market, the firm facilitated a non-competitive outcome. This is supported by the European Commission's Horizontal Cooperation Guidelines, which state that companies cannot avoid liability merely because their prices were set algorithmically.

4.2. Strict Liability and Algorithmic Product Liability 

For the most difficult case, Type 3 (Emergent) collusion, a more stringent, outcome-based model of liability is necessary. By analogy with product liability law, the algorithm itself can be regarded as a product that the firm places on the market. Where the algorithm's emergent behaviour causes market harm, such as supra-competitive pricing, the company should be strictly liable for placing a defective product on the market, regardless of fault or intent.

The EU initiatives to revise the Product Liability Directive and to introduce an Artificial Intelligence Liability Directive (AILD) signal a shift towards non-contractual, fault-based claims for damages caused by AI systems. Although these efforts focus mainly on physical harm, the underlying principle, that the developer or operator is responsible for the harmful output of an autonomous system, can readily be extended to economic harm in the antitrust context.

4.3. Liability of Programmer/Designer 

Individual programmer liability should be limited to instances of express collusion (Type 1), where the programmer knowingly coded a human cartel agreement, as in the Topkins case. It is neither appropriate nor effective to impose liability on programmers for Type 3 (Emergent) collusion, since the anti-competitive effect is an emergent property of a complex system rather than the result of malicious or careless coding. Focusing enforcement on individual programmers risks stifling innovation without addressing the systemic market failure created by the autonomous system itself. In such cases, the corporate entity that enjoys the illegal gains should be held liable.

VI. Conclusion 

The current framework is poorly placed to demonstrate intent or to pierce the veil of technical obscurity surrounding the algorithmic "black box".

This article has contended that a scope of enforcement limited to prosecuting Type 1 (Express) collusion is woefully inadequate. The future of antitrust enforcement lies in a legal paradigm shift that acknowledges the reality of automated parallelism. This change would involve implementing a results-based corporate strict-liability framework and institutionalising compulsory transparency and auditing of algorithms in major digital markets. By shifting to a facilitation-and-effects-centred mandate, competition authorities can compel firms to make their digital agents competitive rather than collusive, preserving consumer welfare in the autonomous economy.

Reference(S):

[1] Ulrich Schwalbe, 'The Antitrust Treatment of Algorithms: The Next Chapter in Competition Law History?' (2018) 14(4) Journal of Competition Law and Economics 517.

[2] Treaty on the Functioning of the European Union, art 101(1).

[3] OECD, Algorithms and Collusion: Competition Policy in the Digital Age (OECD Publishing 2017) 6.
