The optimal liability framework for artificial intelligence systems remains an unresolved issue around the world. As ChatGPT and other large generative models push AI capabilities to new levels, solutions are urgently needed. The liability proposals and the proposed EU rules on artificial intelligence are inextricably linked; taken together, these acts may well create a “Brussels effect” in the regulation of artificial intelligence.
In recent years, there have been more and more proposals directly or indirectly addressing artificial intelligence systems. By publishing two proposals for AI liability directives in September 2022, the European Commission presented the final cornerstone of its approach to regulating artificial intelligence. See, e.g., Rishi Bommasani et al., “On the Opportunities and Risks of Foundation Models” (2021).
The European Commission published two proposals on September 28, 2022. The AILD proposal (AI Liability Directive) aims to harmonize procedural issues, such as the disclosure of evidence and the burden of proof, in Member States’ national liability systems. The PLD proposal (revised Product Liability Directive) suggests a general update of classic product liability, with a particular focus on digital products.
This article examines the extent to which the proposals achieve an appropriate balance, suggests specific improvements, and outlines paths for the future development of AI liability in the EU and beyond. The risks associated with artificial intelligence have been analyzed in numerous scientific articles; this section merely reiterates the main risks that need to be addressed through AI regulation.
Sustainable AI and technology regulation can and should contribute to mitigating climate change. The regulation of AI liability is based on Art. 114 TFEU. Tort liability for artificial intelligence systems will thus be two-pronged: the two proposals rest on different doctrinal approaches, but they share several common goals.
The new regime also aims to promote legal certainty in an area of law that has so far struggled with unpredictable, ad hoc adaptation to artificial intelligence systems. This, in turn, should make it easier for companies to insure liability risks and should stimulate AI adoption and innovation, especially among SMEs. The PLD proposal, along with several other EU initiatives, also aims to “minimize” the risks associated with digital products.
By contrast, the AILD proposal seeks only minimal harmonisation, leaving stricter national liability regimes intact. In some Member States, for example, strict liability applies to drivers of motor vehicles regardless of any involvement of artificial intelligence (as long as drivers can still be said to be ‘driving’). Germany has even introduced special obligations for drivers of highly automated vehicles.
An even more radical approach would be to abandon the definition of artificial intelligence entirely in favor of fully risk-based regulation. Within such a framework, which can only be sketched here, separate rules would be assigned to each specific risk. Under the AILD proposal, by contrast, the rules on disclosure of evidence and the partial relaxation of the burden of proof apply only to high-risk AI systems.
Software within the meaning of the revised PLD includes AI (which, after all, is simply specific software containing a more or less complex mathematical model). Accordingly, the PLD ultimately expands its substantive scope of application to generally cover non-high-risk AI systems (within the meaning of the Artificial Intelligence Act) and all types of software beyond AI.
Under EU product liability law, there are three types of defect: manufacturing defects, design defects, and instruction defects (failure to warn). Most AI liability cases, however, will involve design defects. The crux then lies in defining and detecting design defects in the context of artificial intelligence.
Defectiveness should generally be defined by comparison with the technical standards applicable in the relevant industry. This would encourage innovation while giving potential claimants clear indications of when a defect has occurred. Even AI models that perform worse than humans on average can be highly beneficial to society on balance.
The EU is set to introduce new liability rules for artificial intelligence systems. Under the new rules, companies will be able to avoid a finding of a design defect if three conditions are met. These criteria balance the burden on economic operators against incentives to improve beyond human performance, while effectively protecting the individuals concerned.
A potential defendant’s refusal to provide information cannot, however, be taken into account by a court as a factor in assessing potential non-compliance. The PLD proposal contains a substantially similar evidence-disclosure mechanism in Art. 8. However, its scope is much broader than that of the equivalent provision in the AILD proposal.
The recipients of disclosed evidence differ fundamentally from the recipients of typical disclosures within the meaning of the AI Act. Although the wording of Art. 3 of the AILD proposal also covers evidence beyond the AI Act, documentation drawn up under the AI Act is likely to be the focus of court proceedings for practical reasons. The requirement that claimants present facts and evidence supporting the plausibility of a compensation claim is a further weakness.
Breaches of mandatory safety requirements now clearly give rise to a presumption of defectiveness, bringing the two pillars of EU product regulation even closer together. In addition, the claimant must demonstrate a causal relationship between the defect and the damage suffered. Causation is presumed where a defect has been established and the damage is of a kind typically consistent with that defect.
To avoid these additional up-front costs, claimants must demonstrate either non-compliance with safety requirements or an obvious malfunction; both may prove difficult to establish. An obvious malfunction will be much easier to show in cases of misclassification. In all other cases, especially those involving scoring, claimants must be prepared to commission costly AI expert evidence.
The threshold of EUR 500 applicable to property damage under Art. 9(b) of the PLD proposal should be abolished. This matters because it would give economic operators the right incentive to avoid defects even where they cause only minor damage. Moreover, such damage may be spread across many injured persons, adding up to a significant aggregate loss.
Updates and upgrades, in particular, can be delivered online and now form a routine part of software maintenance. Importantly, only critical updates (security maintenance) are required in order to preserve the defense against subsequently arising defects. The PLD proposal thus establishes a non-contractual obligation to provide security updates.
Within the scope of this article, I can only outline possible scenarios and solutions; a more detailed treatment must be left to subsequent articles. The entire approach proposed by the Commission rests on the assumption that the PLD implements a regime of strict liability. A coherent and harmonized AI liability framework is needed, one that addresses key risks, facilitates enforcement, and prevents confusion.
Truly strict liability generally rests solely on causation, usually (and rightly) subject to a force majeure defense. In my view, the European Parliament was right to propose a truly strict liability regime for high-risk artificial intelligence systems. To properly define the scope of truly strict liability, a novel, key distinction must be drawn between two types of models.
The possibility of rebuttal therefore imposes no additional burden on developers or users. By functioning as a rule of information-forcing liability, the presumption incentivizes the party with the informational advantage, i.e., developers and users, to come forward with the relevant information. The AI system provider must disclose information about training data and overall reliability metrics.
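By way of illustration only: the proposals do not prescribe any particular metric, but a provider’s reliability disclosure could include, say, held-out test accuracy and a simple calibration measure. The Python sketch below assumes a binary classifier; the function name, report fields, and bin count are hypothetical choices, not content mandated by the AILD, the PLD proposal, or the AI Act.

```python
# Hypothetical sketch of reliability metrics a provider might disclose.
# Nothing here reflects legally required content; the report structure
# and metric selection are illustrative assumptions.
import numpy as np

def reliability_report(y_true: np.ndarray, y_prob: np.ndarray, n_bins: int = 10) -> dict:
    """Accuracy and expected calibration error (ECE) on a held-out test set."""
    y_pred = (y_prob >= 0.5).astype(int)
    accuracy = float(np.mean(y_pred == y_true))

    # ECE: weighted average gap between predicted confidence and
    # observed accuracy, computed per confidence bin.
    confidence = np.where(y_pred == 1, y_prob, 1.0 - y_prob)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidence > lo) & (confidence <= hi)
        if mask.any():
            bin_acc = np.mean(y_pred[mask] == y_true[mask])
            bin_conf = np.mean(confidence[mask])
            ece += mask.mean() * abs(bin_acc - bin_conf)

    return {"test_accuracy": accuracy, "expected_calibration_error": float(ece)}

# Example: metrics must be computed on data never seen during training.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)
    y_prob = np.clip(y_true * 0.7 + rng.normal(0.15, 0.2, size=1000), 0.0, 1.0)
    print(reliability_report(y_true, y_prob))
```

The design point is simply that accuracy alone can mask overconfident predictions; pairing it with a calibration measure gives courts and claimants a more honest picture of overall reliability.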
By adhering to this regime, providers significantly increase the likelihood that their model will generalize well to cases not encountered during training. An EU liability regime should specify that the damage eligible for compensation includes damage caused by unforeseeable acts or omissions of an AI system. The evidence-disclosure system envisaged in the AILD and PLD proposals is clearly intended to directly overcome this informational opacity.
In some scenarios, sustainability and efficiency may even go hand in hand. A current trend in machine learning is to use pre-trained models that have been trained on general data for a broad class of tasks (for example, image or speech recognition). These are then fine-tuned by developers working on a specific problem using domain-specific data.
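A minimal sketch of that pre-train/fine-tune pattern, assuming torchvision’s ImageNet-pretrained ResNet-18 as the general model; the five-class domain task, the dummy batch, and the hyperparameters are purely illustrative stand-ins for a developer’s domain-specific data.

```python
# Sketch of fine-tuning a pre-trained model on domain-specific data.
# The task (5 classes), batch, and learning rate are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

# Load a model pre-trained on general data (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the general-purpose feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the final layer for a hypothetical domain task with
# 5 classes. Only this new layer will be trained.
num_domain_classes = 5
model.fc = nn.Linear(model.fc.in_features, num_domain_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for
# domain-specific images and labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_domain_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

This is why the trend can serve sustainability: the expensive general-purpose training happens once, and each downstream developer only trains a small fraction of the parameters.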
Liability for artificial intelligence should be introduced in the EU in the form of a regulation, not a directive. When it comes to design defects, the PLD is effectively fault-based, as are many national liability systems. Hence, under the PLD, too, consumers must prove fault in most AI-related cases.
25) To Art. 4(2) and (3) of the AILD proposal, a presumption should be added that the violation of the AI Act by the provider or user caused the damage.
26) The (correct) observation that almost every machine learning model will inevitably make some errors should not be considered a sufficient rebuttal of this presumption.
27) Art. 6 of the AILD proposal should be aligned with the more open provision of Art. 5(2)(b) of the PLD proposal so as to include natural persons acting in a professional capacity among the injured persons who may be represented.