Reasonable AI and Other Creatures. What Role for AI Standards in Liability Litigation?
DOI:
https://doi.org/10.13135/2785-7867/7166

Abstract
Standards play a vital role in supporting the policies and legislation of the European Union. The regulation of artificial intelligence (AI) is no exception, as the AI Act proposal makes clear. In particular, Articles 40 and 41 defer to harmonised standards and common specifications the concrete definition of safety and trustworthiness requirements, including risk management, data quality, transparency, human oversight, accuracy, robustness, and cybersecurity. Other types of standards and professional norms are also relevant to the governance of AI: European non-harmonised standards, international and national standards, professional codes and guidelines, and uncodified best practices. This contribution casts light on the relationship between standards and private law in the context of liability litigation for damage caused by AI systems. Although the literature has devoted considerable attention to liability for AI, the role of standardisation in this regard has hitherto been largely overlooked. Likewise, while much research has addressed the regulation of AI, comparatively little has dealt with its standardisation. This paper aims to fill that gap. Building on previous scholarship, the contribution demonstrates that standards and professional norms are substantially normative despite their private and voluntary nature: they shape private relationships for both normative and economic reasons. These private norms enter the courtroom through explicit or implicit incorporation into contracts, as well as by informing general clauses such as reasonableness and the duty of care. They thus represent the yardstick against which professionals' performance and conduct are evaluated, establishing a link between standards, safety, and liability. Against this backdrop, the role of AI standards in private law is assessed. To set the scene, the article provides a bird's-eye view of AI standardisation.
The European AI standardisation initiative is analysed alongside other institutional and non-institutional instruments. Finally, it is argued that AI standards contribute to defining the duty of care expected of developers and professional operators of AI systems. They may therefore represent a valuable instrument for tackling the challenges that AI technology poses to contractual and extracontractual liability.