AI and how to use it: Information duties for AI products and services
DOI: https://doi.org/10.13135/2785-7867/13463

Abstract
The transparency and explainability of AI, along with the difficulties users face in understanding its functioning, risks, and potential benefits, constitute one of the main challenges to the use and deployment of new automated systems. The vulnerability of both consumers and professional users is increasing, together with the growing imbalance of power between them and providers. Public oversight must therefore be accompanied by regulations that allow users to understand where, how, and with what kinds and levels of risk an AI product may be used. Information duties are a core component of the AI regulatory framework. However, they are currently dispersed across multiple legislative instruments, resulting in a fragmented and unclear regulatory landscape that requires careful analysis and clarification. This contribution examines the information duties applicable to AI products and services, with the aim of mapping the relevant legislative framework and highlighting emerging lines of interpretation. Starting from an analysis of the Digital Content Directive, limited to the relevant profiles, the current framework for consumer protection is examined. Secondly, by considering the latest contribution of the Product Liability Directive, the state of the art of consumer and user protection for products and services incorporating artificial intelligence is completed, in order to assess its readiness for new products and services. Finally, the AI Act is examined to identify further information duties, both directly applicable and from a de iure condendo perspective, as well as best practices for operators. The combined interpretation of the relevant legislative acts supports an extensive reading of information duties and AI literacy, aimed at enhancing the level and quality of information available to consumers.

The Journal of Law, Market & Innovation is indexed in