Functional Ethics
Philosophical Considerations on the Theory of Artificial Moral Agency
Abstract
The purpose of Machine Ethics is to develop autonomous technologies that are able to manage not just the technical aspects of a task, but also its ethical ones. As a consequence, the notion of Artificial Moral Agent (AMA) has become a fundamental element of the discussion. Its meaning, however, remains rather unclear. Depending on the author or the context, the same expression stands for essentially different concepts. This casts a suspicious light on the philosophical significance of Machine Ethics. In particular, the risk arises of discarding Machine Ethics as a whole on the basis of accusations that apply exclusively to one specific understanding of what AMAs are, but not to other, more adequate and convincing conceptualisations. To avoid this pitfall, this essay tries to elaborate a philosophically sound interpretation of AMAs and to sketch its primary component, i.e., the notion of functional ethics.