{"title":"Living well with AI: Virtue, education, and artificial intelligence","authors":"Nicholas Smith, Darby Vickers","doi":"10.1177/14778785241231561","DOIUrl":null,"url":null,"abstract":"Artificial intelligence technologies have become a ubiquitous part of human life. This prompts us to ask, ‘how should we live well with artificial intelligence?’ Currently, the most prominent candidate answers to this question are principlist. According to these approaches, if you teach people some finite set of principles or convince them to adopt the right rules, people will be able to live and act well with artificial intelligence, even in an evolving and opaque moral world. We find the dominant principlist approaches to be ill-suited to providing forward-looking moral guidance regarding living well with artificial intelligence. We analyze some of the proposed principles to show that they oscillate between being too vague and too specific. We also argue that such rules are unlikely to be flexible enough to adapt to rapidly changing circumstances. By contrast, we argue for an Aristotelian virtue ethics approach to artificial intelligence ethics. Aristotelian virtue ethics provides a concrete and actionable guidance that is also flexible; thus, it is uniquely well placed to deal with the forward-looking and rapidly changing landscape of life with artificial intelligence. However, virtue ethics is agent-based rather than action-based. Using virtue ethics as a basis for living well with artificial intelligence requires ensuring that at least some virtuous agents also possess the relevant scientific and technical expertise. Since virtue ethics does not prescribe a set of rules, it requires exemplars who can serve as a model for those learning to be virtuous. Cultivating virtue is challenging, especially in the absence of moral sages. Despite this difficulty, we think the best option is to attempt what virtue ethics requires, even though no system of training can guarantee the production of virtuous agents. We end with two alternative visions – one from each of the two authors – about the practicality of such an approach.","PeriodicalId":46679,"journal":{"name":"Theory and Research in Education","volume":"4 1","pages":""},"PeriodicalIF":1.3000,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Theory and Research in Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/14778785241231561","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0
Abstract
Artificial intelligence technologies have become a ubiquitous part of human life. This prompts us to ask, ‘How should we live well with artificial intelligence?’ Currently, the most prominent candidate answers to this question are principlist. According to these approaches, if you teach people some finite set of principles or convince them to adopt the right rules, people will be able to live and act well with artificial intelligence, even in an evolving and opaque moral world. We find the dominant principlist approaches to be ill-suited to providing forward-looking moral guidance regarding living well with artificial intelligence. We analyze some of the proposed principles to show that they oscillate between being too vague and too specific. We also argue that such rules are unlikely to be flexible enough to adapt to rapidly changing circumstances. By contrast, we argue for an Aristotelian virtue ethics approach to artificial intelligence ethics. Aristotelian virtue ethics provides concrete and actionable guidance that is also flexible; thus, it is uniquely well placed to deal with the forward-looking and rapidly changing landscape of life with artificial intelligence. However, virtue ethics is agent-based rather than action-based. Using virtue ethics as a basis for living well with artificial intelligence requires ensuring that at least some virtuous agents also possess the relevant scientific and technical expertise. Since virtue ethics does not prescribe a set of rules, it requires exemplars who can serve as models for those learning to be virtuous. Cultivating virtue is challenging, especially in the absence of moral sages. Despite this difficulty, we think the best option is to attempt what virtue ethics requires, even though no system of training can guarantee the production of virtuous agents. We end with two alternative visions – one from each of the two authors – about the practicality of such an approach.
Journal Introduction:
Theory and Research in Education, formerly known as The School Field, is an international peer-reviewed journal that publishes theoretical, empirical and conjectural papers contributing to the development of educational theory, policy and practice.