AI & Society, Pub Date: 2024-06-10, DOI: 10.1007/s00146-024-01983-3
Monalisa Bhattacherjee, Sweta Sinha
"As you sow, so shall you reap: rethinking humanity in the age of artificial intelligence." AI & Society 40(3): 1541–1542.
AI & Society, Pub Date: 2024-06-08, DOI: 10.1007/s00146-024-01987-z
Pascal D. Koenig
"Attitudes toward artificial intelligence: combining three theoretical perspectives on technology acceptance." AI & Society 40(3): 1333–1345.
Abstract: Evidence on AI acceptance comes from a diverse field comprising public opinion research and largely experimental studies from various disciplines. The differing theoretical approaches in this research, however, imply heterogeneous ways of studying AI acceptance. The present paper provides a framework for systematizing these different uses. It identifies three families of theoretical perspectives informing research on AI acceptance: user acceptance, delegation acceptance, and societal adoption acceptance. These models differ in scope, each has elements specific to it, and the connotation of technology acceptance thus changes when shifting perspective. The discussion points to a need to combine the three perspectives, as all have become relevant for AI. A combined approach serves to systematically relate findings from different studies. And because AI systems affect people in different constellations that no single perspective can accommodate, building blocks from several perspectives are needed to comprehensively study how AI is perceived in society.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01987-z.pdf
AI & Society, Pub Date: 2024-06-06, DOI: 10.1007/s00146-024-01953-9
Dakota Root
"Reconfiguring the alterity relation: the role of communication in interactions with social robots and chatbots." AI & Society 40(3): 1321–1332.
Abstract: Don Ihde's alterity relation focuses on the quasi-otherness of dynamic technologies that interact with humans. The alterity relation is one means to study relations between humans and artificial intelligence (AI) systems. However, research on alterity relations has not defined the difference between playing with a toy, using a computer, and interacting with a social robot or chatbot. We suggest that Ihde's quasi-other concept fails to account for the interactivity, autonomy, and adaptability of social robots and chatbots, which more closely approach human alterity. In this article, we examine experiences with a chatbot, Replika, and a humanoid robot, a RealDoll, to show how some users experience AI systems as companions. First, we show that the perception of social robots and chatbots as intimate companions is grounded in communication. Advances in natural language processing (NLP) and natural language generation (NLG) allow a relationship to form between some users and social robots and chatbots. In this relationship, some users experience social robots and chatbots as more than quasi-others. We then use Kanemitsu's another-other concept to analyze cases where social robots and chatbots should be distinguished from quasi-others.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01953-9.pdf
AI & Society, Pub Date: 2024-06-04, DOI: 10.1007/s00146-024-01919-x
Jakub Mlynář, Lynn de Rijk, Andreas Liesenfeld, Wyke Stommel, Saul Albert
"AI in situated action: a scoping review of ethnomethodological and conversation analytic studies." AI & Society 40(3): 1497–1527.
Abstract: Despite its elusiveness as a concept, 'artificial intelligence' (AI) is becoming part of everyday life, and a range of empirical and methodological approaches to social studies of AI now span many disciplines. This article reviews the scope of ethnomethodological and conversation analytic (EM/CA) approaches that treat AI as a phenomenon emerging in and through the situated organization of social interaction. Although this approach has been very influential in the field of computational technology since the 1980s, AI has only recently become a pervasive enough part of daily life to warrant a sustained empirical focus in EM/CA. Reviewing over 50 peer-reviewed publications, we find that the studies focus on various social and group activities, such as task-oriented situations, semi-experimental setups, play, and everyday interactions. They also involve a range of participant categories, including children, older participants, and people with disabilities. Most of the reviewed studies apply CA's conceptual apparatus, its approach to data analysis, and core topics such as turn-taking and repair. We find that across this corpus, studies center on three key themes: the opening and closing of interaction, miscommunication, and non-verbal aspects of interaction. In the discussion, we reflect on EM studies that differ from those in our corpus by focusing on praxeological respecifications of AI-related phenomena. Concurrently, we offer a critical reflection on the work of literature reviewing and explore the tortuous relationship between EM and CA in the area of research on AI.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01919-x.pdf
AI & Society, Pub Date: 2024-06-04, DOI: 10.1007/s00146-024-01976-2
Mark Ryan
"We're only human after all: a critique of human-centred AI." AI & Society 40(3): 1303–1319.
Abstract: The use of a 'human-centred' artificial intelligence (HCAI) approach has substantially increased over the past few years: in academic texts (over 1,600); in institutions (27 universities, including Stanford, Sydney, Berkeley, and Chicago, have HCAI labs); in tech companies (e.g., Microsoft, IBM, and Google); in politics (e.g., G7, G20, UN, EU, and EC); and in major institutional bodies (e.g., World Bank, World Economic Forum, UNESCO, and OECD). Intuitively, it sounds very appealing: placing human concerns at the centre of AI development and use. However, this paper draws on the work of Michel Foucault (mostly The Order of Things) to argue that the HCAI approach is deeply problematic in its assumptions. In particular, it criticises five main assumptions commonly found within HCAI: that human–AI hybridisation is desirable and unproblematic; that humans are not currently at the centre of the AI universe; that we should use humans as a way to guide AI development; that AI is the next step in a continuous path of human progress; and that increasing human control over AI will reduce harmful bias. The paper contributes to the philosophy of technology by using Foucault's analysis to examine the assumptions found in HCAI (a Foucauldian conceptual analysis of a current approach, human-centredness, that aims to influence the design and development of a transformative technology); to AI ethics debates by offering a critique of human-centredness in AI (choosing Foucault provides a bridge between older ideas and contemporary issues); and to Foucault studies (by using his work to engage in contemporary debates such as AI).
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01976-2.pdf
AI & Society, Pub Date: 2024-05-30, DOI: 10.1007/s00146-024-01965-5
K. Woods
"If AI is our co-pilot, who is the captain?" AI & Society 40(3): 1537–1538.
AI & Society, Pub Date: 2024-05-30, DOI: 10.1007/s00146-024-01963-7
Satinder P. Gill
"Ethics and administration of the 'Res publica': dynamics of democracy." AI & Society 39(3): 825–827.
AI & Society, Pub Date: 2024-05-28, DOI: 10.1007/s00146-024-01926-y
Nathaniel Sharadin
"Morality first?" AI & Society 40(3): 1289–1301.
Abstract: The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, if one particular philosophical view about value is true, these strategies are positively distorting. The natural alternative, according to which no domain of value comes "first", introduces a new set of challenges and highlights an important but otherwise obscured problem for e-AI developers.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01926-y.pdf
AI & Society, Pub Date: 2024-05-22, DOI: 10.1007/s00146-024-01972-6
S. Krügel, J. Ammeling, M. Aubreville, A. Fritz, A. Kießig, Matthias Uhl
"Perceived responsibility in AI-supported medicine." AI & Society 40(3): 1485–1495.
Abstract: In a representative vignette study in Germany with 1,653 respondents, we investigated laypeople's attribution of moral responsibility in collaborative medical diagnosis. Specifically, we compare people's judgments in a setting in which physicians are supported by an AI-based recommender system with a setting in which they are supported by a human colleague. It turns out that people tend to attribute moral responsibility to the artificial agent, although this is traditionally considered a category mistake in normative ethics. This tendency is stronger among people who believe that AI may become conscious at some point. In consequence, less responsibility is attributed to human agents in settings with hybrid diagnostic teams than in settings with human-only diagnostic teams. Our findings may have implications for behavior exhibited in contexts of collaborative medical decision making with AI-based as opposed to human recommenders, because less responsibility is attributed to agents who lack the mental capacity to care about outcomes.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01972-6.pdf