AI & Society. Pub Date: 2024-06-04. DOI: 10.1007/s00146-024-01919-x
Jakub Mlynář, Lynn de Rijk, Andreas Liesenfeld, Wyke Stommel, Saul Albert
"AI in situated action: a scoping review of ethnomethodological and conversation analytic studies". AI & Society 40(3): 1497–1527. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01919-x.pdf
Abstract: Despite its elusiveness as a concept, 'artificial intelligence' (AI) is becoming part of everyday life, and a range of empirical and methodological approaches to social studies of AI now span many disciplines. This article reviews the scope of ethnomethodological and conversation analytic (EM/CA) approaches that treat AI as a phenomenon emerging in and through the situated organization of social interaction. Although this approach has been very influential in the field of computational technology since the 1980s, AI has only recently become a sufficiently pervasive part of daily life to warrant a sustained empirical focus in EM/CA. Reviewing over 50 peer-reviewed publications, we find that the studies focus on various social and group activities such as task-oriented situations, semi-experimental setups, play, and everyday interactions. They also involve a range of participant categories, including children, older participants, and people with disabilities. Most of the reviewed studies apply CA's conceptual apparatus, its approach to data analysis, and core topics such as turn-taking and repair. Across this corpus, studies center on three key themes: openings and closings of interaction, miscommunication, and non-verbal aspects of interaction. In the discussion, we reflect on EM studies that differ from those in our corpus by focusing on praxeological respecifications of AI-related phenomena. Concurrently, we offer a critical reflection on the work of literature reviewing, and explore the tortuous relationship between EM and CA in the area of research on AI.
AI & Society. Pub Date: 2024-06-04. DOI: 10.1007/s00146-024-01976-2
Mark Ryan
"We're only human after all: a critique of human-centred AI". AI & Society 40(3): 1303–1319. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01976-2.pdf
Abstract: The use of a 'human-centred' artificial intelligence approach (HCAI) has substantially increased over the past few years in academic texts (1600+); in institutions (27 universities have HCAI labs, such as Stanford, Sydney, Berkeley, and Chicago); in tech companies (e.g., Microsoft, IBM, and Google); in politics (e.g., G7, G20, UN, EU, and EC); and in major institutional bodies (e.g., World Bank, World Economic Forum, UNESCO, and OECD). Intuitively, it sounds very appealing: placing human concerns at the centre of AI development and use. However, this paper uses insights from the works of Michel Foucault (mostly The Order of Things) to argue that the HCAI approach is deeply problematic in its assumptions. In particular, it criticises five main assumptions commonly found within HCAI: human–AI hybridisation is desirable and unproblematic; humans are not currently at the centre of the AI universe; we should use humans as a way to guide AI development; AI is the next step in a continuous path of human progress; and increasing human control over AI will reduce harmful bias. The paper contributes to the philosophy of technology by using Foucault's analysis to examine the assumptions found in HCAI, providing a Foucauldian conceptual analysis of a current approach (human-centredness) that aims to influence the design and development of a transformative technology (AI). It contributes to AI ethics debates by offering a critique of human-centredness in AI, with Foucault serving as a bridge between older ideas and contemporary issues. It also contributes to Foucault studies by using his work to engage in contemporary debates, such as AI.
AI & Society. Pub Date: 2024-05-30. DOI: 10.1007/s00146-024-01965-5
K. Woods
"If AI is our co-pilot, who is the captain?". AI & Society 40(3): 1537–1538.
AI & Society. Pub Date: 2024-05-30. DOI: 10.1007/s00146-024-01963-7
Satinder P. Gill
"Ethics and administration of the 'Res publica': dynamics of democracy". AI & Society 39(3): 825–827.
AI & Society. Pub Date: 2024-05-28. DOI: 10.1007/s00146-024-01926-y
Nathaniel Sharadin
"Morality first?". AI & Society 40(3): 1289–1301. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01926-y.pdf
Abstract: The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, if one particular philosophical view about value is true, these strategies are positively distorting. The natural alternative, according to which no domain of value comes "first", introduces a new set of challenges and highlights an important but otherwise obscured problem for e-AI developers.
AI & Society. Pub Date: 2024-05-22. DOI: 10.1007/s00146-024-01972-6
S. Krügel, J. Ammeling, M. Aubreville, A. Fritz, A. Kießig, Matthias Uhl
"Perceived responsibility in AI-supported medicine". AI & Society 40(3): 1485–1495. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01972-6.pdf
Abstract: In a representative vignette study in Germany with 1,653 respondents, we investigated laypeople's attribution of moral responsibility in collaborative medical diagnosis. Specifically, we compare people's judgments in a setting in which physicians are supported by an AI-based recommender system with their judgments in a setting in which physicians are supported by a human colleague. It turns out that people tend to attribute moral responsibility to the artificial agent, although this is traditionally considered a category mistake in normative ethics. This tendency is stronger among people who believe that AI may become conscious at some point. In consequence, less responsibility is attributed to human agents in settings with hybrid diagnostic teams than in settings with human-only diagnostic teams. Our findings may have implications for behavior in contexts of collaborative medical decision making with AI-based as opposed to human recommenders, because less responsibility is attributed to agents who have the mental capacity to care about outcomes.
AI & Society. Pub Date: 2024-05-21. DOI: 10.1007/s00146-024-01962-8
Tim Hinks
"Navigating technological shifts: worker perspectives on AI and emerging technologies impacting well-being". AI & Society 40(3): 1277–1287. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01962-8.pdf
Abstract: This paper asks whether workers' experience of working with new technologies, and their perceived threats from new technologies, are associated with expected well-being. Using survey data for 25 OECD countries, we find that both experience of new technologies and perceived threats from new technologies are associated with greater concern about expected well-being. Controlling for workers' negative experiences of COVID-19 and for their macroeconomic outlook mitigates these findings, but workers with negative experiences of working alongside and with new technologies still report lower expected well-being.
AI & Society. Pub Date: 2024-05-18. DOI: 10.1007/s00146-024-01954-8
Harry Collins
"Why artificial intelligence needs sociology of knowledge: parts I and II". AI & Society 40(3): 1249–1263. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01954-8.pdf
Abstract: Recent developments in artificial intelligence based on neural nets (deep learning and large language models, which together I refer to as NEWAI) have resulted in startling improvements in language handling and the potential to keep up with changing human knowledge by learning from the internet. Nevertheless, examples such as ChatGPT, a 'large language model', have proved to have no moral compass: they answer queries with fabrications with the same fluency as they provide facts. I try to explain why this is, basing the argument on the sociology of knowledge, particularly social studies of science, notably 'studies of expertise and experience' and the 'fractal model' of society. Learning from the internet is not the same as socialisation: NEWAI has no primary socialisation of the kind that provides the foundations of human moral understanding. Instead, large language models are retrospectively socialised by human intervention in an attempt to align them with societally accepted ethics. Perhaps, as technology advances, large language models could come to understand speech and recognise objects sufficiently well to acquire the equivalent of primary socialisation. In the meantime, we must be vigilant about who is socialising them, and be aware of the danger of their socialising us to align with them rather than vice versa: an eventuality that would further erode the distinction between the true and the false, lending further support to populism and fascism.
AI & Society. Pub Date: 2024-05-18. DOI: 10.1007/s00146-024-01971-7
Marc Heimann, Anne-Friederike Hübener
"The extimate core of understanding: absolute metaphors, psychosis and large language models". AI & Society 40(3): 1265–1276. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01971-7.pdf
Abstract: This paper delves into the striking parallels between the linguistic patterns of Large Language Models (LLMs) and the concepts of psychosis in Lacanian psychoanalysis. Lacanian theory, with its focus on the formal and logical underpinnings of psychosis, provides a compelling lens through which to juxtapose human cognition and AI mechanisms. LLMs such as GPT-4 appear to replicate the intricate metaphorical and metonymical frameworks inherent in human language. Although grounded in mathematical logic and probabilistic analysis, the outputs of LLMs echo the nuanced linguistic associations found in metaphor and metonymy, suggesting a mirroring of human linguistic structures. A pivotal point in this discourse is the exploration of 'absolute metaphors': core gaps in reasoning that are discernible in both AI models and human thought processes, and that are central to the Lacanian conceptualization of psychosis. Despite the traditional divide between AI research and continental philosophy, this analysis embarks on an innovative journey, utilizing Lacanian philosophy to unravel the logic of AI and drawing on concepts established in the continental discourse on logic rather than the analytical tradition.