AI & Society: Latest Publications

AI in situated action: a scoping review of ethnomethodological and conversation analytic studies
Jakub Mlynář, Lynn de Rijk, Andreas Liesenfeld, Wyke Stommel, Saul Albert
AI & Society 40(3): 1497–1527. Published 2024-06-04. DOI: 10.1007/s00146-024-01919-x. IF: 2.9. Citations: 0.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01919-x.pdf
Abstract: Despite its elusiveness as a concept, ‘artificial intelligence’ (AI) is becoming part of everyday life, and a range of empirical and methodological approaches to social studies of AI now span many disciplines. This article reviews the scope of ethnomethodological and conversation analytic (EM/CA) approaches that treat AI as a phenomenon emerging in and through the situated organization of social interaction. Although this approach has been very influential in the field of computational technology since the 1980s, AI has only recently become a sufficiently pervasive part of daily life to warrant a sustained empirical focus in EM/CA. Reviewing over 50 peer-reviewed publications, we find that the studies focus on various social and group activities such as task-oriented situations, semi-experimental setups, play, and everyday interactions. They also involve a range of participant categories, including children, older participants, and people with disabilities. Most of the reviewed studies apply CA’s conceptual apparatus, its approach to data analysis, and core topics such as turn-taking and repair. Across this corpus, studies center on three key themes: openings and closings of the interaction, miscommunication, and non-verbal aspects of interaction. In the discussion, we reflect on EM studies that differ from those in our corpus by focusing on praxeological respecifications of AI-related phenomena. Concurrently, we offer a critical reflection on the work of literature reviewing and explore the tortuous relationship between EM and CA in the area of research on AI.
Hunters not beggars
Mark Ressler
AI & Society 40(3): 1539–1540. Published 2024-06-04. DOI: 10.1007/s00146-024-01978-0. IF: 2.9. Citations: 0.
We’re only human after all: a critique of human-centred AI
Mark Ryan
AI & Society 40(3): 1303–1319. Published 2024-06-04. DOI: 10.1007/s00146-024-01976-2. IF: 2.9. Citations: 0.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01976-2.pdf
Abstract: The use of a ‘human-centred’ artificial intelligence (HCAI) approach has substantially increased over the past few years in academic texts (1600+); in institutions (27 universities have HCAI labs, including Stanford, Sydney, Berkeley, and Chicago); in tech companies (e.g., Microsoft, IBM, and Google); in politics (e.g., G7, G20, UN, EU, and EC); and in major institutional bodies (e.g., World Bank, World Economic Forum, UNESCO, and OECD). Intuitively, it sounds very appealing: placing human concerns at the centre of AI development and use. However, this paper uses insights from the works of Michel Foucault (mostly The Order of Things) to argue that the HCAI approach is deeply problematic in its assumptions. In particular, it criticises four main assumptions commonly found within HCAI: human–AI hybridisation is desirable and unproblematic; humans are not currently at the centre of the AI universe; we should use humans as a way to guide AI development; AI is the next step in a continuous path of human progress; and increasing human control over AI will reduce harmful bias. The paper contributes to the philosophy of technology by using Foucault's analysis to examine assumptions found in HCAI (it provides a Foucauldian conceptual analysis of a current approach, human-centredness, that aims to influence the design and development of a transformative technology, AI); it contributes to AI ethics debates by offering a critique of human-centredness in AI (by choosing Foucault, it provides a bridge between older ideas and contemporary issues); and it contributes to Foucault studies by using his work to engage in contemporary debates such as AI.
If AI is our co-pilot, who is the captain?
K. Woods
AI & Society 40(3): 1537–1538. Published 2024-05-30. DOI: 10.1007/s00146-024-01965-5. IF: 2.9. Citations: 0.
Ethics and administration of the ‘Res publica’: dynamics of democracy
Satinder P. Gill
AI & Society 39(3): 825–827. Published 2024-05-30. DOI: 10.1007/s00146-024-01963-7. IF: 2.9. Citations: 0.
Morality first?
Nathaniel Sharadin
AI & Society 40(3): 1289–1301. Published 2024-05-28. DOI: 10.1007/s00146-024-01926-y. IF: 2.9. Citations: 0.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01926-y.pdf
Abstract: The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, if one particular philosophical view about value is true, these strategies are positively distorting. The natural alternative, according to which no domain of value comes “first”, introduces a new set of challenges and highlights an important but otherwise obscured problem for e-AI developers.
Perceived responsibility in AI-supported medicine
S. Krügel, J. Ammeling, M. Aubreville, A. Fritz, A. Kießig, Matthias Uhl
AI & Society 40(3): 1485–1495. Published 2024-05-22. DOI: 10.1007/s00146-024-01972-6. IF: 2.9. Citations: 0.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01972-6.pdf
Abstract: In a representative vignette study in Germany with 1,653 respondents, we investigated laypeople’s attribution of moral responsibility in collaborative medical diagnosis. Specifically, we compare people’s judgments in a setting in which physicians are supported by an AI-based recommender system with their judgments in a setting in which physicians are supported by a human colleague. It turns out that people tend to attribute moral responsibility to the artificial agent, although this is traditionally considered a category mistake in normative ethics. This tendency is stronger when people believe that AI may become conscious at some point. In consequence, less responsibility is attributed to human agents in settings with hybrid diagnostic teams than in settings with human-only diagnostic teams. Our findings may have implications for behavior in contexts of collaborative medical decision-making with AI-based as opposed to human recommenders, because less responsibility is attributed to the agents who have the mental capacity to care about outcomes.
Navigating technological shifts: worker perspectives on AI and emerging technologies impacting well-being
Tim Hinks
AI & Society 40(3): 1277–1287. Published 2024-05-21. DOI: 10.1007/s00146-024-01962-8. IF: 2.9. Citations: 0.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01962-8.pdf
Abstract: This paper asks whether workers’ experience of working with new technologies and workers’ perceived threats from new technologies are associated with expected well-being. Using survey data for 25 OECD countries, we find that both experience of new technologies and perceived threats of new technologies are associated with greater concern about expected well-being. Controlling for workers’ negative experiences of COVID-19 and for their macroeconomic outlook mitigates these findings, but workers with negative experiences of working alongside and with new technologies still report lower expected well-being.
Why artificial intelligence needs sociology of knowledge: parts I and II
Harry Collins
AI & Society 40(3): 1249–1263. Published 2024-05-18. DOI: 10.1007/s00146-024-01954-8. IF: 2.9. Citations: 0.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01954-8.pdf
Abstract: Recent developments in artificial intelligence based on neural nets (deep learning and large language models, which together I refer to as NEWAI) have resulted in startling improvements in language handling and the potential to keep up with changing human knowledge by learning from the internet. Nevertheless, examples such as ChatGPT, a large language model, have proved to have no moral compass: they answer queries with fabrications with the same fluency as they provide facts. I try to explain why this is, basing the argument on the sociology of knowledge, particularly social studies of science, notably ‘studies of expertise and experience’ and the ‘fractal model’ of society. Learning from the internet is not the same as socialisation: NEWAI has no primary socialisation of the kind that provides the foundations of human moral understanding. Instead, large language models are retrospectively socialised by human intervention in an attempt to align them with societally accepted ethics. Perhaps, as technology advances, large language models could come to understand speech and recognise objects sufficiently well to acquire the equivalent of primary socialisation. In the meantime, we must be vigilant about who is socialising them and be aware of the danger of their socialising us to align with them rather than vice versa, an eventuality that would further erode the distinction between the true and the false and give further support to populism and fascism.
The extimate core of understanding: absolute metaphors, psychosis and large language models
Marc Heimann, Anne-Friederike Hübener
AI & Society 40(3): 1265–1276. Published 2024-05-18. DOI: 10.1007/s00146-024-01971-7. IF: 2.9. Citations: 0.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01971-7.pdf
Abstract: This paper delves into the striking parallels between the linguistic patterns of Large Language Models (LLMs) and the concepts of psychosis in Lacanian psychoanalysis. Lacanian theory, with its focus on the formal and logical underpinnings of psychosis, provides a compelling lens through which to juxtapose human cognition and AI mechanisms. LLMs, such as GPT-4, appear to replicate the intricate metaphorical and metonymical frameworks inherent in human language. Although grounded in mathematical logic and probabilistic analysis, the outputs of LLMs echo the nuanced linguistic associations found in metaphor and metonymy, suggesting a mirroring of human linguistic structures. A pivotal point in this discourse is the exploration of “absolute metaphors”: core gaps in reasoning that are discernible in both AI models and human thought processes and that are central to the Lacanian conceptualization of psychosis. Despite the traditional divide between AI research and continental philosophy, this analysis embarks on an innovative journey, utilizing Lacanian philosophy to unravel the logic of AI with concepts established in the continental discourse on logic rather than in the analytical tradition.