AI & Society: Latest Articles

Morality first?
Nathaniel Sharadin
AI & Society, Vol. 40(3), pp. 1289–1301 · Published 2024-05-28 · DOI: 10.1007/s00146-024-01926-y · IF 2.9
Abstract: The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, if one particular philosophical view about value is true, these strategies are positively distorting. The natural alternative, according to which no domain of value comes "first", introduces a new set of challenges and highlights an important but otherwise obscured problem for e-AI developers.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01926-y.pdf
Citations: 0
Perceived responsibility in AI-supported medicine
S. Krügel, J. Ammeling, M. Aubreville, A. Fritz, A. Kießig, Matthias Uhl
AI & Society, Vol. 40(3), pp. 1485–1495 · Published 2024-05-22 · DOI: 10.1007/s00146-024-01972-6 · IF 2.9
Abstract: In a representative vignette study in Germany with 1,653 respondents, we investigated laypeople's attribution of moral responsibility in collaborative medical diagnosis. Specifically, we compare people's judgments in a setting in which physicians are supported by an AI-based recommender system with a setting in which they are supported by a human colleague. It turns out that people tend to attribute moral responsibility to the artificial agent, although this is traditionally considered a category mistake in normative ethics. This tendency is stronger when people believe that AI may become conscious at some point. In consequence, less responsibility is attributed to human agents in settings with hybrid diagnostic teams than in settings with human-only diagnostic teams. Our findings may have implications for behavior in contexts of collaborative medical decision making with AI-based as opposed to human recommenders, because less responsibility is attributed to agents who have the mental capacity to care about outcomes.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01972-6.pdf
Citations: 0
Navigating technological shifts: worker perspectives on AI and emerging technologies impacting well-being
Tim Hinks
AI & Society, Vol. 40(3), pp. 1277–1287 · Published 2024-05-21 · DOI: 10.1007/s00146-024-01962-8 · IF 2.9
Abstract: This paper asks whether workers' experience of working with new technologies and workers' perceived threats of new technologies are associated with expected well-being. Using survey data for 25 OECD countries, we find that both experience of new technologies and threats of new technologies are associated with more concern about expected well-being. Controlling for workers' negative experiences of COVID-19 and for their macroeconomic outlook both mitigate these findings, but workers with negative experiences of working alongside and with new technologies still report lower expected well-being.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01962-8.pdf
Citations: 0
Why artificial intelligence needs sociology of knowledge: parts I and II
Harry Collins
AI & Society, Vol. 40(3), pp. 1249–1263 · Published 2024-05-18 · DOI: 10.1007/s00146-024-01954-8 · IF 2.9
Abstract: Recent developments in artificial intelligence based on neural nets—deep learning and large language models, which together I refer to as NEWAI—have resulted in startling improvements in language handling and the potential to keep up with changing human knowledge by learning from the internet. Nevertheless, examples such as ChatGPT, which is a 'large language model', have proved to have no moral compass: they answer queries with fabrications with the same fluency as they provide facts. I try to explain why this is, basing the argument on the sociology of knowledge, particularly social studies of science, notably 'studies of expertise and experience' and the 'fractal model' of society. Learning from the internet is not the same as socialisation: NEWAI has no primary socialisation such as provides the foundations of human moral understanding. Instead, large language models are retrospectively socialised by human intervention in an attempt to align them with societally accepted ethics. Perhaps, as technology advances, large language models could come to understand speech and recognise objects sufficiently well to acquire the equivalent of primary socialisation. In the meantime, we must be vigilant about who is socialising them, and be aware of the danger of their socialising us to align with them rather than vice versa, an eventuality that would further erode the distinction between the true and the false, giving further support to populism and fascism.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01954-8.pdf
Citations: 0
The extimate core of understanding: absolute metaphors, psychosis and large language models
Marc Heimann, Anne-Friederike Hübener
AI & Society, Vol. 40(3), pp. 1265–1276 · Published 2024-05-18 · DOI: 10.1007/s00146-024-01971-7 · IF 2.9
Abstract: This paper delves into the striking parallels between the linguistic patterns of Large Language Models (LLMs) and the concepts of psychosis in Lacanian psychoanalysis. Lacanian theory, with its focus on the formal and logical underpinnings of psychosis, provides a compelling lens through which to juxtapose human cognition and AI mechanisms. LLMs, such as GPT-4, appear to replicate the intricate metaphorical and metonymical frameworks inherent in human language. Although grounded in mathematical logic and probabilistic analysis, the outputs of LLMs echo the nuanced linguistic associations found in metaphor and metonymy, suggesting a mirroring of human linguistic structures. A pivotal point in this discourse is the exploration of "absolute metaphors": core gaps in reasoning discernible in both AI models and human thought processes, and central to the Lacanian conceptualization of psychosis. Despite the traditional divide between AI research and continental philosophy, this analysis utilizes Lacanian philosophy to unravel the logic of AI, drawing on concepts established in the continental discourse on logic rather than the analytical tradition.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01971-7.pdf
Citations: 0
AI, automation and the lightening of work
David A. Spencer
AI & Society, Vol. 40(3), pp. 1237–1247 · Published 2024-05-16 · DOI: 10.1007/s00146-024-01959-3 · IF 2.9
Abstract: Artificial intelligence (AI) technology poses possible threats to existing jobs. These threats extend not just to the number of jobs available but also to their quality. In the future, so some predict, workers could face fewer and potentially worse jobs, at least if society does not embrace reforms that manage the coming AI revolution. This paper uses the example of Daron Acemoglu and Simon Johnson's recent book, Power and Progress (2023), to illustrate some of the dilemmas and options for managing the future of work under AI. Acemoglu and Johnson, while warning of the potential negative effects of AI-driven automation, argue that AI can be used for positive ends; in particular, they argue for its use in creating more 'good jobs'. This outcome will depend on democratising AI technology. This paper is critical of the approach taken by Acemoglu and Johnson: specifically, it misses the possibility of using AI to lighten work (i.e., to reduce its duration and improve its quality). This paper stresses the potential benefits of automation as a mechanism for lightening work. Its key arguments aim to advance critical debates focused on creating a future in which AI works for people, not just for profits.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01959-3.pdf
Citations: 0
The case for a broader approach to AI assurance: addressing "hidden" harms in the development of artificial intelligence
Christopher Thomas, Huw Roberts, Jakob Mökander, Andreas Tsamados, Mariarosaria Taddeo, Luciano Floridi
AI & Society, Vol. 40(3), pp. 1469–1484 · Published 2024-05-16 · DOI: 10.1007/s00146-024-01950-y · IF 2.9
Abstract: Artificial intelligence (AI) assurance is an umbrella term describing many approaches—such as impact assessment, audit, and certification procedures—used to provide evidence that an AI system is legal, ethical, and technically robust. AI assurance approaches largely focus on two overlapping categories of harms: deployment harms that emerge at, or after, the point of use, and individual harms that directly impact a person as an individual. Current approaches generally overlook upstream collective and societal harms associated with the development of systems, such as resource extraction and processing, exploitative labour practices and energy-intensive model training. Thus, the scope of current AI assurance practice is insufficient for ensuring that AI is ethical in a holistic sense, i.e. in ways that are legally permissible, socially acceptable, economically viable and environmentally sustainable. This article addresses this shortcoming by arguing for a broader approach to AI assurance that is sensitive to the full scope of AI development and deployment harms. To do so, the article maps harms related to AI and highlights three examples of harmful practices that occur upstream in the AI supply chain and impact the environment, labour, and data exploitation. It then reviews assurance mechanisms used in adjacent industries to mitigate similar harms, evaluating their strengths, weaknesses, and how effectively they are being applied to AI. Finally, it provides recommendations as to how a broader approach to AI assurance can be implemented to mitigate harms more effectively across the whole AI supply chain.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01950-y.pdf
Citations: 0
The unseen dilemma of AI in mental healthcare
Akhil P. Joseph, Anithamol Babu
AI & Society, Vol. 40(3), pp. 1533–1535 · Published 2024-05-13 · DOI: 10.1007/s00146-024-01937-9 · IF 2.9
Citations: 0
The age of the algorithmic society: a Girardian analysis of mimesis, rivalry, and identity in the age of artificial intelligence
Lucas Freund
AI & Society, Vol. 40(3), pp. 1227–1236 · Published 2024-05-11 · DOI: 10.1007/s00146-024-01915-1 · IF 2.9
Abstract: This paper explores the intersection of René Girard's mimetic theory and the algorithmic society, particularly in the context of the potential advent of Artificial General Intelligence (AGI). Girard's theory, which elucidates the dynamics of desire, rivalry, scapegoating, and the sacrificial crisis, provides a unique lens through which to examine the complexities of our relationship with AI and its role in the creation of the sacred. As individuals increasingly rely on AI recommendations, the distinction between personal choice and algorithmic manipulation becomes less clear, raising concerns about the authenticity of cultural expressions and the role of algorithms in shaping cultural narratives. The triangular structure of desire, with AI as the model and individuals as the imitators, underscores the power of algorithms in this process. The sacrificial crisis, a key concept in Girard's theory, becomes a critical point of reflection in the algorithmic society. The exposure of the scapegoating mechanism reveals the destructive potential of algorithmic manipulation and calls for new forms of understanding, empathy, and non-violent solutions. This paper argues that recognizing the sacrificial crisis can prompt individuals and society to critically examine the impact of AI's influence, challenge the narratives it perpetuates, and reclaim agency in the face of algorithmic dominance. The paper further discusses the potential implications of the emergence of AGI, which could intensify the influence of algorithms on the creation of the sacred due to its advanced cognitive capabilities and deep understanding of human desires and behaviors. This could fuel a rapid evolution of the mimetic ecosystem, with profound implications for personal freedom, independent decision-making, and the formation and preservation of individual identity. The paper concludes by emphasizing the need for responsible algorithmic practices and ethical considerations to ensure that the creation of the sacred serves the common good in the algorithmic society.
Citations: 0
Machine theology or artificial sainthood!
Karamjit S. Gill
AI & Society, Vol. 39(3), pp. 829–831 · Published 2024-05-10 · DOI: 10.1007/s00146-024-01964-6 · IF 2.9
Citations: 0