AI & Society: Latest Publications

Age of Disruption
IF 2.9
AI & Society Pub Date: 2025-04-23 DOI: 10.1007/s00146-025-02365-z
Jan Soeffner
AI & Society, vol. 40, no. 4, pp. 2011-2014. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02365-z.pdf
Citations: 0
The end of AI innocence: genie is out of the bottle
IF 2.9
AI & Society Pub Date: 2025-03-01 DOI: 10.1007/s00146-025-02267-0
Karamjit S. Gill
AI & Society, vol. 40, no. 2, pp. 257-261.
Citations: 0
Doing agency: how agents adapt in wide systems
IF 2.9
AI & Society Pub Date: 2025-01-06 DOI: 10.1007/s00146-024-02176-8
Stephen Cowley
AI & Society, vol. 40, no. 1, pp. 1-3.
Citations: 0
Considerations for trustworthy cross-border interoperability of digital identity systems in developing countries
IF 2.9
AI & Society Pub Date: 2025-01-01 Epub Date: 2024-08-07 DOI: 10.1007/s00146-024-02008-9
Ayei Ibor, Mark Hooper, Carsten Maple, Jon Crowcroft, Gregory Epiphaniou
Abstract: In developing nations, the implementation of Foundational Identity Systems (FIDS) has optimised service delivery and inclusive economic growth. Cross-border e-government will gain traction as developing countries increasingly look to identity federation and trustworthy interoperability through FIDS for the identification and authentication of identity holders. Despite this potential, the interoperability of FIDS in the African identity ecosystem has not been well-studied. Among the difficulties in this situation are the intricate internal political dynamics that have led to weak institutions, suggesting that FIDS could be used for political purposes; additionally, citizens' or identity holders' habitual low trust in the government raises concerns about data security and privacy protection. Similarly, vendor lock-in, cross-system compatibility, and ambiguous legislative rules for data exchange are other concerns. Interoperability is fundamentally necessary as a precondition for e-government services and serves as the foundation for the best possible service delivery in the areas of social security, education, and finance, as well as gender equality as demonstrated by the European Union (EU). Moreover, the integration of cross-border FIDS and an ecosystem of effective data governance will be created by unified data sharing via an interoperable identity system. Thus, in this study, we point to the challenges, opportunities, and requirements for cross-border interoperability in an African setting. Furthermore, we investigated current interoperability solutions such as the EU's eIDAS and Estonian X-Road and proposed an approach for scoping requirements to achieve a fully functional interoperable identity ecosystem in the African setting. Our findings show that interoperability in the African identity ecosystem is essential for expanding the scope of e-government throughout the continent and for bolstering the smooth authentication and verification of identity holders for inclusive economic growth.
AI & Society, vol. 40, no. 4, pp. 2729-2750. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12078359/pdf/
Citations: 0
We need better images of AI and better conversations about AI
IF 2.9
AI & Society Pub Date: 2025-01-01 Epub Date: 2024-10-29 DOI: 10.1007/s00146-024-02101-z
Marc Steen, Tjerk Timan, Jurriaan Van Diggelen, Steven Vethman
Abstract: In this article, we critique the ways in which the people involved in the development and application of AI systems often visualize and talk about AI systems. Often, they visualize such systems as shiny humanoid robots or as free-floating electronic brains. Such images convey misleading messages; as if AI works independently of people and can reason in ways superior to people. Instead, we propose to visualize AI systems as parts of larger, sociotechnical systems. Here, we can learn, for example, from cybernetics. Similarly, we propose that the people involved in the design and deployment of an algorithm would need to extend their conversations beyond the four boxes of the Error Matrix, for example, to critically discuss false positives and false negatives. We present two thought experiments, with one practical example in each. We propose to understand, visualize, and talk about AI systems in relation to a larger, complex reality; this is the requirement of requisite variety. We also propose to enable people from diverse disciplines to collaborate around boundary objects, for example: a drawing of an AI system in its sociotechnical context; or an 'extended' Error Matrix. Such interventions can promote meaningful human control, transparency, and fairness in the design and deployment of AI systems.
AI & Society, vol. 40, no. 5, pp. 3615-3626. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12152090/pdf/
Citations: 0
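The abstract above refers to the four boxes of the Error Matrix (true/false positives and negatives) and to an 'extended' Error Matrix as a boundary object. As a minimal, hypothetical Python sketch of what that could look like, here is a plain 2x2 error matrix with the derived rates, where each cell is paired with a note about who bears its consequences; the counts, the screening scenario, and the per-cell notes are invented for illustration and are not taken from the article.

def error_matrix(y_true, y_pred):
    """Count true/false positives and negatives for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn}

# Hypothetical predictions from an AI screening system versus ground truth.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]

m = error_matrix(y_true, y_pred)
false_negative_rate = m["FN"] / (m["FN"] + m["TP"])  # missed positive cases
false_positive_rate = m["FP"] / (m["FP"] + m["TN"])  # false alarms

# An 'extended' matrix in the spirit of the article: each cell carries a note on
# who bears the consequence, which is where the sociotechnical conversation
# happens rather than in the counts themselves.
extended = {
    "TP": (m["TP"], "correctly flagged; follow-up burden falls on the person flagged"),
    "FP": (m["FP"], "wrongly flagged; stress, cost, or stigma for the person"),
    "FN": (m["FN"], "missed case; harm of non-detection or non-treatment"),
    "TN": (m["TN"], "correctly cleared; usually invisible in evaluation"),
}
for cell, (count, note) in extended.items():
    print(f"{cell}: {count:2d}  {note}")
print(f"False negative rate: {false_negative_rate:.2f}, false positive rate: {false_positive_rate:.2f}")

The point of such a sketch is only that the numbers in the four boxes do not speak for themselves; which rate matters more, and for whom, is exactly the kind of conversation the authors argue needs to extend beyond the matrix.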
Adaptable robots, ethics, and trust: a qualitative and philosophical exploration of the individual experience of trustworthy AI
IF 2.9
AI & Society Pub Date: 2025-01-01 Epub Date: 2024-04-23 DOI: 10.1007/s00146-024-01938-8
Stephanie Sheir, Arianna Manzini, Helen Smith, Jonathan Ives
Abstract: Much has been written about the need for trustworthy artificial intelligence (AI), but the underlying meaning of trust and trustworthiness can vary or be used in confusing ways. It is not always clear whether individuals are speaking of a technology's trustworthiness, a developer's trustworthiness, or simply of gaining the trust of users by any means. In sociotechnical circles, trustworthiness is often used as a proxy for 'the good', illustrating the moral heights to which technologies and developers ought to aspire, at times with a multitude of diverse requirements; or at other times, no specification at all. In philosophical circles, there is doubt that the concept of trust should be applied at all to technologies rather than their human creators. Nevertheless, people continue to intuitively reason about trust in technologies in their everyday language. This qualitative study employed an empirical ethics methodology to address how developers and users define and construct requirements for trust throughout development and use, through a series of interviews. We found that different accounts of trust (rational, affective, credentialist, norms based, relational) served as the basis for individual granting of trust in technologies and operators. Ultimately, the most significant requirement for user trust and assessment of trustworthiness was the accountability of AI developers for the outputs of AI systems, hinging on the identification of accountable moral agents and perceived value alignment between the user and developer's interests.
AI & Society, vol. 40, no. 3, pp. 1735-1748. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11985637/pdf/
Citations: 0
Redefining intelligence: collaborative tinkering of healthcare professionals and algorithms as hybrid entity in public healthcare decision-making
IF 2.9
AI & Society Pub Date: 2025-01-01 Epub Date: 2025-01-09 DOI: 10.1007/s00146-024-02177-7
Roanne van Voorst
Abstract: This paper analyzes the collaboration between healthcare professionals and algorithms in making decisions within the realm of public healthcare. By extending the concept of 'tinkering' from previous research conducted by philosopher Mol (Care in practice. On tinkering in clinics, homes and farms, Verlag, Amsterdam, 2010) and anthropologist Pols (Health Care Anal 18: 374-388, 2009), who highlighted the improvisational and adaptive practices of healthcare professionals, this paper reveals that in the context of digitalizing healthcare, both professionals and algorithms engage in what I call 'collaborative tinkering' as they navigate the intricate and unpredictable nature of healthcare situations together. The paper draws upon an idea that is increasingly common in academic literature, namely that healthcare professionals and the algorithms they use can form a hybrid decision-making entity, challenging the conventional notion of agency and intelligence as being exclusively confined to individual humans or machines. Drawing upon an international, ethnographic study conducted in different hospitals around the world, the paper describes empirically how humans and algorithms come to decisions together, making explicit how, in the practice of daily work, agency and intelligence are distributed among a range of actors, including humans, technologies, knowledge resources, and the spaces where they interact. The concept of collaborative tinkering helps to make explicit how both healthcare professionals and algorithms engage in adaptive improvisation. This exploration not only enriches the understanding of collaborative dynamics between humans and AI but also problematizes the individualistic conception of AI that still exists in regulatory frameworks. By introducing empirical specificity through ethnographic insights and employing an anthropological perspective, the paper calls for a critical reassessment of current ethical and policy frameworks governing human-AI collaboration in healthcare, thereby illuminating direct implications for the future of AI ethics in medical practice.
AI & Society, vol. 40, no. 5, pp. 3237-3248. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12152098/pdf/
Citations: 0
"I spend more time on the ecosystem than on the disease": caring for the communicative loop with everyday ADM technology through maintenance and modification work. “我花在生态系统上的时间比花在疾病上的时间多”:通过日常ADM技术的维护和修改工作,关心沟通回路。
IF 2.9
AI & Society Pub Date : 2025-01-01 Epub Date: 2024-11-15 DOI: 10.1007/s00146-024-02109-5
Sne Scott Hansen, Henriette Langstrup
{"title":"\"I spend more time on the ecosystem than on the disease\": caring for the communicative loop with everyday ADM technology through maintenance and modification work.","authors":"Sne Scott Hansen, Henriette Langstrup","doi":"10.1007/s00146-024-02109-5","DOIUrl":"10.1007/s00146-024-02109-5","url":null,"abstract":"<p><p>Automated decision-making (ADM) systems can be worn in and on the body for various purposes, such as for tracking and managing chronic conditions. One case in point is do-it-yourself open-source artificial pancreas systems, through which users engage in what is referred to as \"looping\"; combining continuous glucose monitors and insulin pumps placed on the body with digital communication technologies to develop an ADM system for personal diabetes management. The idea behind these personalized systems is to delegate decision-making regarding insulin to an algorithm that can make autonomous decisions. Based on interviews and photo diaries with Danish \"loopers\", this paper highlights two interrelated narratives of how users have to care for the loop by <i>maintaining</i> a stable communication circuit between body and ADM system, and by <i>modifying</i> the loop through analysis and reflection. It shows how the human takes turns with the ADM system through practical doings and anticipation to safeguard continuous management of chronic disease.</p>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 5","pages":"3707-3719"},"PeriodicalIF":2.9,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12152222/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144286810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
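The "communicative loop" described above has three recurring moves: the algorithm takes routine turns on sensor data, the human maintains the body-sensor-algorithm circuit when it breaks, and the human modifies the loop's settings after analysis and reflection. The following Python sketch is invented purely to illustrate that loop structure under stated assumptions; the decision rule, numbers, and parameter names are placeholders, not a real or safe insulin-dosing algorithm and not the DIY systems studied in the article.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LoopSettings:
    target_glucose: float = 6.0      # mmol/L; a user-modifiable setting (placeholder value)
    correction_factor: float = 0.5   # proposed units per mmol/L above target (placeholder value)

def algorithm_decision(glucose: float, settings: LoopSettings) -> float:
    """Placeholder rule: propose a correction proportional to deviation from target."""
    deviation = glucose - settings.target_glucose
    return max(0.0, deviation * settings.correction_factor)

def run_loop_step(sensor_reading: Optional[float], settings: LoopSettings) -> str:
    # Maintenance work: the loop only functions while the communication circuit holds.
    if sensor_reading is None:
        return "Sensor signal lost: human maintenance needed (reattach or recalibrate the sensor)."
    dose = algorithm_decision(sensor_reading, settings)
    return f"Glucose {sensor_reading:.1f} -> algorithm proposes {dose:.1f} units."

settings = LoopSettings()
print(run_loop_step(8.4, settings))    # a routine turn taken by the algorithm
print(run_loop_step(None, settings))   # a breakdown: the human steps in to repair the circuit

# Modification work: after analysis and reflection, the user adjusts the loop's parameters.
settings.target_glucose = 5.5
print(run_loop_step(8.4, settings))

Even in this toy form, the algorithm only acts when the circuit is intact and only within parameters the user has set, which is the turn-taking between human and ADM system that the article foregrounds.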
From liability gaps to liability overlaps: shared responsibilities and fiduciary duties in AI and other complex technologies
IF 2.9
AI & Society Pub Date: 2025-01-01 Epub Date: 2025-01-11 DOI: 10.1007/s00146-024-02137-1
Bart Custers, Henning Lahmann, Benjamyn I Scott
Abstract: Complex technologies such as Artificial Intelligence (AI) can cause harm, raising the question of who is liable for the harm caused. Research has identified multiple liability gaps (i.e., unsatisfactory outcomes when applying existing liability rules) in legal frameworks. In this paper, the concepts of shared responsibilities and fiduciary duties are explored as avenues to address liability gaps. The development, deployment and use of complex technologies are not clearly distinguishable stages, as often suggested, but are processes of cooperation and co-creation. At the intersections of these stages, shared responsibilities and fiduciary duties of multiple actors can be observed. Although none of the actors have complete control or a complete overview, many actors have some control or influence, and, therefore, responsibilities based on fault, prevention or benefit. Shared responsibilities and fiduciary duties can turn liability gaps into liability overlaps. These concepts could be implemented in tort and contract law by amending existing law (e.g., by assuming that all stakeholders are liable unless they can prove they did not owe a duty of care) and by creating more room for partial liability reflecting partial responsibilities (e.g., a responsibility to signal or identify an issue without a corresponding responsibility to solve that issue). This approach better aligns legal liabilities with responsibilities, increases legal certainty, and increases cooperation and understanding between actors, improving the quality and safety of technologies. However, it may not solve all liability gaps, may have chilling effects on innovation, and may require further detailing through case law.
AI & Society, vol. 40, no. 5, pp. 4035-4050. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12152026/pdf/
Citations: 0
AI through the looking glass: an empirical study of structural social and ethical challenges in AI
IF 2.9
AI & Society Pub Date: 2025-01-01 Epub Date: 2024-12-15 DOI: 10.1007/s00146-024-02146-0
Mark Ryan, Nina de Roo, Hao Wang, Vincent Blok, Can Atik
Abstract: This paper examines how professionals (N = 32) working on artificial intelligence (AI) view structural AI ethics challenges like injustices and inequalities beyond individual agents' direct intention and control. This paper answers the research question: What are professionals' perceptions of the structural challenges of AI (in the agri-food sector)? This empirical paper shows that it is essential to broaden the scope of ethics of AI beyond micro- and meso-levels. While ethics guidelines and AI ethics often focus on the responsibility of designers and the competencies and skills of designers to take this responsibility, our results show that many structural challenges are beyond their reach. This result means that while ethics guidelines and AI ethics frameworks are helpful, there is a risk that they overlook more complicated, nuanced, and intersected structural challenges. In addition, it highlights the need to include diverse stakeholders, such as quadruple helix (QH) participants, in discussions around AI ethics rather than solely focusing on the obligations of AI developers and companies. Overall, this paper demonstrates that addressing structural challenges in AI is challenging and requires an approach that considers four requirements: (1) multi-level, (2) multi-faceted, (3) interdisciplinary, and (4) polycentric governance.
AI & Society, vol. 40, no. 5, pp. 3891-3907. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12152039/pdf/
Citations: 0