The end of AI innocence: genie is out of the bottle
Karamjit S. Gill
AI & Society, 40(2): 257–261. Published 2025-03-01. DOI: 10.1007/s00146-025-02267-0
Considerations for trustworthy cross-border interoperability of digital identity systems in developing countries
Ayei Ibor, Mark Hooper, Carsten Maple, Jon Crowcroft, Gregory Epiphaniou
AI & Society, 40(4): 2729–2750. Published 2025-01-01 (Epub 2024-08-07). DOI: 10.1007/s00146-024-02008-9
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12078359/pdf/

Abstract: In developing nations, the implementation of Foundational Identity Systems (FIDS) has improved service delivery and supported inclusive economic growth. Cross-border e-government will gain traction as developing countries increasingly look to identity federation and trustworthy interoperability through FIDS for the identification and authentication of identity holders. Despite this potential, the interoperability of FIDS in the African identity ecosystem has not been well studied. The difficulties include intricate internal political dynamics that have produced weak institutions, suggesting that FIDS could be used for political purposes, and identity holders' habitually low trust in government, which raises concerns about data security and privacy protection. Vendor lock-in, cross-system compatibility, and ambiguous legislative rules for data exchange are further concerns. Interoperability is a fundamental precondition for e-government services and the foundation for effective service delivery in social security, education, finance, and gender equality, as demonstrated by the European Union (EU). Moreover, unified data sharing via an interoperable identity system would integrate cross-border FIDS and create an ecosystem of effective data governance. In this study, we point to the challenges, opportunities, and requirements for cross-border interoperability in an African setting. We also investigate current interoperability solutions such as the EU's eIDAS and the Estonian X-Road and propose an approach for scoping requirements to achieve a fully functional interoperable identity ecosystem in the African setting. Our findings show that interoperability in the African identity ecosystem is essential for expanding the scope of e-government throughout the continent and for bolstering the smooth authentication and verification of identity holders for inclusive economic growth.
We need better images of AI and better conversations about AI
Marc Steen, Tjerk Timan, Jurriaan Van Diggelen, Steven Vethman
AI & Society, 40(5): 3615–3626. Published 2025-01-01 (Epub 2024-10-29). DOI: 10.1007/s00146-024-02101-z
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12152090/pdf/

Abstract: In this article, we critique the ways in which people involved in the development and application of AI systems visualize and talk about those systems: often as shiny humanoid robots or as free-floating electronic brains. Such images convey misleading messages, as if AI works independently of people and can reason in ways superior to people. Instead, we propose to visualize AI systems as parts of larger, sociotechnical systems; here, we can learn, for example, from cybernetics. Similarly, we propose that the people involved in the design and deployment of an algorithm extend their conversations beyond the four boxes of the Error Matrix, for example to critically discuss false positives and false negatives. We present two thought experiments, each with a practical example. We propose to understand, visualize, and talk about AI systems in relation to a larger, complex reality; this is the requirement of requisite variety. We also propose to enable people from diverse disciplines to collaborate around boundary objects, for example a drawing of an AI system in its sociotechnical context, or an 'extended' Error Matrix. Such interventions can promote meaningful human control, transparency, and fairness in the design and deployment of AI systems.
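As an editorial aside (not part of the paper itself): the "Error Matrix" the abstract refers to is the standard confusion matrix from binary classification. A minimal sketch of its four boxes, including the false positives and false negatives the authors argue deserve explicit discussion:

```python
# Minimal confusion-matrix ("Error Matrix") sketch: the four boxes the
# abstract says design conversations should extend beyond.
def error_matrix(y_true, y_pred):
    """Count true/false positives/negatives for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {"TP": tp, "TN": tn, "FP": fp, "FN": fn}

# Toy data, invented for illustration.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
# False positives and false negatives carry different real-world harms,
# which is why the authors argue they merit discussion beyond the counts.
print(error_matrix(y_true, y_pred))  # {'TP': 3, 'TN': 3, 'FP': 1, 'FN': 1}
```

The paper's point is precisely that these four numbers, on their own, hide the sociotechnical context in which each kind of error lands.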
Adaptable robots, ethics, and trust: a qualitative and philosophical exploration of the individual experience of trustworthy AI
Stephanie Sheir, Arianna Manzini, Helen Smith, Jonathan Ives
AI & Society, 40(3): 1735–1748. Published 2025-01-01 (Epub 2024-04-23). DOI: 10.1007/s00146-024-01938-8
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11985637/pdf/

Abstract: Much has been written about the need for trustworthy artificial intelligence (AI), but the underlying meanings of trust and trustworthiness can vary or be used in confusing ways. It is not always clear whether individuals are speaking of a technology's trustworthiness, a developer's trustworthiness, or simply of gaining the trust of users by any means. In sociotechnical circles, trustworthiness is often used as a proxy for 'the good', illustrating the moral heights to which technologies and developers ought to aspire, at times with a multitude of diverse requirements and at other times with no specification at all. In philosophical circles, there is doubt that the concept of trust should be applied to technologies at all, rather than to their human creators. Nevertheless, people continue to intuitively reason about trust in technologies in their everyday language. This qualitative study employed an empirical ethics methodology, using a series of interviews, to address how developers and users define and construct requirements for trust throughout development and use. We found that different accounts of trust (rational, affective, credentialist, norms-based, relational) served as the basis for individuals' granting of trust in technologies and operators. Ultimately, the most significant requirement for user trust and assessment of trustworthiness was the accountability of AI developers for the outputs of AI systems, hinging on the identification of accountable moral agents and perceived value alignment between the user's and developer's interests.
Redefining intelligence: collaborative tinkering of healthcare professionals and algorithms as hybrid entity in public healthcare decision-making
Roanne van Voorst
AI & Society, 40(5): 3237–3248. Published 2025-01-01 (Epub 2025-01-09). DOI: 10.1007/s00146-024-02177-7
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12152098/pdf/

Abstract: This paper analyzes the collaboration between healthcare professionals and algorithms in making decisions within the realm of public healthcare. By extending the concept of 'tinkering' from previous research by philosopher Mol (Care in practice: On tinkering in clinics, homes and farms, Amsterdam, 2010) and anthropologist Pols (Health Care Anal 18: 374–388, 2009), who highlighted the improvisational and adaptive practices of healthcare professionals, this paper reveals that in the context of digitalizing healthcare, both professionals and algorithms engage in what I call 'collaborative tinkering' as they navigate the intricate and unpredictable nature of healthcare situations together. The paper draws upon an idea that is increasingly common in academic literature, namely that healthcare professionals and the algorithms they use can form a hybrid decision-making entity, challenging the conventional notion of agency and intelligence as being exclusively confined to individual humans or machines. Drawing upon an international, ethnographic study conducted in different hospitals around the world, the paper describes empirically how humans and algorithms come to decisions together, making explicit how, in the practice of daily work, agency and intelligence are distributed among a range of actors, including humans, technologies, knowledge resources, and the spaces where they interact. The concept of collaborative tinkering helps to make explicit how both healthcare professionals and algorithms engage in adaptive improvisation. This exploration not only enriches the understanding of collaborative dynamics between humans and AI but also problematizes the individualistic conception of AI that still exists in regulatory frameworks. By introducing empirical specificity through ethnographic insights and employing an anthropological perspective, the paper calls for a critical reassessment of current ethical and policy frameworks governing human–AI collaboration in healthcare, thereby illuminating direct implications for the future of AI ethics in medical practice.
"I spend more time on the ecosystem than on the disease": caring for the communicative loop with everyday ADM technology through maintenance and modification work
Sne Scott Hansen, Henriette Langstrup
AI & Society, 40(5): 3707–3719. Published 2025-01-01 (Epub 2024-11-15). DOI: 10.1007/s00146-024-02109-5
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12152222/pdf/

Abstract: Automated decision-making (ADM) systems can be worn in and on the body for various purposes, such as tracking and managing chronic conditions. One case in point is do-it-yourself open-source artificial pancreas systems, through which users engage in what is referred to as "looping": combining continuous glucose monitors and insulin pumps placed on the body with digital communication technologies to develop an ADM system for personal diabetes management. The idea behind these personalized systems is to delegate decision-making regarding insulin to an algorithm that can make autonomous decisions. Based on interviews and photo diaries with Danish "loopers", this paper highlights two interrelated narratives of how users have to care for the loop: by maintaining a stable communication circuit between body and ADM system, and by modifying the loop through analysis and reflection. It shows how the human takes turns with the ADM system, through practical doings and anticipation, to safeguard continuous management of chronic disease.
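As an editorial aside (not part of the paper): the "loop" the abstract describes is a feedback circuit from sensor reading to algorithmic decision to pump action. A purely pedagogical sketch of that circuit, with an invented proportional dosing rule; real DIY artificial pancreas systems use far more sophisticated, safety-checked algorithms, and every number here is hypothetical:

```python
# Toy sketch of the "looping" feedback circuit: CGM reading -> algorithmic
# decision -> pump action. Illustrative only; NOT a real dosing algorithm.
def dose_decision(glucose_mgdl, target=110, sensitivity=50):
    """Invented proportional rule: dose scales with excess over target."""
    if glucose_mgdl <= target:
        return 0.0  # this sketch never doses at or below target
    return round((glucose_mgdl - target) / sensitivity, 2)

readings = [100, 140, 180]                    # simulated CGM samples
doses = [dose_decision(g) for g in readings]  # what the "loop" would decide
print(doses)  # [0.0, 0.6, 1.4]
```

The maintenance and modification work the paper describes sits around exactly this circuit: keeping the sensor-to-algorithm-to-pump channel alive, and tuning parameters like the target and sensitivity over time.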
From liability gaps to liability overlaps: shared responsibilities and fiduciary duties in AI and other complex technologies
Bart Custers, Henning Lahmann, Benjamyn I Scott
AI & Society, 40(5): 4035–4050. Published 2025-01-01 (Epub 2025-01-11). DOI: 10.1007/s00146-024-02137-1
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12152026/pdf/

Abstract: Complex technologies such as Artificial Intelligence (AI) can cause harm, raising the question of who is liable for the harm caused. Research has identified multiple liability gaps (i.e., unsatisfactory outcomes when applying existing liability rules) in legal frameworks. In this paper, the concepts of shared responsibilities and fiduciary duties are explored as avenues to address liability gaps. The development, deployment and use of complex technologies are not clearly distinguishable stages, as often suggested, but are processes of cooperation and co-creation. At the intersections of these stages, shared responsibilities and fiduciary duties of multiple actors can be observed. Although none of the actors have complete control or a complete overview, many actors have some control or influence and, therefore, responsibilities based on fault, prevention or benefit. Shared responsibilities and fiduciary duties can turn liability gaps into liability overlaps. These concepts could be implemented in tort and contract law by amending existing law (e.g., by assuming that all stakeholders are liable unless they can prove they did not owe a duty of care) and by creating more room for partial liability reflecting partial responsibilities (e.g., a responsibility to signal or identify an issue without a corresponding responsibility to solve that issue). This approach better aligns legal liabilities with responsibilities, increases legal certainty, and increases cooperation and understanding between actors, improving the quality and safety of technologies. However, it may not solve all liability gaps, may have chilling effects on innovation, and may require further detailing through case law.
AI through the looking glass: an empirical study of structural social and ethical challenges in AI
Mark Ryan, Nina de Roo, Hao Wang, Vincent Blok, Can Atik
AI & Society, 40(5): 3891–3907. Published 2025-01-01 (Epub 2024-12-15). DOI: 10.1007/s00146-024-02146-0
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12152039/pdf/

Abstract: This paper examines how professionals (N = 32) working on artificial intelligence (AI) view structural AI ethics challenges, such as injustices and inequalities beyond individual agents' direct intention and control. It answers the research question: what are professionals' perceptions of the structural challenges of AI (in the agri-food sector)? This empirical paper shows that it is essential to broaden the scope of the ethics of AI beyond the micro- and meso-levels. While ethics guidelines and AI ethics often focus on the responsibility of designers and on the competencies and skills designers need to take up that responsibility, our results show that many structural challenges are beyond their reach. This means that while ethics guidelines and AI ethics frameworks are helpful, there is a risk that they overlook more complicated, nuanced, and intersecting structural challenges. In addition, it highlights the need to include diverse stakeholders, such as quadruple helix (QH) participants, in discussions around AI ethics, rather than focusing solely on the obligations of AI developers and companies. Overall, this paper demonstrates that addressing structural challenges in AI is difficult and requires an approach that meets four requirements: it must be (1) multi-level, (2) multi-faceted, (3) interdisciplinary, and (4) grounded in polycentric governance.