AI & Society 40(2): 257-261. Pub Date: 2025-03-01. DOI: 10.1007/s00146-025-02267-0
Karamjit S. Gill
"The end of AI innocence: genie is out of the bottle"
[No abstract available.]

AI & Society 40(3): 1735-1748. Pub Date: 2025-01-01 (Epub 2024-04-23). DOI: 10.1007/s00146-024-01938-8
Stephanie Sheir, Arianna Manzini, Helen Smith, Jonathan Ives
"Adaptable robots, ethics, and trust: a qualitative and philosophical exploration of the individual experience of trustworthy AI"
Abstract: Much has been written about the need for trustworthy artificial intelligence (AI), but the underlying meaning of trust and trustworthiness can vary or be used in confusing ways. It is not always clear whether individuals are speaking of a technology's trustworthiness, of a developer's trustworthiness, or simply of gaining the trust of users by any means. In sociotechnical circles, trustworthiness is often used as a proxy for 'the good', illustrating the moral heights to which technologies and developers ought to aspire, at times with a multitude of diverse requirements, at other times with no specification at all. In philosophical circles, there is doubt that the concept of trust should be applied to technologies at all, rather than to their human creators. Nevertheless, people continue to reason intuitively about trust in technologies in their everyday language. This qualitative study employed an empirical ethics methodology to address, through a series of interviews, how developers and users define and construct requirements for trust throughout development and use. We found that different accounts of trust (rational, affective, credentialist, norms-based, relational) served as the basis for individuals' granting of trust in technologies and operators. Ultimately, the most significant requirement for user trust and assessment of trustworthiness was the accountability of AI developers for the outputs of AI systems, hinging on the identification of accountable moral agents and perceived value alignment between the user's and the developer's interests.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11985637/pdf/

AI & Society 40(4): 2285-2306. Pub Date: 2024-12-04. DOI: 10.1007/s00146-024-02132-6
Ayşe Aslı Bozdağ
"The AI-mediated intimacy economy: a paradigm shift in digital interactions"
Abstract: This article critically examines the paradigm shift from the attention economy to the intimacy economy: a market system where personal and emotional data are exchanged for customized experiences that cater to individual emotional and psychological needs. It explores how AI transforms these personal and emotional inputs into services, thereby raising essential questions about the authenticity of digital interactions and the potential commodification of intimate experiences. The study delineates the roles of human-computer interaction and AI in deepening personal connections, significantly impacting emotional dynamics, and underscores AI's role in various applications, from healthcare to grief tech, highlighting both enhancements in emotional connections and potential disruptions to genuine human interactions. An AI-mediated framework (AMIE) is introduced to assess how AI reshapes these connections and the overall digital society through personalized interactions. This framework explores the interplay between human emotions and AI-generated responses within the new Avatar Sphere, emphasizing the necessity for regulatory measures to safeguard digital identities, recognize emotional data as intellectual property, and maintain system transparency. It highlights the critical need for maintaining genuine human interactions and advocates for context-aware consent, continuous monitoring, and cross-cultural considerations to foster ethical AI practices. Leveraging blockchain technology and decentralized autonomous organizations, the framework proposes methods to enhance individual control over emotional data, mitigating the risks of commodification. The findings contribute to ongoing discussions on AI ethics, digital privacy, and the future of human-AI interactions, providing valuable insights for cultivating a responsible intimacy economy.

AI & Society 40(4): 2273-2283. Pub Date: 2024-12-04. DOI: 10.1007/s00146-024-02106-8
Rush T. Stewart
"The ideals program in algorithmic fairness"
Abstract: I consider statistical criteria of algorithmic fairness from the perspective of the ideals of fairness to which these criteria are committed. I distinguish and describe three theoretical roles such ideals might play. The usefulness of this program is illustrated by taking Base Rate Tracking and its ratio variant as a case study. I identify and compare the ideals of these two criteria, then consider them in each of the aforementioned three roles for ideals. This ideals program may present a way forward in the normative evaluation of candidate statistical criteria of algorithmic fairness.

AI & Society 40(4): 2307-2317. Pub Date: 2024-12-04. DOI: 10.1007/s00146-024-02144-2
Masoumeh Mansouri, Henry Taylor
"A culture of their own? Culture in robot-robot interaction"
Abstract: This paper presents a framework for studying culture in the context of robot-robot interaction (RRI). We examine the claim that groups of robots can share a culture, even independently of their relationship with humans. At the centre of our framework is a recognition that 'culture' is a concept that can be defined and understood in many different ways. As we demonstrate, which definition of 'culture' one employs has important consequences for the question of whether groups of robots can have their own culture, and what kind of culture they can have. We suggest that this argument has important consequences for robotics from an ethical and legal perspective.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-02144-2.pdf

AI & Society 40(4): 2259-2271. Pub Date: 2024-12-02. DOI: 10.1007/s00146-024-02149-x
Abdul Rohman, Diem-Trang Vo
"'To us, it is still foreign': AI and the disabled in the Global South"
Abstract: Although AI technologies can reportedly address accessibility issues, and their risks have been documented, debates around AI have left developing countries and people with disabilities (PwDs) behind. Despite the global marketization of AI technologies, the understanding of AI and disability in developing countries in the Global South remains scant. Through semi-structured interviews with key personnel of disabled people's organizations in Indonesia and Vietnam, this study found that a pocket of the disabled community viewed AI as formidable but foreign, owing to a persistent information void within that community. AI potentially magnifies the existing bias against the disabled, yet their unique features and lived experiences are irreplaceable by AI. The findings call for attention from developers, activists, and policy makers in emerging markets: the benefits of AI have reached wider audiences, but PwDs, and the risks that AI-human interactions pose to them, have been only narrowly discussed in Southeast Asia (SEA).

AI & Society 40(4): 2249-2257. Pub Date: 2024-12-01. DOI: 10.1007/s00146-024-02141-5
Angela M. Cirucci, Miles Coleman, Dan Strasser, Evan Garaizar
"Culturally responsive communication in generative AI: looking at ChatGPT's advice for coming out"
Abstract: Generative AI has captured the public imagination as a tool that promises access to expertise beyond the technical jargon and expense that traditionally characterize such infospheres as those of medicine and law. Largely absent from the current literature, however, are interrogations of generative AI's abilities to deal in culturally responsive communication, that is, expertise interwoven with culturally aware, socially responsible, and personally sensitive communication best practices. To interrogate the possibilities of cultural responsiveness in generative AI, we examine the patterns of response that characterize ChatGPT-3.5's advice for coming out. Specifically, we submitted 100 prompts soliciting coming-out advice to GPT-3.5, varying each prompt slightly to account for intersectional identities. From the analysis, we find that, while the responses are largely in line with best practices, there are also instances that may raise problems concerning the interpellation of the user or of the persons to whom one is coming out.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-02141-5.pdf

AI & Society 40(4): 2237-2248. Pub Date: 2024-11-30. DOI: 10.1007/s00146-024-02135-3
Ethan Landes, Cristina Voinea, Radu Uszkai
"Rage against the authority machines: how to design artificial moral advisors for moral enhancement"
Abstract: This paper aims to clear up the epistemology of learning morality from artificial moral advisors (AMAs). We start with a brief consideration of what counts as moral enhancement and consider the risk of deskilling raised by machines that offer moral advice. We then shift focus to the epistemology of moral advice and show when and under what conditions moral advice can lead to enhancement. We argue that people's motivational dispositions are enhanced by inspiring people to act morally, rather than merely telling them how to act. Drawing upon these insights, we claim that if AMAs are to genuinely enhance people morally, they should be designed as inspiration machines, not authority machines. In the final section, we evaluate existing AMA models to shed light on which holds the most promise for helping to make users better moral agents.
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-02135-3.pdf