AI & Society | Pub Date: 2025-03-17 | DOI: 10.1007/s00146-025-02207-y
Petra Jääskeläinen, Nickhil Kumar Sharma, Helen Pallett, Cecilia Åsberg
{"title":"Intersectional analysis of visual generative AI: the case of stable diffusion","authors":"Petra Jääskeläinen, Nickhil Kumar Sharma, Helen Pallett, Cecilia Åsberg","doi":"10.1007/s00146-025-02207-y","DOIUrl":"10.1007/s00146-025-02207-y","url":null,"abstract":"<div><p>Since 2022, Visual Generative AI (vGenAI) tools have experienced rapid adoption and garnered widespread acclaim for their ability to produce high-quality images with convincing photorealistic representations. These technologies mirror society’s prevailing visual politics in a mediated form, and actively contribute to the perpetuation of deeply ingrained assumptions, categories, values, and aesthetic representations. In this paper, we critically analyze Stable Diffusion (SD), a widely used open-source vGenAI tool, through visual and intersectional analysis. Our analysis covers: <i>(1) the aesthetics of the AI-generated visual material, (2) the institutional contexts in which these images are situated and produced, and (3) the intersections between power systems such as racism, colonialism, and capitalism</i>—which are both reflected and perpetuated through the visual aesthetics. Our visual analysis of 180 SD-generated images deliberately sought to produce representations along different lines of privilege and disadvantage—such as wealth/poverty or citizen/immigrant—drawing from feminist science and technology studies, visual media studies, and intersectional critical theory. We demonstrate how imagery produced through SD perpetuates pre-existing power systems such as sexism, racism, heteronormativity, and ableism, and assumes a default individual as white, able-bodied, and masculine-presenting. Furthermore, we problematize the hegemonic cultural values in the imagery that can be traced to the institutional context of these tools, particularly in the tendency towards Euro- and North America-centric cultural representations.
Finally, we find that the power systems around SD result in the continual reproduction of harmful and violent imagery through technology, challenging the oft-underlying notion that vGenAI is culturally and aesthetically neutral. Based on the harms identified through our qualitative, interpretative analysis, we bring forth a reparative and social justice-oriented approach to vGenAI—including the need for acknowledging and rendering visible the cultural-aesthetic politics of this technology and engaging in reparative approaches that aim to symbolically and materially mend injustices enacted against social groups.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 6","pages":"4341 - 4362"},"PeriodicalIF":4.7,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02207-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144909750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
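The sampling design described in this abstract — generating image prompts in pairs along axes of privilege and disadvantage such as wealth/poverty or citizen/immigrant — can be sketched in a few lines. This is a minimal illustration only: the axis labels, prompt wording, and function names below are assumptions, not the paper's actual prompts.

```python
# Hypothetical sketch of paired-prompt construction along privilege axes.
# Axis labels and wording are illustrative, not taken from the paper.
AXES = {
    "wealth": ("a wealthy person", "a poor person"),
    "citizenship": ("a citizen", "an immigrant"),
}
STYLE = "photorealistic portrait"

def build_prompt_pairs(axes, style):
    """Return (axis, privileged_prompt, disadvantaged_prompt) triples."""
    return [
        (axis, f"{style} of {privileged}", f"{style} of {disadvantaged}")
        for axis, (privileged, disadvantaged) in axes.items()
    ]

pairs = build_prompt_pairs(AXES, STYLE)
```

Each resulting prompt pair would then be fed to the image generator, and the two output sets compared qualitatively, as the authors do across their 180 images.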
AI & Society | Pub Date: 2025-03-14 | DOI: 10.1007/s00146-025-02247-4
Petr Spelda, Vit Stritecky
{"title":"Security practices in AI development","authors":"Petr Spelda, Vit Stritecky","doi":"10.1007/s00146-025-02247-4","DOIUrl":"10.1007/s00146-025-02247-4","url":null,"abstract":"<div><p>What makes safety claims about general purpose AI systems such as large language models trustworthy? We show that rather than the capabilities of security tools such as alignment and red teaming procedures, it is security practices based on these tools that contributed to reconfiguring the image of AI safety and made the claims acceptable. After showing what causes the gap between the capabilities of security tools and the desired safety guarantees, we critically investigate how AI security practices attempt to fill the gap and identify several shortcomings in diversity and participation. We found that these security practices are part of securitization processes aiming to support (commercial) development of general purpose AI systems whose trustworthiness can only be imperfectly tested instead of guaranteed. We conclude by offering several improvements to the current AI security practices.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 6","pages":"4869 - 4879"},"PeriodicalIF":4.7,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02247-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144909706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society | Pub Date: 2025-03-13 | DOI: 10.1007/s00146-025-02243-8
Miriam Lind
{"title":"Alexa’s agency: a corpus-based study on the linguistic attribution of humanlikeness to voice user interfaces","authors":"Miriam Lind","doi":"10.1007/s00146-025-02243-8","DOIUrl":"10.1007/s00146-025-02243-8","url":null,"abstract":"<div><p>Voice-based, spoken interaction with artificial agents has become a part of everyday life in many countries: artificial voices guide us through our bank’s customer service, Amazon’s Alexa tells us which groceries we need to buy, and we can discuss central motifs in Shakespeare’s work with ChatGPT. Language, which is largely still seen as a uniquely human capacity, is now increasingly produced—or so it appears—by non-human entities, contributing to their perception as being ‘human-like.’ The capacity for language is far from the only prototypically human feature attributed to ‘speaking’ machines; their potential agency, consciousness, and even sentience have been widely discussed in the media. This paper argues that a linguistic analysis of agency (based on semantic roles) and animacy can provide meaningful insights into the sociocultural conceptualisations of artificial entities as humanlike actors. A corpus-based analysis investigates the varying attributions of agency to the voice user interfaces Alexa, Siri, and Google Assistant in German media data. 
The analysis provides evidence for the important role that linguistic anthropomorphisation plays in the sociocultural attribution of agency and consciousness to artificial technological entities, and particularly how the practice of using personal names for these devices contributes to the attribution of humanlikeness: it will be highlighted how Amazon’s Alexa and Apple’s Siri are linguistically portrayed as sentient entities who listen, act, and have a mind of their own, whilst the lack of a personal name renders the Google Assistant much more recalcitrant to anthropomorphism.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 6","pages":"4619 - 4633"},"PeriodicalIF":4.7,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02243-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144909640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
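The core counting idea behind a corpus study of agency attribution — how often a device name occupies the subject position of an agentive verb — can be illustrated on a toy corpus. This is not the paper's pipeline (which works on German media data with semantic-role analysis); it is a dependency-free sketch with invented English sentences and an assumed verb list.

```python
import re
from collections import Counter

# Illustrative only: count how often each assistant name immediately
# precedes an agentive verb, a crude proxy for subject-position agency.
AGENTIVE_VERBS = {"listens", "decides", "wants", "refuses", "answers"}

toy_corpus = [
    "Alexa listens to everything we say.",
    "Siri refuses to answer the question.",
    "Alexa decides which song to play.",
    "The Google Assistant answers when asked.",
]

def agency_counts(sentences, names=("Alexa", "Siri", "Google Assistant")):
    counts = Counter()
    for sentence in sentences:
        for name in names:
            # crude subject test: the name directly followed by a verb
            match = re.search(rf"\b{re.escape(name)}\s+(\w+)", sentence)
            if match and match.group(1).lower() in AGENTIVE_VERBS:
                counts[name] += 1
    return counts

counts = agency_counts(toy_corpus)
```

A real study would replace the regex heuristic with proper parsing and semantic-role labelling, but the resulting per-name frequency comparison is the same shape as the one reported here.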
AI & Society | Pub Date: 2025-03-12 | DOI: 10.1007/s00146-025-02214-z
Kasper Trolle Elmholdt, Jeppe Agger Nielsen, Christoffer Koch Florczak, Roman Jurowetzki, Daniel Hain
{"title":"The hopes and fears of artificial intelligence: a comparative computational discourse analysis","authors":"Kasper Trolle Elmholdt, Jeppe Agger Nielsen, Christoffer Koch Florczak, Roman Jurowetzki, Daniel Hain","doi":"10.1007/s00146-025-02214-z","DOIUrl":"10.1007/s00146-025-02214-z","url":null,"abstract":"<div><p>Artificial intelligence (AI) has captured the interest of multiple actors with speculations about its benefits and dangers. Despite increasing scholarly attention to the discourses of AI, there are limited insights on how different groups interpret and debate AI and shape its opportunities for action. We consider AI an issue field understood as a contested phenomenon where heterogeneous actors assert and debate the meanings and consequences of AI. Drawing on computational social science methods, we analyzed large amounts of text on how politicians (parliamentarians), consultancies (high-reputation firms), and lay experts (AI-forum Reddit users) articulate meanings about AI. Through topic modeling, we identified diverse and co-existing discourses: politicians predominantly articulated AI as a societal issue requiring an ethical response, consultancies stressed AI as a business opportunity pushing a transformation-oriented discourse, and lay experts expressed AI as a technical issue shaping a techno-feature discourse. Moreover, our analysis details the hopes and fears within AI discourses, revealing that sentiment varies by actor group.
Based on these findings, we contribute new insights about AI as an issue field shaped by the discursive work performed by heterogeneous actors.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 6","pages":"4765 - 4782"},"PeriodicalIF":4.7,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02214-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144909638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
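The comparative move in this abstract — contrasting what distinguishes each actor group's vocabulary — can be sketched without a topic-modeling library. The paper uses topic modeling proper (e.g. LDA); as a dependency-free stand-in, the snippet below contrasts per-group word frequencies on invented toy snippets. Corpus contents and group labels are illustrative assumptions.

```python
from collections import Counter

# Toy stand-in for topic modeling: find the terms most distinctive of each
# actor group after removing vocabulary shared by all groups. The snippets
# are invented for illustration, not the paper's data.
corpora = {
    "politicians": "ai ethics regulation society risk ethics public",
    "consultancies": "ai transformation business value opportunity growth",
    "lay_experts": "ai model training code benchmark model gpu",
}

def distinctive_terms(corpora, top_n=2):
    """Top terms per group, excluding words common to every group."""
    tokenized = {group: text.split() for group, text in corpora.items()}
    shared = set.intersection(*(set(tokens) for tokens in tokenized.values()))
    return {
        group: [w for w, _ in Counter(t for t in tokens if t not in shared).most_common(top_n)]
        for group, tokens in tokenized.items()
    }

themes = distinctive_terms(corpora)
```

A topic model additionally discovers latent themes rather than surface words, but the output has the same comparative structure: each group characterized by its own discourse vocabulary.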
AI & Society | Pub Date: 2025-03-12 | DOI: 10.1007/s00146-025-02239-4
Dean Curran, Elizabeth Cameron
{"title":"Metcalfe’s Law and its inversion: digital network expansion and systemic risk","authors":"Dean Curran, Elizabeth Cameron","doi":"10.1007/s00146-025-02239-4","DOIUrl":"10.1007/s00146-025-02239-4","url":null,"abstract":"<div><p>This paper examines contemporary digital insecurity through a critical confrontation with Metcalfe’s Law. Metcalfe’s Law—which states that the value of a network grows proportionally to the square of the size of the network—has been cited as a key reason for the astronomical growth in user base and market values of digital companies. This paper proposes a corresponding tendency alongside Metcalfe’s Law, namely that, as digital networks grow in size, there is a tendency towards a corresponding growth in systemic risk. Building on theories of systemic risk, this paper identifies key factors intensifying systemic risk, including: increasing network size increases the complexity and ‘attack surface’ of a network; increasing network size increases the ‘target-rich’ nature of the network; and the ‘layered’ robustness of the internet infrastructure in cases of cyber-security failures can provide an undamaged carrier of digital systemic risk. This paper then proceeds to show how developments in generative AI threaten to massively amplify the risks of ever-expanding digital networks.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 6","pages":"4575 - 4587"},"PeriodicalIF":4.7,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144909637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
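The quantitative intuition behind Metcalfe's Law, and the paper's inversion of it, can be made concrete: a network of n nodes has n(n-1)/2 possible pairwise links, so "value" grows roughly with n². The paper's point is that every link is also a potential attack path, so exposure scales the same way. The code below is a worked illustration of that arithmetic, not anything from the paper itself.

```python
# Metcalfe's Law arithmetic: possible pairwise connections in a network
# of n nodes is n*(n-1)/2, so value -- and, per the paper's argument,
# attack surface -- grows roughly quadratically in network size.
def pairwise_links(n: int) -> int:
    return n * (n - 1) // 2

# Doubling the network roughly quadruples both connections and exposure.
small, large = pairwise_links(100), pairwise_links(200)
ratio = large / small  # approaches 4.0 for large n
```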
AI & Society | Pub Date: 2025-03-11 | DOI: 10.1007/s00146-025-02240-x
Harry Halpin
{"title":"Artificial intelligence versus collective intelligence","authors":"Harry Halpin","doi":"10.1007/s00146-025-02240-x","DOIUrl":"10.1007/s00146-025-02240-x","url":null,"abstract":"<div><p>The ontological presupposition of artificial intelligence (AI) is the liberal autonomous human subject of Locke and Kant, and the ideology of AI is the automation of this particular conception of intelligence. This is demonstrated in detail in classical AI by the work of Simon, who explicitly connected his work on AI to a wider programme in cognitive science, economics, and politics to perfect capitalism. Although Dreyfus produced a powerful Heideggerian critique of classical AI, work on neural networks in AI was ultimately based on the individual as the locus of intelligence. Yet this conception of AI fails to grasp the essence of large language models, which are a statistical model of human language on the Web. The training data that enables AI is the surveillance and capture of data, where the data creates a model to approximate the entire world. However, there is a more hidden ideology inherent in AI where the goal is not to perfect a model but to control the world. As prompted by an argument between Mead and Bateson, social change is prevented by the application of cybernetics to society as a whole. The goal of AI is not just to replace human beings, but to manage humans to preserve existing power relations. As the source of intelligence in AI is distributed cognition between humans and machines, the alternative to AI is collective intelligence. As theorized by Licklider and Engelbart at the dawn of the Internet, collective intelligence explains how computers weave together both human and non-human intelligence. Rather than replace human intelligence, this produces ever more complex collective forms of intelligence.
Rather than meta-stabilize a society of control, collective intelligence can go outside individualist capitalist ontology by incorporating the open world of the pluriverse, as theorized by Escobar. Collective intelligence then stands as an alternative ontological path for AI which puts intelligence at the service of humanity and the world rather than a technocratic elite.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 6","pages":"4589 - 4604"},"PeriodicalIF":4.7,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02240-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144909862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society | Pub Date: 2025-03-10 | DOI: 10.1007/s00146-025-02219-8
Nashwa Elyamany, Yasser Omar Youssef, Nehal El-karef
{"title":"A trans-disciplinary forensic study of Lil Miquela’s virtual identity performance in Instagram","authors":"Nashwa Elyamany, Yasser Omar Youssef, Nehal El-karef","doi":"10.1007/s00146-025-02219-8","DOIUrl":"10.1007/s00146-025-02219-8","url":null,"abstract":"<div><p>Virtual Influencers (VIs) have become the most prolific research subjects in human–computer interaction and mass media and communication studies from a plethora of perspectives. Developed to integrate social traits and anthropomorphic minds in their social media posts, human-like VIs engage with followers via visually authentic personae, emotionally captivating multimodal storytelling, and semio-pragmatic labor-intensive strategies in conformity with the expectations (and pressures) of the contemporary influencer culture. Informed by Belk’s revisited model of and timely scholarly works on <i>the extended self</i>, we introduce a new conceptualization of the virtual self that performs identity in platformized spaces. To examine virtual personae’s identity performance, we adopt a trans-disciplinary mixed-method forensic netnographic research design, synergizing computer vision, natural language processing, and semio-pragmatic analytical tools. A convenience sample of 334 (sponsored) posts, retrieved from the official Instagram account of the quintessential virtual agent <i>Lil Miquela</i>, is scrutinized taking into consideration her posts’ images and accompanying captions. The paper carries out the tripartite analysis in a serious attempt to unravel: (a) how <i>humanoid</i> her <i>synthesized images</i> appear to the naked eye in quest of authenticity building; (b) the <i>techno-affects</i> that contribute to her identity performance; and (c) the <i>semio-pragmatic affordances</i> appropriated and deployed in Instagrammable spaces, showcasing how the three serve the performance of her digital identity.
The analysis reveals that her agency draws heavily on algorithmization and semiotic immateriality to produce action. The study’s findings contribute to the existing body of literature on VIs and the extended self within the context of artificial intelligence.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 6","pages":"4831 - 4854"},"PeriodicalIF":4.7,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02219-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144909804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society | Pub Date: 2025-03-06 | DOI: 10.1007/s00146-025-02229-6
Adam Nix, Stephanie Decker, David A. Kirsch
{"title":"Conceptualising methodological diversity among born-digital users: insights from the garbage can model","authors":"Adam Nix, Stephanie Decker, David A. Kirsch","doi":"10.1007/s00146-025-02229-6","DOIUrl":"10.1007/s00146-025-02229-6","url":null,"abstract":"<div><p>The benefits of AI technologies in archival preservation are well recognised, though questions remain about their integration into existing processes. AI also shows promise for enhancing user experience and discovery in accessing born-digital materials. However, a limited understanding of the diverse methodological needs surrounding born-digital access risks the creation of one-size-fits-all solutions that suit certain approaches and research questions better than others. This article reviews current efforts in born-digital access and applies the Garbage Can Model from organisation theory to conceptualise the challenge of developing AI-based tools for multiple user types, highlighting the iterative and often decentralised nature of multi-stakeholder decision-making. We address this challenge by creating four born-digital archival user types—the aggregator, the synthesiser, the fact finder, and the narrator—each with distinct motivations and research approaches. 
Finally, we identify some new opportunities for stakeholders to inform how AI-based tools can be developed to better meet the variety of methodological needs that exist in relation to born-digital archives.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 6","pages":"4499 - 4511"},"PeriodicalIF":4.7,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02229-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144909687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
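The four user types proposed in this abstract form a small typology that could be encoded directly as a data structure when designing AI-based archival tools. The sketch below does this; the type names come from the abstract, but the one-line motivation summaries are hypothetical glosses, not the paper's definitions.

```python
from dataclasses import dataclass

# The four type names are from the abstract; the motivation strings are
# assumed one-line glosses for illustration only.
@dataclass(frozen=True)
class UserType:
    name: str
    motivation: str

USER_TYPES = [
    UserType("aggregator", "collects born-digital material at scale for computational analysis"),
    UserType("synthesiser", "draws themes together across many sources"),
    UserType("fact finder", "targets specific records to answer precise questions"),
    UserType("narrator", "builds stories from individual documents in context"),
]

names = [u.name for u in USER_TYPES]
```

A tool-design process could use such a typology to check each proposed AI feature against every user type's needs rather than optimizing for one.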
AI & Society | Pub Date: 2025-03-04 | DOI: 10.1007/s00146-025-02193-1
Sara Elrawy, Bahaa Wagdy
{"title":"Perceptions of generative AI in the architectural profession in Egypt: opportunities, threats, concerns for the future, and steps to improve","authors":"Sara Elrawy, Bahaa Wagdy","doi":"10.1007/s00146-025-02193-1","DOIUrl":"10.1007/s00146-025-02193-1","url":null,"abstract":"<div><p>Generative AI has seen significant advances, particularly in text-to-image, with the potential to revolutionize industries, especially in creative fields such as art and design. This innovation is especially important in architecture, where idea visualization is critical. Text-to-image tools, a form of generative AI, enable architects and designers to visually bring their concepts to life. The study explores the impact of prompt-based AI generation on architecture, asking whether it is enhancing efficiency, creativity, and sustainability or threatening to replace architects. To address concerns about the role of AI in the profession, the research examines the perceptions of architecture professionals in Egypt. The authors conducted a survey and interviews with industry experts to assess the transformative impacts of AI on architecture. The findings reveal a strong awareness of AI's potential to enhance design quality and project outcomes, although some concerns about job prospects and control over AI outputs persist. Small firms view AI as vital for optimizing operations and attracting clients. Overall, AI shows promise in conceptualization and visualization, enhancing creativity and efficiency, with architects needing to adapt to AI as a tool for innovation rather than a competitor. 
Finally, the study proposes a roadmap for improving the use of AI in architecture.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 6","pages":"4235 - 4263"},"PeriodicalIF":4.7,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02193-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144909691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society | Pub Date: 2025-03-01 | DOI: 10.1007/s00146-025-02267-0
Karamjit S. Gill
{"title":"The end of AI innocence: genie is out of the bottle","authors":"Karamjit S. Gill","doi":"10.1007/s00146-025-02267-0","DOIUrl":"10.1007/s00146-025-02267-0","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 2","pages":"257 - 261"},"PeriodicalIF":2.9,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143769676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}