AI & Society | Pub Date: 2024-03-20 | DOI: 10.1007/s00146-024-01900-8
Lucas Freund
“Beyond the physical self: understanding the perversion of reality and the desire for digital transcendence via digital avatars in the context of Baudrillard’s theory”
AI & Society, 40(2), pp. 859–875
Abstract: This paper explores the perversion of reality in the context of advanced technologies, such as AI, VR, and AR, through the lens of Jean Baudrillard’s theory of hyperreality and the precession of simulacra. By examining the transformative effects of these technologies on our perception of reality, with a particular focus on the use of digital avatars, the paper highlights the blurred distinction between the real and the simulated, where the copy becomes more ‘real’ than the original. Drawing on Baudrillard’s concept of hyperreality, the paper delves into the perversion of reality as individuals seek refuge in virtual pleasure paradises and embrace artificial pleasures through their digital avatars, disconnecting from genuine human experiences. The convergence of AI, VR, and AR technologies amplifies this hyperreal condition, where digital avatars mimic or surpass the depth of human relationships, challenging our understanding of what is real. In line with Baudrillard’s theory, the paper explores the objectification and commodification of reality within digital spaces, specifically examining the role of digital avatars in the erosion of genuine human connections. It explores the implications of these avatars in terms of consent, exploitation, and loss of authenticity, echoing Baudrillard’s concerns about the distortion of reality in contemporary society. Recognizing the implications of these technologies, the paper calls for critical reflection on their transformative power. It emphasizes the need for a nuanced understanding of the hyperreal condition and ethical responsibility in engaging with AI, VR, and AR, particularly in relation to the use of digital avatars. By resisting the seductive allure of digital escapism and preserving genuine human connections, we can navigate the perversion of reality and cultivate empathy, compassion, and meaningful interactions that transcend the simulated experiences offered by technology.

AI & Society | Pub Date: 2024-03-17 | DOI: 10.1007/s00146-024-01891-6
David Guile, Jelena Popov
“Machine learning and human learning: a socio-cultural and -material perspective on their relationship and the implications for researching working and learning”
AI & Society, 40(2), pp. 325–338
Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01891-6.pdf
Abstract: The paper adopts an inter-theoretical socio-cultural and -material perspective on the relationship between human learning (HL) and machine learning (ML) to propose a new way to investigate the human + machine assistive assemblages emerging in professional work (e.g. medicine, architecture, design and engineering). Its starting point is Hutchins’s (1995a) concept of ‘distributed cognition’ and his argument that his concept of ‘cultural ecosystems’ constitutes a unit of analysis for investigating collective human + machine working and learning (Hutchins, Philos Psychol 27:39–49, 2013). It argues that: (i) the former offers a way to reveal the cultural constitution and enactment of human + machine cognition and, in the process, the limitations of the computational and connectionist assumptions about learning that underpin, respectively, good old-fashioned AI and deep learning; and (ii) the latter, when amplified with insights from Socio-Materialism and Cultural-Historical Activity Theory, offers a way to identify how ML is further rearranging and reorganising the distributed basis of cognition in assistive assemblages. The paper concludes by outlining a set of conjectures that researchers could use to guide their investigations into the ongoing design and deployment of HL + ML assemblages and the challenges associated with the interaction between HL and ML.

AI & Society | Pub Date: 2024-03-16 | DOI: 10.1007/s00146-024-01894-3
Andrea Slane, Isabel Pedersen
“Bringing older people’s perspectives on consumer socially assistive robots into debates about the future of privacy protection and AI governance”
AI & Society, 40(2), pp. 691–710
Abstract: A growing number of consumer technology companies are aiming to convince older people that humanoid robots make helpful tools to support aging-in-place. As hybrid devices, socially assistive robots (SARs) are situated between health monitoring tools, familiar digital assistants, security aids, and more advanced AI-powered devices. Consequently, they implicate older people’s privacy in complex ways. Such devices are marketed to perform functions common to smart speakers (e.g., Amazon Echo) and smart home platforms (e.g., Google Home), while other functions are more specific to older people, including health and safety monitoring and serving as companions to mitigate loneliness. Privacy is a key value in debates about the ethics of using SARs in aged care, yet there has been very little interchange between these debates and the robust theoretical discussion in the legal literature about the future of privacy and AI governance. Drawing on two qualitative studies of older people’s views on consumer SARs, the paper contributes novel findings about older people’s thinking on privacy and data governance at the intersection of their experiences with present-day digital technologies and projections for future AI systems, and places their views in dialogue with debates about the future of privacy protection and AI governance.

AI & Society | Pub Date: 2024-03-16 | DOI: 10.1007/s00146-024-01886-3
Teresa Scantamburlo, Joachim Baumann, Christoph Heitz
“On prediction-modelers and decision-makers: why fairness requires more than a fair prediction model”
AI & Society, 40(2), pp. 353–369
Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01886-3.pdf
Abstract: An implicit ambiguity in the field of prediction-based decision-making concerns the relation between the concepts of prediction and decision. Much of the literature in the field tends to blur the boundaries between the two concepts and often simply refers to ‘fair prediction’. In this paper, we point out that differentiating these concepts is helpful when trying to implement algorithmic fairness. Even if fairness properties are related to the features of the prediction model used, what is properly called ‘fair’ or ‘unfair’ is a decision system, not a prediction model, because fairness is about the consequences for human lives created by a decision, not by a prediction. We clarify the distinction between the concepts of prediction and decision and show the different ways in which these two elements influence the final fairness properties of a prediction-based decision system. As well as discussing this relationship from both a conceptual and a practical point of view, we propose a framework that enables a better understanding of, and reasoning about, the conceptual logic of creating fairness in prediction-based decision-making. In our framework, we specify two roles, the ‘prediction-modeler’ and the ‘decision-maker’, and the information required from each of them to implement fairness in the system. Our framework allows us to derive distinct responsibilities for both roles and to discuss insights related to ethical and legal requirements. Our contribution is twofold. First, we offer a new perspective, shifting the focus from an abstract concept of algorithmic fairness to the concrete, context-dependent nature of algorithmic decision-making, where different actors exist, can have different goals, and may act independently. Second, we provide a conceptual framework that can help structure prediction-based decision problems with respect to fairness issues, identify responsibilities, and implement fairness governance mechanisms in real-world scenarios.

AI & Society | Pub Date: 2024-03-15 | DOI: 10.1007/s00146-024-01889-0
Bojan Obrenovic, Xiao Gu, Guoyu Wang, Danijela Godinic, Ilimdorjon Jakhongirov
“Generative AI and human–robot interaction: implications and future agenda for business, society and ethics”
AI & Society, 40(2), pp. 677–690
Abstract: The revolution of artificial intelligence (AI), particularly generative AI, and its implications for human–robot interaction (HRI) have opened up debate on crucial regulatory, business, societal, and ethical considerations. This paper explores essential issues from an anthropomorphic perspective, examining the complex interplay between humans and AI models in societal and corporate contexts. We provide a comprehensive review of the existing literature on HRI, with special emphasis on the impact of generative models such as ChatGPT. The scientometric study posits that, owing to their advanced linguistic capabilities and ability to mimic human-like behavior, generative AIs such as ChatGPT will continue to grow in popularity, reinforced by humans’ rational empathy and tendency toward personification. As they blur the boundaries between humans and robots, these models introduce fresh moral and philosophical dilemmas. Our research aims to extrapolate key trends and unique factors in HRI and to elucidate the technical aspects of generative AI that enhance its effectiveness in this field compared to traditional rule-based AI systems. We further discuss the challenges and limitations of applying generative AI in HRI, providing a future research agenda for AI optimization in diverse applications, including education, entertainment, and healthcare.

AI & Society | Pub Date: 2024-03-11 | DOI: 10.1007/s00146-024-01877-4
Jens Christian Bjerring, Jacob Busch
“Artificial intelligence and identity: the rise of the statistical individual”
AI & Society, 40(2), pp. 311–323
Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01877-4.pdf
Abstract: Algorithms are used across a wide range of societal sectors such as banking, administration, and healthcare to make predictions that impact on our lives. While the predictions can be incredibly accurate about our present and future behavior, there is an important question about how these algorithms in fact represent human identity. In this paper, we explore this question and argue that machine learning algorithms represent human identity in terms of what we shall call the ‘statistical individual’. This statisticalized representation of individuals, we shall argue, differs significantly from our ordinary conception of human identity, which is tightly intertwined with considerations about biological, psychological, and narrative continuity—as witnessed by our most well-established philosophical views on personal identity. Indeed, algorithmic representations of individuals give no special attention to biological, psychological, and narrative continuity and instead rely on predictive properties that significantly exceed and diverge from those we would ordinarily take to be relevant to questions about who we are.

AI & Society | Pub Date: 2024-03-10 | DOI: 10.1007/s00146-024-01885-4
Charles Shaaba Saba, Nara Monkam
“Leveraging the potential of artificial intelligence (AI) in exploring the interplay among tax revenue, institutional quality, and economic growth in the G-7 countries”
AI & Society, 40(2), pp. 653–675
Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01885-4.pdf
Abstract: Given the G-7 countries’ commitment to United Nations Sustainable Development Goal 8, which focuses on sustainable economic growth, there is a need to investigate the impact of tax revenue and institutional quality on economic growth, considering the role of artificial intelligence (AI) in the G-7 countries from 2012 to 2022. The Cross-Sectional Augmented Autoregressive Distributed Lag (CS-ARDL) technique is used to analyze the data. The findings indicate a long-run equilibrium relationship among the variables under examination; the causality results are variously bidirectional, unidirectional, or indicative of no causality. Based on the CS-ARDL results, the study recommends that G-7 governments and policymakers prioritize and strengthen the integration of AI into their institutions to stimulate growth in both the short and the long term. However, the study cautions against overlooking the interaction between AI and tax revenue, as this interaction did not show a supportive effect on economic growth. While the interaction between AI and institutional quality shows potential for contributing to growth, it is crucial to implement robust measures to mitigate any negative effects that may arise from AI’s interaction with tax systems. The study therefore suggests the development of AI-friendly tax policies within the G-7 countries, given the nascent state of the AI sector.

AI & Society | Pub Date: 2024-03-07 | DOI: 10.1007/s00146-024-01882-7
Sarah Pink, Emma Quilty, John Grundy, Rashina Hoda
“Trust, artificial intelligence and software practitioners: an interdisciplinary agenda”
AI & Society, 40(2), pp. 639–652
Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01882-7.pdf
Abstract: Trust and trustworthiness are central concepts in contemporary discussions about the ethics of and qualities associated with artificial intelligence (AI) and the relationships between people, organisations and AI. In this article we develop an interdisciplinary approach, using socio-technical software engineering and design anthropological approaches, to investigate how trust and trustworthiness concepts are articulated and performed by AI software practitioners. We examine how trust and trustworthiness are defined in relation to AI across these disciplines, and investigate how AI, trust and trustworthiness are conceptualised and experienced through an ethnographic study of the work practices of nine practitioners in the software industry. We present key implications of our findings for the generation of trust and trustworthiness and for the training and education of future software practitioners.

AI & Society | Pub Date: 2024-03-01 | DOI: 10.1007/s00146-024-01862-x
Oliver Bown
“Blind search and flexible product visions: the sociotechnical shaping of generative music engines”
AI & Society, 40(2), pp. 585–603
Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01862-x.pdf
Abstract: Amidst the surge in AI-oriented commercial ventures, music is a site of intensive efforts to innovate. A number of companies are seeking to apply AI to music production and consumption, and amongst them several are seeking to reinvent the music listening experience as adaptive, interactive, functional and infinitely generative. These are bold objectives, with no clear roadmap for what designs, technologies and use cases, if any, will be successful. Thus each company relies on speculative product visions. Through four case studies of such companies, I consider how product visions must carefully provide a clear plan for developers and investors whilst also remaining open to agile user-centred product development strategies, which I discuss in terms of the ‘blind search’ nature of innovation. I suggest that innovation in this area needs to be understood in terms of technological emergence, which is neither technologically determinist nor driven entirely by the visions of founders, but arises through a complex of interacting forces. I also consider, through these cases, how, through the accumulation of residual value, all such start-up work risks being exapted for more familiar extractive capitalist agendas under the general process that Doctorow calls “enshittification”. Lastly, I consider a number of more specific ways in which these projects, should they achieve growth, could influence music culture more broadly.

AI & Society | Pub Date: 2024-02-28 | DOI: 10.1007/s00146-024-01875-6
Giorgia Pozzi, Juan M. Durán
“From ethics to epistemology and back again: informativeness and epistemic injustice in explanatory medical machine learning”
AI & Society, 40(2), pp. 299–310
Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-024-01875-6.pdf
Abstract: In this paper, we discuss epistemic and ethical concerns brought about by machine learning (ML) systems implemented in medicine. We begin by fleshing out the logic underlying a common approach in the specialized literature (which we call the ‘informativeness account’). We maintain that the informativeness account limits its analysis to the impact of epistemological issues on ethical concerns without assessing the bearing that ethical features have on the epistemological evaluation of ML systems. We argue that according to this methodological approach, epistemological issues are instrumental to and autonomous of ethical considerations. This means that the informativeness account treats epistemological evaluation as uninfluenced and unregulated by an ethical counterpart. Using an example that does not square well with the informativeness account, we argue for ethical assessments that have a substantial influence on the epistemological assessment of ML, and we contend that such influence should be understood not as merely informative but as regulatory. Drawing on the case analyzed, we claim that within the theoretical framework of the informativeness approach, forms of epistemic injustice—especially epistemic objectification—remain unaddressed. Our analysis should motivate further research investigating the regulatory role that ethical elements play in the epistemology of ML.