AI & Society | Pub Date: 2024-01-12 | DOI: 10.1007/s00146-023-01835-6
Joshua Shepherd
{"title":"Sentience, Vulcans, and zombies: the value of phenomenal consciousness","authors":"Joshua Shepherd","doi":"10.1007/s00146-023-01835-6","DOIUrl":"10.1007/s00146-023-01835-6","url":null,"abstract":"<div><p>Many think that a specific aspect of phenomenal consciousness—valenced or affective experience—is essential to consciousness’s moral significance (valence sentientism). They hold that valenced experience is necessary for well-being, or moral status, or psychological intrinsic value (or all three). Some think that phenomenal consciousness generally is necessary for non-derivative moral significance (broad sentientism). Few think that consciousness is unnecessary for moral significance (non-necessitarianism). In this paper, I consider the prospects for these views. I first consider the prospects for valence sentientism in light of Vulcans, beings who are conscious but without affect or valence of any sort. I think Vulcans pressure us to accept broad sentientism. But I argue that a consideration of explanations for broad sentientism opens up possible explanations for non-necessitarianism about the moral significance of consciousness. That is, once one leans away from valence sentientism because of Vulcans, one should feel pressure to accept a view on which consciousness is not necessary for well-being, moral status, or psychological intrinsic value.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"3005 - 3015"},"PeriodicalIF":2.9,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01835-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139532292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society | Pub Date: 2024-01-12 | DOI: 10.1007/s00146-023-01834-7
Marc M. Anderson, Karën Fort
{"title":"Evaluating the acceptability of ethical recommendations in industry 4.0: an ethics by design approach","authors":"Marc M. Anderson, Karën Fort","doi":"10.1007/s00146-023-01834-7","DOIUrl":"10.1007/s00146-023-01834-7","url":null,"abstract":"<div><p>In this paper, we present the methodology we used in the European Horizon 2020 AI-PROFICIENT project, to evaluate the implementation of the ethical component of the project. The project is a 3-year collaboration between a university partner and industrial and tech partners, which aims to research the integration of AI services in heavy industry work settings. An AI ethics approach developed for the project has involved embedded ethical analysis of work contexts and design solutions and the generation of specific and evolving ethical recommendations for partners. We have performed an ongoing evaluation and monitoring of the implementation of recommendations. We describe the quantitative results of these implementations: overall, broken down by category, and broken down by category and responsible project partner (anonymized). In parallel, we discuss the results in light of our approach and offer insights for future research into the ground-level application of ethical recommendations for AI in heavy industry.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2989 - 3003"},"PeriodicalIF":2.9,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139532695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society | Pub Date: 2024-01-07 | DOI: 10.1007/s00146-023-01825-8
Patrik Hummel, Matthias Braun, Serena Bischoff, David Samhammer, Katharina Seitz, Peter A. Fasching, Peter Dabrock
{"title":"Perspectives of patients and clinicians on big data and AI in health: a comparative empirical investigation","authors":"Patrik Hummel, Matthias Braun, Serena Bischoff, David Samhammer, Katharina Seitz, Peter A. Fasching, Peter Dabrock","doi":"10.1007/s00146-023-01825-8","DOIUrl":"10.1007/s00146-023-01825-8","url":null,"abstract":"<div><h3>Background</h3><p>Big data and AI applications now play a major role in many health contexts. Much research has already been conducted on ethical and social challenges associated with these technologies. Likewise, there are already some studies that investigate empirically which values and attitudes play a role in connection with their design and implementation. What is still in its infancy, however, is the comparative investigation of the perspectives of different stakeholders.</p><h3>Methods</h3><p>To explore this issue in a multi-faceted manner, we conducted semi-structured interviews as well as focus group discussions with patients and clinicians. These empirical methods were used to gather interviewee’s views on the opportunities and challenges of medical AI and other data-intensive applications.</p><h3>Results</h3><p>Different clinician and patient groups are exposed to medical AI to differing degrees. Interviewees expect and demand that the purposes of data processing accord with patient preferences, and that data are put to effective use to generate social value. One central result is the shared tendency of clinicians and patients to maintain individualistic ascriptions of responsibility for clinical outcomes.</p><h3>Conclusions</h3><p>Medical AI and the proliferation of data with import for health-related inferences shape and partially reconfigure stakeholder expectations of how these technologies relate to the decision-making of human agents. Intuitions about individual responsibility for clinical outcomes could eventually be disrupted by the increasing sophistication of data-intensive and AI-driven clinical tools. Besides individual responsibility, systemic governance will be key to promote alignment with stakeholder expectations in AI-driven and data-intensive health settings.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2973 - 2987"},"PeriodicalIF":2.9,"publicationDate":"2024-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01825-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139449193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society | Pub Date: 2024-01-05 | DOI: 10.1007/s00146-023-01826-7
Spyridon Samothrakis
{"title":"Artificial intelligence and modern planned economies: a discussion on methods and institutions","authors":"Spyridon Samothrakis","doi":"10.1007/s00146-023-01826-7","DOIUrl":"10.1007/s00146-023-01826-7","url":null,"abstract":"<div><p>Interest in computerised central economic planning (CCEP) has seen a resurgence, as there is strong demand for an alternative vision to modern free (or not so free) market liberal capitalism. Given the close links of CCEP with what we would now broadly call artificial intelligence (AI)—e.g. optimisation, game theory, function approximation, machine learning, automated reasoning—it is reasonable to draw direct analogues and perform an analysis that would help identify what commodities and institutions we should see for a CCEP programme to become successful. Following this analysis, we conclude that a CCEP economy would need to have a very different outlook from current market practices, with a focus on producing basic “interlinking” commodities (e.g. tools, processed materials, instruction videos) that consumers can use as a form of collective R &D. On an institutional level, CCEP should strive for the release of basic commodities that empower consumers by having as many alternative uses as possible, but also making sure that a baseline of basic necessities is widely available.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2961 - 2972"},"PeriodicalIF":2.9,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01826-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139382702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society | Pub Date: 2024-01-05 | DOI: 10.1007/s00146-023-01818-7
Margot J. van der Goot, Nathalie Koubayová, Eva A. van Reijmersdal
{"title":"Understanding users’ responses to disclosed vs. undisclosed customer service chatbots: a mixed methods study","authors":"Margot J. van der Goot, Nathalie Koubayová, Eva A. van Reijmersdal","doi":"10.1007/s00146-023-01818-7","DOIUrl":"10.1007/s00146-023-01818-7","url":null,"abstract":"<div><p>Due to huge advancements in natural language processing (NLP) and machine learning, chatbots are gaining significance in the field of customer service. For users, it may be hard to distinguish whether they are communicating with a human or a chatbot. This brings ethical issues, as users have the right to know who or what they are interacting with (European Commission in Regulatory framework proposal on artificial intelligence. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai, 2022). One of the solutions is to include a disclosure at the start of the interaction (e.g., “this is a chatbot”). However, companies are reluctant to use disclosures, as consumers may perceive artificial agents as less knowledgeable and empathetic than their human counterparts (Luo et al. in Market Sci 38(6):937–947, 2019). The current mixed methods study, combining qualitative interviews (<i>n</i> = 8) and a quantitative experiment (<i>n</i> = 194), delves into users’ responses to a disclosed vs. undisclosed customer service chatbot, focusing on source orientation, anthropomorphism, and social presence. The qualitative interviews reveal that it is the willingness to help the customer and the friendly tone of voice that matters to the users, regardless of the artificial status of the customer care representative. The experiment did not show significant effects of the disclosure (vs. non-disclosure). Implications for research, legislators and businesses are discussed.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2947 - 2960"},"PeriodicalIF":2.9,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01818-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139380962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society | Pub Date: 2024-01-03 | DOI: 10.1007/s00146-023-01830-x
Quan-Hoang Vuong, Manh-Tung Ho
{"title":"Escape climate apathy by harnessing the power of generative AI","authors":"Quan-Hoang Vuong, Manh-Tung Ho","doi":"10.1007/s00146-023-01830-x","DOIUrl":"10.1007/s00146-023-01830-x","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"3057 - 3058"},"PeriodicalIF":2.9,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139388200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society | Pub Date: 2024-01-03 | DOI: 10.1007/s00146-023-01827-6
Katalin Feher, Lilla Vicsek, Mark Deuze
{"title":"Modeling AI Trust for 2050: perspectives from media and info-communication experts","authors":"Katalin Feher, Lilla Vicsek, Mark Deuze","doi":"10.1007/s00146-023-01827-6","DOIUrl":"10.1007/s00146-023-01827-6","url":null,"abstract":"<div><p>The study explores the future of AI-driven media and info-communication as envisioned by experts from all world regions, defining relevant terminology and expectations for 2050. Participants engaged in a 4-week series of surveys, questioning their definitions and projections about AI for the field of media and communication. Their expectations predict universal access to democratically available, automated, personalized and unbiased information determined by trusted narratives, recolonization of information technology and the demystification of the media process. These experts, as technology ambassadors, advocate AI-to-AI solutions to mitigate technology-driven misuse and misinformation. The optimistic scenarios shift responsibility to future generations, relying on AI-driven solutions and finding inspiration in nature. Their present-based forecasts could be construed as being indicative of professional near-sightedness and cognitive dissonance. Visualizing our findings into a Glasses Model of AI Trust, the study contributes to key debates regarding AI policy, developmental trajectories, and academic research in media and info-communication fields.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2933 - 2946"},"PeriodicalIF":2.9,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01827-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139387548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society | Pub Date: 2023-11-22 | DOI: 10.1007/s00146-023-01808-9
Hartmut Hirsch-Kreinsen, Thorben Krokowski
{"title":"Trustworthy AI: AI made in Germany and Europe?","authors":"Hartmut Hirsch-Kreinsen, Thorben Krokowski","doi":"10.1007/s00146-023-01808-9","DOIUrl":"10.1007/s00146-023-01808-9","url":null,"abstract":"<div><p>As the capabilities of artificial intelligence (AI) continue to expand, concerns are also growing about the ethical and social consequences of unregulated development and, above all, use of AI systems in a wide range of social areas. It is therefore indisputable that the application of AI requires social standardization and regulation. For years, innovation policy measures and the most diverse activities of European and German institutions have been directed toward this goal. Under the label “Trustworthy AI” (TAI), a promise is formulated, according to which AI can meet criteria of transparency, legality, privacy, non-discrimination, and reliability. In this article, we ask what significance and scope the politically initiated concepts of TAI occupy in the current process of AI dynamics and to what extent they can stand for an independent, unique European or German development path of this technology.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2921 - 2931"},"PeriodicalIF":2.9,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01808-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139249361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society | Pub Date: 2023-11-21 | DOI: 10.1007/s00146-023-01793-z
Clemens Eisenmann, Jakub Mlynář, Jason Turowetz, Anne W. Rawls
{"title":"“Machine Down”: making sense of human–computer interaction—Garfinkel’s research on ELIZA and LYRIC from 1967 to 1969 and its contemporary relevance","authors":"Clemens Eisenmann, Jakub Mlynář, Jason Turowetz, Anne W. Rawls","doi":"10.1007/s00146-023-01793-z","DOIUrl":"10.1007/s00146-023-01793-z","url":null,"abstract":"<div><p>This paper examines Harold Garfinkel’s work with ELIZA and a related program LYRIC from 1967 to 1969. AI researchers have tended to treat successful human–machine interaction as if it relied primarily on non-human machine characteristics, and thus the often-reported attribution of human-like qualities to communication with computers has been criticized as a misperception—and humans who make such reports referred to as “deluded.” By contrast Garfinkel, building on two decades of prior research on information and communication, argued that the ELIZA and the LYRIC “chatbots” were achieving interactions that felt human to many users by exploiting human sense-making practices. In keeping with his long-term practice of using “trouble” as a way of discovering the taken-for-granted practices of human sense-making, Garfinkel designed scripts for ELIZA and LYRIC that he could disrupt in order to reveal how their success depended on human social practices. Hence, the announcement “Machine Down” by the chatbot was a desired result of Garfinkel’s interactions with it. This early (but largely unknown) research has implications not only for understanding contemporary AI chatbots, but also opens possibilities for respecifying current information systems design and computational practices to provide for the design of more flexible information objects.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2715 - 2733"},"PeriodicalIF":2.9,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01793-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139252865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society | Pub Date: 2023-11-15 | DOI: 10.1007/s00146-023-01805-y
Paula Sweeney
{"title":"Could the destruction of a beloved robot be considered a hate crime? An exploration of the legal and social significance of robot love","authors":"Paula Sweeney","doi":"10.1007/s00146-023-01805-y","DOIUrl":"10.1007/s00146-023-01805-y","url":null,"abstract":"<div><p>In the future, it is likely that we will form strong bonds of attachment and even develop love for social robots. Some of these loving relations will be, from the human’s perspective, as significant as a loving relationship that they might have had with another human. This means that, from the perspective of the loving human, the mindless destruction of their robot partner could be as devastating as the murder of another’s human partner. Yet, the loving partner of a robot has no recourse to legal action beyond the destruction of property and can see no way to prevent future people suffering the same devastating loss. On this basis, some have argued that such a scenario must surely motivate legal protection for social robots. In this paper, I argue that despite the devastating loss that would come from the destruction of one’s robot partner, love cannot itself be a reason for granting robot rights. However, although I argue against beloved robots having protective rights, I argue that the loss of a robot partner must be socially recognised as a form of bereavement if further secondary harms are to be avoided, and that, if certain conditions obtain, the destruction of a beloved robot could be criminalised as a hate crime.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2735 - 2741"},"PeriodicalIF":2.9,"publicationDate":"2023-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01805-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139271153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}