AI and ethics | Pub Date: 2025-01-13 | DOI: 10.1007/s43681-024-00632-1
Hubert Etienne, Brent Mittelstadt, Rob Reich, John Basl, Jeff Behrends, Dominique Lestel, Chloé Bakalar, Geoff Keeling, Giada Pistilli, Marta Cantero Gamito
{"title":"Exploring the mutations of society in the era of generative AI","authors":"Hubert Etienne, Brent Mittelstadt, Rob Reich, John Basl, Jeff Behrends, Dominique Lestel, Chloé Bakalar, Geoff Keeling, Giada Pistilli, Marta Cantero Gamito","doi":"10.1007/s43681-024-00632-1","DOIUrl":"10.1007/s43681-024-00632-1","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"1 - 1"},"PeriodicalIF":0.0,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-01-09 | DOI: 10.1007/s43681-024-00631-2
Madeline G. Reinecke, Andreas Kappes, Sebastian Porsdam Mann, Julian Savulescu, Brian D. Earp
{"title":"The need for an empirical research program regarding human–AI relational norms","authors":"Madeline G. Reinecke, Andreas Kappes, Sebastian Porsdam Mann, Julian Savulescu, Brian D. Earp","doi":"10.1007/s43681-024-00631-2","DOIUrl":"10.1007/s43681-024-00631-2","url":null,"abstract":"<div><p>As artificial intelligence (AI) systems begin to take on social roles traditionally filled by humans, it will be crucial to understand how this affects people’s cooperative expectations. In the case of human–human dyads, different relationships are governed by different norms: For example, how two strangers—versus two friends or colleagues—should interact when faced with a similar coordination problem often differs. How will the rise of ‘social’ artificial intelligence (and ultimately, superintelligent AI) complicate people’s expectations about the cooperative norms that should govern different types of relationships, whether human–human or human–AI? Do people expect AI to adhere to the same cooperative dynamics as humans when in a given social role? Conversely, will they begin to expect humans in certain types of relationships to act more like AI? Here, we consider how people’s cooperative expectations may pull apart between human–human and human–AI relationships, detailing an empirical proposal for mapping these distinctions across relationship types. We see the data resulting from our proposal as relevant for understanding people’s relationship–specific cooperative expectations in an age of social AI, which may also forecast potential resistance towards AI systems occupying certain social roles. Finally, these data can form the basis for ethical evaluations: What relationship–specific cooperative norms we should adopt for human–AI interactions, or reinforce through responsible AI design, depends partly on empirical facts about what norms people find intuitive for such interactions (along with the costs and benefits of maintaining these). Toward the end of the paper, we discuss how these relational norms may change over time and consider the implications of this for the proposed research program.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"71 - 80"},"PeriodicalIF":0.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00631-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-01-09 | DOI: 10.1007/s43681-024-00629-w
Thomas Souverain
{"title":"AI to renew public employment services? Explanation and trust of domain experts","authors":"Thomas Souverain","doi":"10.1007/s43681-024-00629-w","DOIUrl":"10.1007/s43681-024-00629-w","url":null,"abstract":"<div><p>It is often assumed in explainable AI (XAI) literature that explaining AI predictions will enhance trust of users. To bridge this research gap, we explored trust in XAI on public policies. The French Employment Agency deploys neural networks since 2021 to help job counsellors reject the illegal employment offers. Digging into that case, we adopted philosophical lens on trust in AI which is also compatible with measurements, on demonstrated and perceived trust. We performed a three-months experimental study, joining sociological and psychological methods: Qualitative (S1): Relying on sociological field work methods, we conducted 1 h semi-structured interviews with job counsellors. On 5 regional agencies, we asked 18 counsellors to describe their work practices with AI warnings. Quantitative (S2): Having gathered agents' perceptions, we quantified the reasons to trust AI. We administered a questionnaire, comparing three homogeneous cohorts of 100 counsellors each with different information on AI. We tested the impact of two local XAI, general rule and counterfactual rewording. Our survey provided empirical evidence for the link between XAI and trust, but it also stressed that XAI supports differently appeal to rationality. The rule helps advisors to be sure that criteria motivating AI predictions comply with the law, whereas counterfactual raises doubts on the offer’s quality. Whereas XAI enhanced both demonstrated and perceived trust, our study also revealed limits to full adoption, based on profiles of experts. XAI could more efficiently trigger trust, but only when addressing personal beliefs, or rearranging work conditions to let experts the time to understand AI.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"55 - 70"},"PeriodicalIF":0.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-01-08 | DOI: 10.1007/s43681-024-00628-x
Wan Rosalili Wan Rosli
{"title":"Waging warfare against states: the deployment of artificial intelligence in cyber espionage","authors":"Wan Rosalili Wan Rosli","doi":"10.1007/s43681-024-00628-x","DOIUrl":"10.1007/s43681-024-00628-x","url":null,"abstract":"<div><p>Cyber espionage has significantly been viewed as a risk towards nation-states, especially in the area of security and protection of Critical National Infrastructures. The race against digitisation has also raised concerns about how emerging technologies are defining how cyber activities are linked to waging warfare between States. Real-world crimes have since found a place in cyberspace, and with high connectivity, has exposed various actors to various risks and vulnerabilities, including cyber espionage. Cyber espionage has always been a national security issue as it does not only target States but also affects public–private networks, corporations and individuals. The challenge of crimes committed within the cyber realm is how the nature of cybercrimes distorts the dichotomy of state responsibility in responding to cyber threats and vulnerabilities. Furthermore, the veil of anonymity and emerging technologies such as artificial intelligence have further provided opportunities for a larger scale impact on the state for such crime. The imminent threat of cyber espionage is impacting the economic and political interactions between nation-states and changing the nature of modern conflict. Due to these implications, this paper will discuss the current legal landscape governing cyber espionage and the impact of the use of artificial intelligence in the commission of such crimes.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"47 - 53"},"PeriodicalIF":0.0,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00628-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2024-12-21 | DOI: 10.1007/s43681-024-00625-0
Kevin Mills
{"title":"Technology, liberty, and guardrails","authors":"Kevin Mills","doi":"10.1007/s43681-024-00625-0","DOIUrl":"10.1007/s43681-024-00625-0","url":null,"abstract":"<div><p>Technology companies are increasingly being asked to take responsibility for the technologies they create. Many of them are rising to the challenge. One way they do this is by implementing “guardrails”: restrictions on functionality that prevent people from misusing their technologies (per some standard of misuse). While there can be excellent reasons for implementing guardrails (and doing so is sometimes morally obligatory), I argue that the unrestricted authority to implement guardrails is incompatible with proper respect for user freedom, and is not something we should welcome. I argue instead that guardrails should be implemented for only two reasons: to prevent accidental misuse of the technology, and as a proportionate means of preventing people from using the technology to violate other people’s rights. If I’m right, then we may have to get more comfortable with developers releasing technologies that can, and to some extent inevitably will, be misused; people using technologies in ways we disagree with is one of the costs of liberty, but it is a cost we have excellent reasons to bear.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"39 - 46"},"PeriodicalIF":0.0,"publicationDate":"2024-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2024-12-10 | DOI: 10.1007/s43681-024-00623-2
Shana Kleiner, Jessica A. Grieser, Shug Miller, James Shepard, Javier Garcia-Perez, Nick Deas, Desmond U. Patton, Elsbeth Turcan, Kathleen McKeown
{"title":"Unmasking camouflage: exploring the challenges of large language models in deciphering African American language & online performativity","authors":"Shana Kleiner, Jessica A. Grieser, Shug Miller, James Shepard, Javier Garcia-Perez, Nick Deas, Desmond U. Patton, Elsbeth Turcan, Kathleen McKeown","doi":"10.1007/s43681-024-00623-2","DOIUrl":"10.1007/s43681-024-00623-2","url":null,"abstract":"<div><p>The growing accessibility of large language models (LLMs) has raised many questions about the reliability of probabilistically generated natural language responses. While researchers have documented how bias in the training data leads to biased and ethically problematic output, little attention has been paid to the problems which arise from the nature of the varieties of language on which these models are trained. In particular, certain kinds of expressive and performative language use are more common among African American social media users than they occur in the naturalistic speech of African Americans, a discrepancy which models may fail to take into account when they are training on easily-scraped data as being representative of African American speech. Because LLM training data is generally proprietary, in this work we simulate the training data using a collected dataset consisting of 274 posts from Twitter, Reddit, and Hip-Hop lyrics and analyze how LLMs interpreted their meaning. We highlight the difficulties LLMs, including GPT-3 and GPT-4, have in understanding performative AAL and examine how camouflaging and performativity are addressed (or not) by LLMs and demonstrate the harmful implications of misinterpreting online performance.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"29 - 37"},"PeriodicalIF":0.0,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00623-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143422986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2024-12-04 | DOI: 10.1007/s43681-024-00626-z
Florian Richter
{"title":"From human-system interaction to human-system co-action and back: ethical assessment of generative AI and mutual theory of mind","authors":"Florian Richter","doi":"10.1007/s43681-024-00626-z","DOIUrl":"10.1007/s43681-024-00626-z","url":null,"abstract":"<div><p>Human-machine ethics has emerged as a rapidly growing research field in recent years. However, it seems that Generative Artificial Intelligence (AI) leads to a paradigm shift from human-machine interaction to co-action. The ethical assessment of such relationships is still in the making and needs further scrutiny. First, studies about the influence of technology in human-system interactions and manipulation are reviewed. Second, the “mutual theory of mind” approach is critically examined to identify its shortcomings. Third, creating user models is reconstruced to demonstrate the strategies of systems. Finally, use cases are discussed and assessed to outline ethical implications.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"19 - 28"},"PeriodicalIF":0.0,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00626-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2024-12-02 | DOI: 10.1007/s43681-024-00624-1
Linus Ta-Lun Huang, Gleb Papyshev, James K. Wong
{"title":"Democratizing value alignment: from authoritarian to democratic AI ethics","authors":"Linus Ta-Lun Huang, Gleb Papyshev, James K. Wong","doi":"10.1007/s43681-024-00624-1","DOIUrl":"10.1007/s43681-024-00624-1","url":null,"abstract":"<div><p>Value alignment is essential for ensuring that AI systems act in ways that are consistent with human values. Existing approaches, such as reinforcement learning with human feedback and constitutional AI, however, exhibit power asymmetries and lack transparency. These “authoritarian” approaches fail to adequately accommodate a broad array of human opinions, raising concerns about whose values are being prioritized. In response, we introduce the Dynamic Value Alignment approach, theoretically grounded in the principles of parallel constraint satisfaction, which models moral reasoning as a dynamic process that balances multiple value principles. Our approach also enhances users’ moral and epistemic agency by granting users greater control over the values that influence AI behavior. As a more user-centric, transparent, and participatory framework for AI ethics, our approach not only addresses the democratic deficits inherent in current practices but also ensures that AI systems are flexibly aligned with a diverse array of human values.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"11 - 18"},"PeriodicalIF":0.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00624-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2024-11-27 | DOI: 10.1007/s43681-024-00627-y
Ivan Mladenović
{"title":"The prospects for digital democracy","authors":"Ivan Mladenović","doi":"10.1007/s43681-024-00627-y","DOIUrl":"10.1007/s43681-024-00627-y","url":null,"abstract":"<div><p>This paper aims to answer a basic question: is it possible to forge democratic citizenship through various online tools that are already available? To answer this question, I introduce the conception of <i>digital political identities</i>, i.e., the ways in which online environments contribute to creating, maintaining, and changing political identities. Because the well-functioning of democracy rests on citizens with the ability to make informed decisions, vote, and engage in public deliberation, this paper is looking for new and innovative online tools for participating in meaningful online deliberation, acquiring accurate information in the digital space, and making informed voting decisions. By introducing the conception of digital political identities and linking it to online tools that can improve democracy and citizen engagement, I aim to make further progress in cutting edge research on the relationship between digital technologies and democracy. In a nutshell, I am mainly concerned with proposing and defending a normative framework for the use of various online tools that could foster digital democracy.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"3 - 9"},"PeriodicalIF":0.0,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2024-08-19 | DOI: 10.1007/s43681-024-00551-1
David De Cremer
{"title":"Opinion piece: on the ethics of a pending AI crisis in business","authors":"David De Cremer","doi":"10.1007/s43681-024-00551-1","DOIUrl":"10.1007/s43681-024-00551-1","url":null,"abstract":"<div><p>Because of a fear of missing out, organizations today rush out to adopt AI while not understanding what the technology stands for and how to deploy it most effectively. Trusting blindly the promises of AI as the ultimate value-creator, business leaders are unclear about their roles in making AI work for the organization and therefore delegate responsibility of the adoption process entirely to tech experts. In this opinion paper, I argue that this situation breeds fertile ground for a pending AI crisis as organizations will fail to align AI deployment with organizational purpose and in doing so fail to put AI to use in socially responsible and ethical ways. As a result, no real gains are achieved when adopting AI while threats and potential harm to society and humanity in general are fostered.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"101 - 104"},"PeriodicalIF":0.0,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}