Towards a Psychologically Realist, Culturally Responsive Approach to Engineering Ethics in Global Contexts
Rockwell F Clancy, Qin Zhu, Scott Streiner, Andrea Gammon, Ryan Thorpe
Science and Engineering Ethics 31(2): 10. Published 2025-04-01. doi:10.1007/s11948-025-00536-1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11961465/pdf/

Abstract: This paper describes the motivations and some directions for bringing insights and methods from moral and cultural psychology to bear on how engineering ethics is conceived, taught, and assessed. The audience for this paper is therefore not only engineering ethics educators and researchers but also administrators and organizations concerned with ethical behaviors. Engineering ethics has typically been conceived and taught as a branch of professional and applied ethics with pedagogical aims, where students and practitioners learn about professional codes and/or Western ethical theories and then apply these resources to address issues presented in case studies about engineering and/or technology. As a result, accreditation and professional bodies have generally adopted ethical reasoning skills and/or moral knowledge as learning outcomes. However, this paper argues that such frameworks are psychologically "irrealist" and culturally biased: it is not clear that ethical judgments or behaviors are primarily the result of applying principles, or that the ethical concerns captured in professional codes or Western ethical theories do or should reflect the engineering ethical concerns of global populations. Individuals from Western, educated, industrialized, rich, democratic cultures are outliers on various psychological and social constructs, including self-concepts, thought styles, and ethical concerns. Yet engineering is more cross-cultural and international than ever before, with engineers and technologies spanning multiple cultures and countries; for instance, different national regulations and cultural values can come into conflict in the course of engineering work. Additionally, ethical judgments may result from intuitions, closer to emotions than to reflective thought, and behaviors can be affected by unconscious, social, and environmental factors. To address these issues, this paper surveys work in engineering ethics education and assessment to date, shortcomings of these approaches, and how insights and methods from moral and cultural psychology could be used to improve engineering ethics education and assessment, making them more culturally responsive and psychologically realist at the same time.
Artefacts of Change: The Disruptive Nature of Humanoid Robots Beyond Classificatory Concerns
Cindy Friedman
Science and Engineering Ethics 31(2): 9. Published 2025-03-28. doi:10.1007/s11948-025-00532-5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11953219/pdf/

Abstract: One characteristic of socially disruptive technologies is that they have the potential to cause uncertainty about the application conditions of a concept, i.e., they are conceptually disruptive. Humanoid robots have done just this, as evidenced by discussions about whether, and under what conditions, humanoid robots could be classified as, for example, moral agents, moral patients, or legal and/or moral persons. This paper frames the disruptive effect of humanoid robots differently by taking the discussion beyond classificatory concerns. It does so by showing that humanoid robots are socially disruptive because they also transform how we experience and understand the world. By inviting us to relate to a technological artefact as if it were human, humanoid robots have a profound impact on the way in which we relate to different elements of our world. Specifically, I focus on three types of human relational experiences, and on how the norms that surround them may be transformed by humanoid robots: (1) human-technology relations; (2) human-human relations; and (3) human-self relations. Anticipating the ways in which humanoid robots may change society is important because once a technology is entrenched, it is difficult to counteract its negative impacts; we should therefore try to anticipate them while we can still do something to prevent them. Since humanoid robots are currently relatively rudimentary, yet there is incentive to invest more in their development, now is a good time to think carefully about how this technology may affect us.
Correction: Discussions on Human Enhancement Meet Science: A Quantitative Analysis
Tomasz Żuradzki, Piotr Bystranowski, Vilius Dranseika
Science and Engineering Ethics 31(2): 8. Published 2025-03-18. doi:10.1007/s11948-025-00537-0. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11919922/pdf/
AI Ethics beyond Principles: Strengthening the Life-world Perspective
Stefan Heuser, Jochen Steil, Sabine Salloch
Science and Engineering Ethics 31(1): 7. Published 2025-02-10. doi:10.1007/s11948-025-00530-7. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11811459/pdf/

Abstract: The search for ethical guidance in the development of artificial intelligence (AI) systems, especially in healthcare and decision support, remains a crucial effort. So far, principles usually serve as the main reference points for achieving ethically correct implementations. Based on a review of classical criticisms of principle-based ethics, and taking into account the severity and potentially life-changing relevance of decisions assisted by AI-driven systems, we argue for strengthening a complementary perspective that focuses on the life-world as ensembles of practices that shape people's lives. This perspective centers on the notion of ethical judgment sensitive to life forms, arguing that principles alone do not guarantee ethicality in a moral world that is a joint construction of reality rather than a matter of mere control. We conclude that it is essential to support and supplement the implementation of moral principles in the development of AI systems for decision-making in healthcare by recognizing the normative relevance of life forms and practices in ethical judgment.
Discussions on Human Enhancement Meet Science: A Quantitative Analysis
Tomasz Żuradzki, Piotr Bystranowski, Vilius Dranseika
Science and Engineering Ethics 31(1): 6. Published 2025-02-05. doi:10.1007/s11948-025-00531-6. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11799069/pdf/

Abstract: The analysis of citation flow from a collection of scholarly articles can provide valuable insights into their thematic focus and the genealogy of their main concepts. In this study, we employ a topic model to delineate a subcorpus of 1,360 papers representative of bioethical discussions on enhancing human life. We then analyze almost 11,000 references cited in that subcorpus to examine quantitatively, from a bird's-eye view, the degree of openness of this part of scholarship to the specialized knowledge produced in the biosciences. Although almost half of the analyzed references point to journals classified as Natural Science and Engineering (NSE), we do not find strong evidence of the intellectual influence of recent discoveries in the biosciences on discussions of human enhancement. We conclude that a large part of the discourse surrounding human enhancement is inflected with "science-fictional habits of mind." Our findings point to the need for a more science-informed approach in discussions on enhancing human life.
Moral Complexity in Traffic: Advancing the ADC Model for Automated Driving Systems
Dario Cecchini, Veljko Dubljević
Science and Engineering Ethics 31(1): 5. Published 2025-01-24. doi:10.1007/s11948-025-00528-1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11761772/pdf/

Abstract: The incorporation of ethical settings in Automated Driving Systems (ADSs) has been extensively discussed in recent years with the goal of enhancing potential stakeholders' trust in the new technology. However, a comprehensive ethical framework for ADS decision-making, capable of merging multiple ethical considerations and investigating their consistency, is currently missing. This paper addresses this gap by providing a taxonomy of ADS decision-making based on the Agent-Deed-Consequences (ADC) model of moral judgment. Specifically, we identify three main components of traffic moral judgment: driving style, traffic rules compliance, and risk distribution. We then suggest distinguishable ethical settings for each traffic component.
LLMs, Truth, and Democracy: An Overview of Risks
Mark Coeckelbergh
Science and Engineering Ethics 31(1): 4. Published 2025-01-23. doi:10.1007/s11948-025-00529-0. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11759458/pdf/

Abstract: While there are many public concerns about the impact of AI on truth and knowledge, especially given the widespread use of LLMs, there is little systematic philosophical analysis of these problems and their political implications. This paper aims to assist that effort by providing an overview of some truth-related risks in which LLMs may play a role, including risks concerning hallucination and misinformation, epistemic agency and epistemic bubbles, bullshit and relativism, and epistemic anachronism and epistemic incest. It offers arguments for why these problems are not only epistemic issues but also raise problems for democracy, since they undermine its epistemic basis, especially if we assume theories of democracy that go beyond minimalist views. I end with a short reflection on what can be done about these political-epistemic risks, pointing to education as one of the sites for change.
Embedded Ethics in Practice: A Toolbox for Integrating the Analysis of Ethical and Social Issues into Healthcare AI Research
Theresa Willem, Marie-Christine Fritzsche, Bettina M Zimmermann, Anna Sierawska, Svenja Breuer, Maximilian Braun, Anja K Ruess, Marieke Bak, Franziska B Schönweitz, Lukas J Meier, Amelia Fiske, Daniel Tigard, Ruth Müller, Stuart McLennan, Alena Buyx
Science and Engineering Ethics 31(1): 3. Published 2024-12-24. doi:10.1007/s11948-024-00523-y. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11668859/pdf/

Abstract: Integrating artificial intelligence (AI) into critical domains such as healthcare holds immense promise. Nevertheless, significant challenges must be addressed to avoid harm, promote the well-being of individuals and societies, and ensure ethically sound and socially just technology development. Innovative approaches like Embedded Ethics, which refers to integrating ethics and social science into technology development through interdisciplinary collaboration, are emerging to address issues of bias, transparency, misrepresentation, and more. This paper aims to develop the approach further so that future projects can deploy it effectively. Based on practical experience using ethics and social science methodology in interdisciplinary AI-related healthcare consortia, this paper presents several methods that have proven helpful for embedding ethical and social science analysis and inquiry. They include (1) stakeholder analyses, (2) literature reviews, (3) ethnographic approaches, (4) peer-to-peer interviews, (5) focus groups, (6) interviews with affected groups and external stakeholders, (7) bias analyses, (8) workshops, and (9) interdisciplinary results dissemination. We believe that applying Embedded Ethics offers a pathway to stimulate reflexivity, proactively anticipate social and ethical concerns, and foster interdisciplinary inquiry into such concerns at every stage of technology development. This approach can help shape responsible, inclusive, and ethically aware technology innovation in healthcare and beyond.
Moral Intuition Regarding the Possibility of Conscious Human Brain Organoids: An Experimental Ethics Study
Koji Ota, Tetsushi Tanibe, Takumi Watanabe, Kazuki Iijima, Mineki Oguchi
Science and Engineering Ethics 31(1): 2. Published 2024-12-19. doi:10.1007/s11948-024-00525-w. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11659373/pdf/

Abstract: The moral status of human brain organoids (HBOs) has been debated in view of the future possibility that they may acquire phenomenal consciousness. This study empirically investigates the moral sensitivity of people's intuitive judgments about actions toward conscious HBOs. The results showed that the presence or absence of pain experience in HBOs affected judgments about the moral permissibility of actions such as creating and destroying the HBOs; however, the presence or absence of visual experience in HBOs also affected these judgments. These findings suggest that people's intuitive judgments about the moral status of HBOs are sensitive to the valence-independent value of phenomenal consciousness. We discuss how these observations can have normative implications; in particular, we argue that they put pressure on the theoretical view that the moral status of conscious HBOs is grounded solely in the valence-dependent value of consciousness. We also discuss how our findings can be informative even if such a theoretical view is ultimately justified, or if the future possibility of conscious HBOs is implausible.
A Bias Network Approach (BNA) to Encourage Ethical Reflection Among AI Developers
Gabriela Arriagada-Bruneau, Claudia López, Alexandra Davidoff
Science and Engineering Ethics 31(1): 1. Published 2024-12-17. doi:10.1007/s11948-024-00526-9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11652403/pdf/

Abstract: We introduce the Bias Network Approach (BNA) as a sociotechnical method for AI developers to identify, map, and relate biases across the AI development process. This approach addresses the limitations of what we call the "isolationist approach to AI bias," a trend in the AI literature in which biases are treated as separate occurrences linked to specific stages of an AI pipeline. Dealing with these multiple biases can trigger a sense of excessive overload in managing each potential bias individually, or promote an uncritical approach to understanding the influence of biases on developers' decision-making. The BNA fosters dialogue and a critical stance among developers, guided by external experts, using graphical representations to depict biased connections. To test the BNA, we conducted a pilot case study on the "waiting list" project, involving a small AI developer team creating a healthcare waiting-list NLP model in Chile. The analysis yielded promising findings: (i) the BNA aids in visualizing interconnected biases and their impacts, facilitating ethical reflection in a more accessible way; (ii) it promotes transparency in decision-making throughout AI development; and (iii) more focus is needed on professional biases and material limitations as sources of bias in AI development.