{"title":"The ethics of bioinspired animal-robot interaction: A relational meta-ethical approach","authors":"Marco Tamborini","doi":"10.1016/j.jrt.2025.100116","DOIUrl":"10.1016/j.jrt.2025.100116","url":null,"abstract":"<div><div>In this article, I focus on a specific aspect of biorobotics: biohybrid interaction between bioinspired robots and animals. My goal is to analyze the ethical and epistemic implications of this practice, starting with a central question: Is it ethically permissible to have a bioinspired robot that mimics and reproduces the behaviors and/or morphology of an animal interact with a particular population, even if the animals do not know that the object they are interacting with is a robot and not a conspecific? My answer to the ethical question is that the interaction between animals and bioinspired robots is ethically acceptable if the animal actively participates in the language game (sensu Coeckelbergh) established with the robot. I proceed as follows: First, I define the field of biorobotics and describe its four macro-categories. Second, I present concrete examples of interactive biorobotics, showing two emblematic cases in which the relationship between bioinspired robots and animals plays a central role. Third, I address one key issue—among many—in applied ethics regarding my ethical question. Fourth, I explore the ethical question on a meta-ethical level, making use of the theories of David Gunkel and Mark Coeckelbergh, as well as the linguistic approach and ethics of the late Ludwig Wittgenstein. Last, I argue that, from a meta-ethical approach, the original ethical question turns out to be misplaced. The ethical boundary lies not in the distinction between a real or fake relationship between the robot and the organism, but in the degree of mutual participation and understanding between the entities involved.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"22 ","pages":"Article 100116"},"PeriodicalIF":0.0,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143704339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reflexivity and AI start-ups: A collective virtue for dynamic teams","authors":"Marco Innocenti","doi":"10.1016/j.jrt.2025.100115","DOIUrl":"10.1016/j.jrt.2025.100115","url":null,"abstract":"<div><div>This paper investigates the ethical challenges faced by AI-driven start-ups, where the rapid pace of innovation and limited resources often preclude team members from fully understanding the product under development or its societal implications. We propose the concept of “swarm moral reflexivity”, where ethical reflection emerges collectively from the interactions of individuals focused on their specific tasks. Drawing on Swarm Intelligence theories and Alasdair MacIntyre's framework of moral deliberation, this approach enables teams to engage with ethical issues through daily encounters with conflicting responsibilities, rather than relying on top-down value systems or comprehensive ethical oversight. Our model suggests that decentralised, collective moral awareness can effectively support Responsible Innovation in AI start-ups, ensuring that ethical concerns are recognised and addressed throughout the development process, even in fast-paced and resource-constrained environments.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"22 ","pages":"Article 100115"},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143685401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Normative conflict resolution through human–autonomous agent interaction","authors":"Beverley Townsend , Katie J. Parnell , Sinem Getir Yaman , Gabriel Nemirovsky , Radu Calinescu","doi":"10.1016/j.jrt.2025.100114","DOIUrl":"10.1016/j.jrt.2025.100114","url":null,"abstract":"<div><div>We have become increasingly reliant on the decision-making capabilities of autonomous agents. These decisions are often executed under non-ideal conditions, carry significant moral risk, and directly affect human well-being. Such decisions may involve the choice to optimise one value over another: promoting safety over human autonomy, or ensuring accuracy over fairness, for example. All too often decision-making of this kind requires a level of normative evaluation involving ethically defensible moral choices and value judgements, compromises, and trade-offs. Guided by normative principles, such decisions inform the possible courses of action the agent may take and may even change a set of established actionable courses.</div><div>This paper seeks to map the decision-making processes in normative choice scenarios wherein autonomous agents are intrinsically linked to the decision process. A care-robot is used to illustrate how a normative choice, underpinned by normative principles, arises, where the agent must ‘choose’ an actionable path involving the administration of critical or non-critical medication. Critically, the choice is dependent upon the trade-off involving two normative principles: respect for human autonomy and the prevention of harm. An additional dimension, the urgency of the medication to be administered, further informs and changes the course of action to be followed.</div><div>We offer a means to map decision-making involving a normative choice within a decision ladder using stakeholder input, and, using defeasibility, we show how specification rules with defeaters can be written to operationalise such a choice.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100114"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143578378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data Hazards: An open-source vocabulary of ethical hazards for data-intensive projects","authors":"Natalie Zelenka , Nina H. Di Cara , Euan Bennet , Phil Clatworthy , Huw Day , Ismael Kherroubi Garcia , Susana Roman Garcia , Vanessa Aisyahsari Hanschke , Emma Siân Kuwertz","doi":"10.1016/j.jrt.2025.100110","DOIUrl":"10.1016/j.jrt.2025.100110","url":null,"abstract":"<div><div>Understanding the potential for downstream harms from data-intensive technologies requires strong collaboration across disciplines and with the public. Having shared vocabularies of concerns reduces the communication barriers inherent in this work. The Data Hazards project (<span><span>datahazards.com</span></span>) contains an open-source, controlled vocabulary of 11 hazards associated with data science work, presented as ‘labels’. Each label has (i) an icon, (ii) a description, (iii) examples, and, crucially, (iv) suggested safety precautions. A reflective discussion format and resources have also been developed. These have been created over three years with feedback from interdisciplinary contributors, and their use evaluated by participants (N=47). The labels include concerns often out-of-scope for ethics committees, like environmental impact. The resources can be used as a structure for interdisciplinary harms discovery work, for communicating hazards, collecting public input or in educational settings. Future versions of the project will develop through feedback from open-source contributions, methodological research and outreach.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100110"},"PeriodicalIF":0.0,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143487335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The age of AI in healthcare research: An analysis of projects submitted between 2020 and 2024 to the Estonian committee on Bioethics and Human Research","authors":"Aive Pevkur , Kadi Lubi","doi":"10.1016/j.jrt.2025.100113","DOIUrl":"10.1016/j.jrt.2025.100113","url":null,"abstract":"<div><div>The ethical evaluation of healthcare research projects ensures the protection of the study participants’ rights. Concurrently, the use of big health data and AI analysis is rising. A critical question is whether existing measures, including ethics committees, can competently evaluate AI-involved health projects and foresee risks. Our research aimed to identify and describe the types of research projects submitted between January 2020 and April 2024 to the Estonian Council for Bioethics and Human Research (EBIN) and to analyse AI use cases in recent years. Notably, the committee was established before the significant rise in AI usage in health research. We conducted a quantitative and qualitative content analysis of submission documents using deductive and inductive approaches to gather information on the types of studies using AI and make some preliminary conclusions on readiness to evaluate projects. Results indicate that most applications come from universities and draw on diverse data sources, while the use of AI is rather uniform, with applications showing little diversity in how AI capabilities are utilised.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100113"},"PeriodicalIF":0.0,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143386488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research ethics committees as knowledge gatekeepers: The impact of emerging technologies on social science research","authors":"Anu Masso , Jevgenia Gerassimenko , Tayfun Kasapoglu , Mai Beilmann","doi":"10.1016/j.jrt.2025.100112","DOIUrl":"10.1016/j.jrt.2025.100112","url":null,"abstract":"<div><div>This article investigates the evolution of research ethics within the social sciences, emphasising the shift from procedural norms borrowed from medical and natural sciences to social scientific discipline-specific and method-based principles. This transformation acknowledges the unique challenges and opportunities in social science research, particularly in the context of emerging data technologies such as digital data, algorithms, and artificial intelligence. Our empirical analysis, based on a survey conducted among international social scientists (N = 214), highlights the precariousness researchers face regarding these technological shifts. Traditional methods remain prevalent, despite the recognition of new digital methodologies that necessitate new ethical principles. We discuss the role of ethics committees as influential gatekeepers, examining power dynamics and access to knowledge within the research landscape. The findings underscore the need for tailored ethical guidelines that accommodate diverse methodological approaches, advocate for interdisciplinary dialogue, and address inequalities in knowledge production. This article contributes to the broader understanding of evolving research ethics in an increasingly data-driven world.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100112"},"PeriodicalIF":0.0,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward an anthropology of screens. Showing and hiding, exposing and protecting. Mauro Carbone and Graziano Lingua. Translated by Sarah De Sanctis. 2023. Cham: Palgrave Macmillan","authors":"Paul Trauttmansdorff","doi":"10.1016/j.jrt.2025.100111","DOIUrl":"10.1016/j.jrt.2025.100111","url":null,"abstract":"<div><div>Toward an Anthropology of Screens by Mauro Carbone and Graziano Lingua is an insightful book about the cultural and philosophical significance of screens, which highlights their role in mediating human interactions, reshaping relationships with people and artefacts, and raising ethical questions about their pervasive influence in contemporary life.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100111"},"PeriodicalIF":0.0,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143437469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring research practices with non-native English speakers: A reflective case study","authors":"Marilys Galindo, Teresa Solorzano, Julie Neisler","doi":"10.1016/j.jrt.2025.100109","DOIUrl":"10.1016/j.jrt.2025.100109","url":null,"abstract":"<div><div>Our lived experiences of learning and working are personal and connected to our racial, ethnic, and cultural identities and needs. This is especially important for non-native English-speaking research participants, as English is the dominant language for learning, working, and the design of the technologies that support them in the United States. A reflective approach was used to critique the research practices that the authors were involved in co-designing with English-first and Spanish-first learners and workers. This case study explored designing learning and employment innovations to best support non-native English-speaking learners and workers during transitions along their career pathways. Three themes were generated from the data: the participants reported feeling the willingness to help, the autonomy of expression, and inclusiveness in the co-design process. From this critique, a structure was developed for researchers to guide decision-making and to inform ways of being more equitable and inclusive of non-native English-speaking participants in their practices.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100109"},"PeriodicalIF":0.0,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143386817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Process industry disrupted: AI and the need for human orchestration","authors":"M.W. Vegter , V. Blok , R. Wesselink","doi":"10.1016/j.jrt.2025.100105","DOIUrl":"10.1016/j.jrt.2025.100105","url":null,"abstract":"<div><div>According to EU policy makers, the introduction of AI within Process Industry will help big manufacturing companies to become more sustainable. At the same time, concerns arise about future work in these industries. As the EU also wants to actively pursue <em>human-centered</em> AI, this raises the question how to implement AI within Process Industry in a way that is sustainable and takes views and interests of workers in this sector into account. To provide an answer, we conducted ‘ethics parallel research’ which involves empirical research. We conducted an ethnographic study of AI development within process industry and specifically looked into the innovation process in two manufacturing plants. We showed subtle but important differences that come with the respective job-related duties. While engineers continuously alter the plant as being a technical system, operators hold a rather symbiotic relationship with the production process on site. Building on the framework of different mechanisms of techno-moral change, we highlight three ways in which workers might be morally impacted by AI. 1. Decisional - alongside the development of data analytic tools, respective roles and duties are being decided; 2. Relational - data analytic tools might exacerbate a power imbalance where engineers may re-script the work of operators; 3. Perceptual - data analytic technologies mediate perceptions, thus changing the relationship operators have to the production process. While in Industry 4.0 the problem is framed in terms of ‘suboptimal use’, in Industry 5.0 the problem should be thought of as ‘suboptimal development’.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100105"},"PeriodicalIF":0.0,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143348791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human centred explainable AI decision-making in healthcare","authors":"Catharina M. van Leersum , Clara Maathuis","doi":"10.1016/j.jrt.2025.100108","DOIUrl":"10.1016/j.jrt.2025.100108","url":null,"abstract":"<div><div>Human-centred AI (HCAI<span><span><sup>1</sup></span></span>) implies building AI systems in a manner that comprehends human aims, needs, and expectations by assisting, interacting, and collaborating with humans. Further focusing on <em>explainable AI</em> (XAI<span><span><sup>2</sup></span></span>) allows to gather insight in the data, reasoning, and decisions made by the AI systems, facilitating human understanding and trust, and contributing to identifying issues like errors and bias. While current XAI approaches mainly have a technical focus, to be able to understand the context and human dynamics, a transdisciplinary perspective and a socio-technical approach are necessary. This fact is critical in the healthcare domain, as various risks could imply serious consequences for both the safety of human life and medical devices.</div><div>A reflective ethical and socio-technical perspective, where technical advancements and human factors co-evolve, is called <em>human-centred explainable AI</em> (HCXAI<span><span><sup>3</sup></span></span>). This perspective sets humans at the centre of AI design with a holistic understanding of values, interpersonal dynamics, and the socially situated nature of AI systems. In the healthcare domain, to the best of our knowledge, limited knowledge exists on applying HCXAI, the ethical risks are unknown, and it is unclear which explainability elements are needed in decision-making to closely mimic human decision-making. Moreover, different stakeholders have different explanation needs, thus HCXAI could be a solution to focus on humane ethical decision-making instead of purely technical choices.</div><div>To tackle this knowledge gap, this article aims to design an actionable HCXAI ethical framework adopting a transdisciplinary approach that merges academic and practitioner knowledge and expertise from the AI, XAI, HCXAI, design science, and healthcare domains. To demonstrate the applicability of the proposed actionable framework in real scenarios and settings while reflecting on human decision-making, two use cases are considered. The first one is on AI-based interpretation of MRI scans and the second one on the application of smart flooring.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100108"},"PeriodicalIF":0.0,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143156602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}