{"title":"A robot with human values: assessing value-sensitive design in an agri-food context","authors":"Else Giesbers , Kelly Rijswijk , Mark Ryan , Mashiat Hossain , Aneesh Chauhan","doi":"10.1016/j.jrt.2025.100120","DOIUrl":"10.1016/j.jrt.2025.100120","url":null,"abstract":"<div><div>Value Sensitive Design (VSD) aims to incorporate societal values into the design of innovative technologies. While much has been written on VSD and its added value for technology development, little literature addresses its application to the agri-food sector. This article describes a VSD case study on an agri-food robotic system and reflects on the added value of using VSD. This paper concludes that while VSD contributes to broadening the perspective of technical researchers about non-technical requirements, its application in this case is constrained by five factors related to the nature of the VSD approach: i) unclear guidance on resolving conflicting values; ii) uncertainty about the ideal timing of VSD; iii) reduced effectiveness when technology development is outsourced; iv) failure to account for the time- and context-specificity of values; and v) the difficulty of operationalising values.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"22 ","pages":"Article 100120"},"PeriodicalIF":0.0,"publicationDate":"2025-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143898947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decision-making on an AI-supported youth mental health app: A multilogue among ethicists, social scientists, AI-researchers, biomedical engineers, young experiential experts, and psychiatrists","authors":"Dorothee Horstkötter , Mariël Kanne , Simona Karbouniaris , Noussair Lazrak , Maria Bulgheroni , Ella Sheltawy , Laura Giani , Margherita La Gamba , Esmeralda Ruiz Pujadas , Marina Camacho , Finty Royle , Irene Baggetto , Sinan Gülöksüz , Bart Rutten , Jim van Os","doi":"10.1016/j.jrt.2025.100119","DOIUrl":"10.1016/j.jrt.2025.100119","url":null,"abstract":"<div><div>This article explores the decision-making processes in the ongoing development of an AI-supported youth mental health app. Document analysis reveals decisions taken during the grant proposal and funding phase and reflects upon the reasons <em>why</em> AI is incorporated in innovative youth mental health care. An innovative multilogue among the transdisciplinary team of researchers, comprising ethicists, social scientists, AI-experts, biomedical engineers, young experts by experience, and psychiatrists, points out <em>which</em> decisions are taken and <em>how</em>. This covers i) the role of a biomedical and exposomic understanding of psychiatry as compared to a phenomenological and experiential perspective, ii) the impact and limits of AI co-creation by young experts by experience and mental health experts, and iii) the different perspectives regarding the impact of AI on autonomy, empowerment, and human relationships. The multilogue not only highlights the steps taken during human decision-making in AI development; it also raises awareness of the many complexities, and sometimes contradictions, of transdisciplinary work, and it points towards the ethical challenges of digitalized youth mental health care.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"22 ","pages":"Article 100119"},"PeriodicalIF":0.0,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143867928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Responsible AI innovation in the public sector: Lessons from and recommendations for facilitating Fundamental Rights and Algorithms Impact Assessments","authors":"I.M. Muis, J. Straatman, B.A. Kamphorst","doi":"10.1016/j.jrt.2025.100118","DOIUrl":"10.1016/j.jrt.2025.100118","url":null,"abstract":"<div><div>Since the initial development of the Fundamental Rights and Algorithms Impact Assessment (FRAIA) in 2021, public sector organizations have shown increasing interest in gaining experience with performing a FRAIA when developing, procuring, and deploying AI systems. In this contribution, we share observations from fifteen FRAIA trajectories performed in the field within the Dutch public sector. Based on our experiences facilitating these trajectories, we offer a set of recommendations directed at practitioners, with the aim of helping organizations make the best use of FRAIA and similar impact assessment instruments. We conclude by calling for the development of an informal FRAIA community in which practical guidance and advice can be shared to promote responsible AI innovation by ensuring that human decision-making around AI and other algorithms is well informed and well documented with respect to the protection of fundamental rights.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"22 ","pages":"Article 100118"},"PeriodicalIF":0.0,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143800374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Piloting a maturity model for responsible artificial intelligence: A Portuguese case study","authors":"Rui Miguel Frazão Dias Ferreira , António GRILO , Maria MAIA","doi":"10.1016/j.jrt.2025.100117","DOIUrl":"10.1016/j.jrt.2025.100117","url":null,"abstract":"<div><div>Recently, frameworks and guidelines aiming to foster trustworthiness in organizations and to assess ethical issues in the development and use of Artificial Intelligence (AI) have been translated into self-assessment checklists and other instruments. However, such tools can be very time-consuming to apply. Aiming to develop a more practical tool, an Industry-Wide Maturity Model for Responsible AI was piloted in three companies and two research centres in Portugal. Results show that organizations are aware of the requirements (44 %) of a responsible AI approach and respond to its implementation reactively, as they are willing to integrate further requirements (33 %) into their business processes. The proposed Model was welcomed, and companies showed openness to using it consistently, since it helped to identify gaps and needs when it comes to fostering a more trustworthy approach to the development and deployment of AI.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"22 ","pages":"Article 100117"},"PeriodicalIF":0.0,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143865054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The ethics of bioinspired animal-robot interaction: A relational meta-ethical approach","authors":"Marco Tamborini","doi":"10.1016/j.jrt.2025.100116","DOIUrl":"10.1016/j.jrt.2025.100116","url":null,"abstract":"<div><div>In this article, I focus on a specific aspect of biorobotics: biohybrid interaction between bioinspired robots and animals. My goal is to analyze the ethical and epistemic implications of this practice, starting with a central question<em>:</em> Is it ethically permissible to have a bioinspired robot that mimics and reproduces the behaviors and/or morphology of an animal interact with a particular population, even if the animals do not know that the object they are interacting with is a robot and not a conspecific? My answer to the ethical question is that the interaction between animals and bioinspired robots is ethically acceptable if the animal actively participates in the language game (sensu Coeckelbergh) established with the robot. I proceed as follows: First, I define the field of biorobotics and describe its four macro-categories. Second, I present concrete examples of interactive biorobotics, showing two emblematic cases in which the relationship between bioinspired robots and animals plays a central role. Third, I address one key issue—among many—in applied ethics regarding my ethical question. Fourth, I explore the ethical question on a meta-ethical level, making use of the theories of David Gunkel and Mark Coeckelbergh, as well as the linguistic approach and ethics of the late Ludwig Wittgenstein. Last, I argue that, from a meta-ethical perspective, the original ethical question turns out to be misplaced. The ethical boundary lies not in the distinction between a real or fake relationship between the robot and the organism, but in the degree of mutual participation and understanding between the entities involved.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"22 ","pages":"Article 100116"},"PeriodicalIF":0.0,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143704339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reflexivity and AI start-ups: A collective virtue for dynamic teams","authors":"Marco Innocenti","doi":"10.1016/j.jrt.2025.100115","DOIUrl":"10.1016/j.jrt.2025.100115","url":null,"abstract":"<div><div>This paper investigates the ethical challenges faced by AI-driven start-ups, where the rapid pace of innovation and limited resources often preclude team members from fully understanding the product under development or its societal implications. We propose the concept of “swarm moral reflexivity”, where ethical reflection emerges collectively from the interactions of individuals focused on their specific tasks. Drawing on Swarm Intelligence theories and Alasdair MacIntyre's framework of moral deliberation, this approach enables teams to engage with ethical issues through daily encounters with conflicting responsibilities, rather than relying on top-down value systems or comprehensive ethical oversight. Our model suggests that decentralised, collective moral awareness can effectively support Responsible Innovation in AI start-ups, ensuring that ethical concerns are recognised and addressed throughout the development process, even in fast-paced and resource-constrained environments.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"22 ","pages":"Article 100115"},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143685401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Normative conflict resolution through human–autonomous agent interaction","authors":"Beverley Townsend , Katie J. Parnell , Sinem Getir Yaman , Gabriel Nemirovsky , Radu Calinescu","doi":"10.1016/j.jrt.2025.100114","DOIUrl":"10.1016/j.jrt.2025.100114","url":null,"abstract":"<div><div>We have become increasingly reliant on the decision-making capabilities of autonomous agents. These decisions are often executed under non-ideal conditions, carry significant moral risk, and directly affect human well-being. Such decisions may involve the choice to optimise one value over another: promoting safety over human autonomy, or ensuring accuracy over fairness, for example. All too often, decision-making of this kind requires a level of normative evaluation involving ethically defensible moral choices and value judgements, compromises, and trade-offs. Guided by normative principles, such decisions inform the possible courses of action the agent may take and may even change an established set of actionable courses.</div><div>This paper seeks to map the decision-making processes in normative choice scenarios wherein autonomous agents are intrinsically linked to the decision process. A care-robot is used to illustrate how a normative choice - underpinned by normative principles - arises, where the agent must ‘choose’ an actionable path involving the administration of critical or non-critical medication. Critically, the choice is dependent upon the trade-off between two normative principles: respect for human autonomy and the prevention of harm. An additional dimension, the urgency of the medication to be administered, further informs and changes the course of action to be followed.</div><div>We offer a means to map decision-making involving a normative choice within a decision ladder using stakeholder input, and, using defeasibility, we show how specification rules with defeaters can be written to operationalise such a choice.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100114"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143578378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data Hazards: An open-source vocabulary of ethical hazards for data-intensive projects","authors":"Natalie Zelenka , Nina H. Di Cara , Euan Bennet , Phil Clatworthy , Huw Day , Ismael Kherroubi Garcia , Susana Roman Garcia , Vanessa Aisyahsari Hanschke , Emma Siân Kuwertz","doi":"10.1016/j.jrt.2025.100110","DOIUrl":"10.1016/j.jrt.2025.100110","url":null,"abstract":"<div><div>Understanding the potential for downstream harms from data-intensive technologies requires strong collaboration across disciplines and with the public. Having shared vocabularies of concerns reduces the communication barriers inherent in this work. The Data Hazards project (<span><span>datahazards.com</span></span>) contains an open-source, controlled vocabulary of 11 hazards associated with data science work, presented as ‘labels’. Each label has (i) an icon, (ii) a description, (iii) examples, and, crucially, (iv) suggested safety precautions. A reflective discussion format and resources have also been developed. These have been created over three years with feedback from interdisciplinary contributors, and their use has been evaluated by participants (N=47). The labels include concerns that are often out of scope for ethics committees, such as environmental impact. The resources can be used as a structure for interdisciplinary harms-discovery work, for communicating hazards, for collecting public input, or in educational settings. Future versions of the project will develop through feedback from open-source contributions, methodological research, and outreach.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100110"},"PeriodicalIF":0.0,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143487335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The age of AI in healthcare research: An analysis of projects submitted between 2020 and 2024 to the Estonian committee on Bioethics and Human Research","authors":"Aive Pevkur , Kadi Lubi","doi":"10.1016/j.jrt.2025.100113","DOIUrl":"10.1016/j.jrt.2025.100113","url":null,"abstract":"<div><div>The ethical evaluation of healthcare research projects ensures the protection of study participants’ rights. Concurrently, the use of big health data and AI analysis is rising. A critical question is whether existing measures, including ethics committees, can competently evaluate AI-involved health projects and foresee their risks. Our research aimed to identify and describe the types of research projects submitted between January 2020 and April 2024 to the Estonian Council for Bioethics and Human Research (EBIN) and to analyse AI use cases in recent years. Notably, the committee was established before the significant rise in AI usage in health research. We conducted a quantitative and qualitative content analysis of submission documents, using deductive and inductive approaches, to gather information on the types of studies using AI and to draw some preliminary conclusions on readiness to evaluate such projects. Results indicate that most applications come from universities, that they draw on diverse data sources, and that their use of AI is rather uniform, exhibiting little diversity in the utilisation of AI capabilities.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100113"},"PeriodicalIF":0.0,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143386488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research ethics committees as knowledge gatekeepers: The impact of emerging technologies on social science research","authors":"Anu Masso , Jevgenia Gerassimenko , Tayfun Kasapoglu , Mai Beilmann","doi":"10.1016/j.jrt.2025.100112","DOIUrl":"10.1016/j.jrt.2025.100112","url":null,"abstract":"<div><div>This article investigates the evolution of research ethics within the social sciences, emphasising the shift from procedural norms borrowed from the medical and natural sciences to discipline-specific, method-based principles. This transformation acknowledges the unique challenges and opportunities in social science research, particularly in the context of emerging data technologies such as digital data, algorithms, and artificial intelligence. Our empirical analysis, based on a survey conducted among international social scientists (N = 214), highlights the precariousness researchers experience in the face of these technological shifts. Traditional methods remain prevalent, despite the recognition of new digital methodologies that necessitate new ethical principles. We discuss the role of ethics committees as influential gatekeepers, examining power dynamics and access to knowledge within the research landscape. The findings underscore the need for tailored ethical guidelines that accommodate diverse methodological approaches, advocate for interdisciplinary dialogue, and address inequalities in knowledge production. This article contributes to a broader understanding of evolving research ethics in an increasingly data-driven world.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100112"},"PeriodicalIF":0.0,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}