{"title":"Human centred explainable AI decision-making in healthcare","authors":"Catharina M. van Leersum , Clara Maathuis","doi":"10.1016/j.jrt.2025.100108","DOIUrl":"10.1016/j.jrt.2025.100108","url":null,"abstract":"<div><div>Human-centred AI (HCAI<span><span><sup>1</sup></span></span>) implies building AI systems in a manner that comprehends human aims, needs, and expectations by assisting, interacting, and collaborating with humans. Further focusing on <em>explainable AI</em> (XAI<span><span><sup>2</sup></span></span>) allows one to gain insight into the data, reasoning, and decisions of AI systems, facilitating human understanding and trust and helping to identify issues such as errors and bias. While current XAI approaches mainly have a technical focus, understanding the context and human dynamics requires a transdisciplinary perspective and a socio-technical approach. This is critical in the healthcare domain, where risks can have serious consequences for both the safety of human life and medical devices.</div><div>A reflective ethical and socio-technical perspective, where technical advancements and human factors co-evolve, is called <em>human-centred explainable AI</em> (HCXAI<span><span><sup>3</sup></span></span>). This perspective sets humans at the centre of AI design with a holistic understanding of values, interpersonal dynamics, and the socially situated nature of AI systems. In the healthcare domain, to the best of our knowledge, limited knowledge exists on applying HCXAI, the ethical risks are unknown, and it is unclear which explainability elements are needed in decision-making to closely mimic human decision-making. Moreover, different stakeholders have different explanation needs; HCXAI could therefore shift the focus to humane, ethical decision-making instead of purely technical choices.</div><div>To tackle this knowledge gap, this article aims to design an actionable HCXAI ethical framework adopting a transdisciplinary approach that merges academic and practitioner knowledge and expertise from the AI, XAI, HCXAI, design science, and healthcare domains. To demonstrate the applicability of the proposed actionable framework in real scenarios and settings while reflecting on human decision-making, two use cases are considered: the first on AI-based interpretation of MRI scans, and the second on the application of smart flooring.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100108"},"PeriodicalIF":0.0,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143156602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decentralized governance in action: A governance framework of digital responsibility in startups","authors":"Yangyang Zhao , Jiajun Qiu","doi":"10.1016/j.jrt.2025.100107","DOIUrl":"10.1016/j.jrt.2025.100107","url":null,"abstract":"<div><div>The rise of digital technologies has fueled the emergence of decentralized governance among startups. However, this trend imposes new challenges in digitally responsible governance, such as technology usage, business accountability, and many other issues, particularly in the absence of clear guidelines. This paper explores two types of digital startups with decentralized governance: digitally transformed (e.g., DAO) and IT-enabled decentralized startups. We adapt the previously described Corporate Digital Responsibility model into a streamlined seven-cluster governance framework that is more directly applicable to these novel organizations. Through a case study, we illustrate the practical value of the conceptual framework and find key points vital for digitally responsible governance by decentralized startups. Our findings lay a conceptual and empirical groundwork for in-depth and cross-disciplinary future inquiries into digital responsibility issues in decentralized settings.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100107"},"PeriodicalIF":0.0,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143157256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring expert and public perceptions of answerability and trustworthy autonomous systems","authors":"Louise Hatherall, Nayha Sethi","doi":"10.1016/j.jrt.2025.100106","DOIUrl":"10.1016/j.jrt.2025.100106","url":null,"abstract":"<div><div>The emerging regulatory landscape addressing autonomous systems (AS) is underpinned by the notion that such systems should be trustworthy. What individuals and groups need in order to determine a system as worthy of trust has consequently attracted research from a range of disciplines, although important questions remain. These include how to ensure trustworthiness in a way that is sensitive to individual histories and contexts, as well as if, and how, emerging regulatory frameworks can adequately secure the trustworthiness of AS. This article reports the socio-legal analysis of four focus groups with publics and professionals exploring whether answerability can help develop trustworthy AS in health, finance, and the public sector. It finds that answerability is beneficial in some contexts, and that to find AS trustworthy, individuals often need answers about future actions and how organisational values are embedded within a system. It also reveals pressing issues demanding attention for meaningful regulation of such systems, including dissonances between what publics and professionals identify as ‘harm’ where AS are deployed, and a significant lack of clarity about the expectations of regulatory bodies in the UK. The article discusses the implications of these findings for the still-developing but rapidly solidifying regulatory landscape in the UK and EU.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100106"},"PeriodicalIF":0.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143157257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring ethical frontiers of artificial intelligence in marketing","authors":"Harinder Hari , Arun Sharma , Sanjeev Verma , Rijul Chaturvedi","doi":"10.1016/j.jrt.2024.100103","DOIUrl":"10.1016/j.jrt.2024.100103","url":null,"abstract":"<div><div>Artificial intelligence (AI) is increasingly pervasive in consumers' lives. For firms, AI offers the potential to connect, serve, and satisfy consumers with posthuman abilities. However, the adoption and usage of this technology face barriers, with ethical concerns emerging as one of the most significant. Yet, much remains unknown about these ethical concerns. Accordingly, to fill the gap, the current study undertakes a comprehensive and systematic review of 445 publications on AI and marketing ethics, utilizing the Scientific Procedures and Rationales for Systematic Literature Review protocol to conduct performance analysis (quantitative and qualitative) and science mapping (conceptual and intellectual structures) for the literature review and the identification of future research directions. Furthermore, the study conducts thematic and content analysis to uncover the themes, clusters, and theories operating in the field, leading to a conceptual framework that lists antecedents, mediators, moderators, and outcomes of ethics in AI in marketing. The findings of the study present future research directions, providing guidance for practitioners and scholars in the area of ethics in AI in marketing.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100103"},"PeriodicalIF":0.0,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143156599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The heuristics gap in AI ethics: Impact on green AI policies and beyond","authors":"Guglielmo Tamburrini","doi":"10.1016/j.jrt.2024.100104","DOIUrl":"10.1016/j.jrt.2024.100104","url":null,"abstract":"<div><div>This article analyses the negative impact of heuristic biases on the main goals of AI ethics. These biases are found to hinder the identification of ethical issues in AI, the development of related ethical policies, and their application. This pervasive impact has been mostly neglected, giving rise to what is called here the heuristics gap in AI ethics. This heuristics gap is illustrated using the AI carbon footprint problem as an exemplary case. Psychological work on biases hampering climate warming mitigation actions is specialized to this problem, and novel extensions are proposed by considering heuristic mentalization strategies that one uses to design and interact with AI systems. To mitigate the effects of this heuristics gap, interventions on the design of ethical policies and suitable incentives for AI stakeholders are suggested. Finally, a checklist of questions helping one to investigate systematically this heuristics gap throughout the AI ethics pipeline is provided.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100104"},"PeriodicalIF":0.0,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143156600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring ethical research issues related to extended reality technologies used with autistic populations","authors":"Nigel Newbutt, Ryan Bradley","doi":"10.1016/j.jrt.2024.100102","DOIUrl":"10.1016/j.jrt.2024.100102","url":null,"abstract":"<div><div>This article provides an exploration of the ethical considerations and challenges surrounding the use of extended reality (XR) technologies with autistic populations. As XR-based research offers promising avenues for supporting autistic individuals, we explore and highlight various ethical concerns inherent in XR research and application with autistic individuals. Despite its potential, we outline areas of concern related to privacy, security, content regulation, psychological well-being, informed consent, realism, sensory overload, and accessibility. We conclude with the need for tailored ethical frameworks to guide XR research with autistic populations, emphasizing collaboration, accessibility, and safeguarding as key principles, and underscore the importance of balancing technological innovation with ethical responsibility to ensure that XR research with autistic populations is conducted with sensitivity, inclusivity, and respect for individual rights and well-being.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100102"},"PeriodicalIF":0.0,"publicationDate":"2024-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143156601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a critical recovery of liberatory PAR for food system transformations: Struggles and strategies in collaborating with radical and progressive food movements in EU-funded R&I projects","authors":"Tobia S. Jones, Anne M.C. Loeber","doi":"10.1016/j.jrt.2024.100100","DOIUrl":"10.1016/j.jrt.2024.100100","url":null,"abstract":"<div><div>From sustainability and justice perspectives, food systems and R&I systems need transformation. Participatory action research (PAR) presents a suitable approach as it enables collaboration between those affected by a social issue and researchers based in universities to co-create knowledge and interventionist actions. However, PAR is often misconstrued even within projects calling for civil society actors to act as full partners in research. To avoid reproducing the very structures and practices in need of transformation, this paper argues for university researchers to team up with members of food movements to engage in ‘liberatory’ forms of PAR. The question is how liberatory PAR's guiding concepts of reciprocal participation, critical recovery and systemic devolution can be enacted in projects that did not start out as PAR projects. Two EU-funded projects on food system transformation serve as a basis to answer this question, generating concrete recommendations for establishing co-creative, mutually liberating, and transdisciplinary research collectives.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"20 ","pages":"Article 100100"},"PeriodicalIF":0.0,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142721046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a research ethics of real-world experimentation with emerging technology","authors":"Joost Mollen","doi":"10.1016/j.jrt.2024.100098","DOIUrl":"10.1016/j.jrt.2024.100098","url":null,"abstract":"<div><div>Testing emerging technologies, such as autonomous vehicles, predictive crime analytics, and smart city interventions under real-world conditions is an important strategy for robust and responsible technology development. However, the moral responsibilities of researchers towards the public when conducting such real-world experiments are often left unaddressed and unregulated. This article argues that there are problematic inconsistencies in research ethics demands and protections across different categories of research and development with emerging digital technologies. This differential treatment is problematic since there are no meaningful differences to justify it, and it creates the possibility of regulatory evasion at the cost of populations’ due protection. Hence, I argue that this differential treatment should be amended by harmonizing research ethics demands. In doing so, this paper contributes to several ongoing scholarly debates on the limits of current research ethics guidelines and protocols in the face of novel technologies and research formats.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"20 ","pages":"Article 100098"},"PeriodicalIF":0.0,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Brave global spaces: Researching digital health and human rights through transnational participatory action research","authors":"Javier Guerrero-C , Nomtika Mjwana , Sebastian Leon-Giraldo , Sara L.M. Davis","doi":"10.1016/j.jrt.2024.100097","DOIUrl":"10.1016/j.jrt.2024.100097","url":null,"abstract":"<div><div>In this paper we reflect on our experience with applying Transnational Participatory Action Research (TPAR) to a multi-country study of digital health and human rights of young adults living with and affected by HIV in five low- and middle-income countries (LMICs), and identify some lessons learned for future projects. First, we propose a definition of TPAR based on our experience and our analysis of power in the project. We present an overview of the research design and implementation, which melded diverse working cultures and research methods. Next, we describe how we adapted outputs, working methods and terminology to meet the diverse and specific needs of civil society organizations, community-led networks and academics working in diverse national and transnational spaces. This required us to understand and adapt to the different temporalities at play. The creation of brave spaces and the development of an intersectional lens were key to addressing tensions that naturally emerged in our collaboration. Finally, we summarize lessons learned and challenges for the next stage of the project.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"20 ","pages":"Article 100097"},"PeriodicalIF":0.0,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142663599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Start doing the right thing: Indicators for socially responsible start-ups and investors","authors":"Mark Ryan , Eugen Popa , Vincent Blok , Andrea Declich , Maresa Berliri , Alfonso Alfonsi , Simeon Veloudis , Natalia Costanzo , Martina Iannuzzi","doi":"10.1016/j.jrt.2024.100094","DOIUrl":"10.1016/j.jrt.2024.100094","url":null,"abstract":"<div><div>This paper explores the gap in the literature on social responsibility guidance for start-ups and start-up investors. It begins by evaluating research conducted in two different fields (namely, socially responsible investment (SRI) and responsible research and innovation (RRI)) and how they can guide social responsibility in STEM (Science, Technology, Engineering, Mathematics) start-ups. To do this, we evaluate an industry-standard SRI catalogue of metrics - the Global Impact Investing Network's (GIIN) <em>Impact Reporting and Investment Standards</em> (IRIS+) - and indicators from 12 EC-funded RRI projects. Based on this analysis, we propose a framework of 24 indicators to assess the social responsibility of start-ups and investors. The purpose of our framework is twofold: firstly, to provide clear guidance for start-ups aiming to implement socially responsible behaviours, and secondly, to provide start-up investors with criteria to identify if start-ups are socially responsible. While the indicators are phrased in a prescriptive way for start-ups, they can also be used by investors to identify if start-ups are implementing the indicators in practice.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"20 ","pages":"Article 100094"},"PeriodicalIF":0.0,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142420397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}