Journal of Responsible Technology: Latest Articles

Body as a data artefact: The ontological kaleidoscope framework
Genevieve Smith-Nunes
Journal of Responsible Technology · Pub Date: 2025-09-15 · DOI: 10.1016/j.jrt.2025.100138

Abstract: This article introduces the 'Ontological Kaleidoscope' framework—a novel methodological approach for examining the complex entanglements between embodiment, data, and ethics in contemporary digital contexts. Drawing from the established traditions of posthumanism and constructionism, this framework aims to offer researchers a systematic method for analysing how human bodies become translated into data artifacts within computational systems. The Ontological Kaleidoscope serves as both a critical lens and analytical tool, revealing the gaps, ruptures, and impossible translations between embodied experience and data infrastructures. This paper situates the framework within existing methodological approaches, demonstrates its theoretical foundations, provides a detailed implementation guide, and illustrates its potential application through a case study example. By offering a structured yet flexible approach to examining body-data-ethics relationships, the Ontological Kaleidoscope contributes a valuable methodological innovation for scholars investigating the increasingly blurred boundaries between human embodiment and digital representation.

Citations: 0
Ethical and psychological implications of generative AI in digital afterlife technologies: A systematic literature review on responsible inclusive innovation
Mariyono Dwi
Journal of Responsible Technology · Pub Date: 2025-09-08 · DOI: 10.1016/j.jrt.2025.100136

Abstract: Rapid advances in generative artificial intelligence (GenAI) have given birth to digital afterlife technologies (DeathTech), which enable the preservation of the voices, memories, and personalities of deceased individuals. This study is a systematic review of 45 scientific articles (2020–2025) using a thematic-SWOT analysis approach and the Responsible Inclusive Innovation (RII) framework to explore how cultural schemas, inclusive design, and governance models influence the acceptance of DeathTech across cultures. Key findings suggest that ritual adaptation and spiritual meanings are critical to the acceptance of this technology. Jewish and Japanese communities show high acceptance through cultural integration, while Hindu and Luhya communities experience ontological dissonance. Design failures such as linguistic exclusion and ritual incongruence impact marginalized groups. In addition, regulatory gaps exist, especially in post-death privacy protection and algorithmic bias. This study proposes a triadic framework for the development of ethical and equitable DeathTech: cultural mediation, inclusive design, and pluralistic governance. This contribution enriches the study of digital thanatology and provides recommendations for culturally and socially sustainable innovation.

Citations: 0
Relational & culture-sensitive AI innovation
Christian Herzog
Journal of Responsible Technology · Pub Date: 2025-08-15 · DOI: 10.1016/j.jrt.2025.100135

Abstract: In this article, I argue that responsibly developing artificial intelligence (AI) solutions necessitates a deep understanding of the cultural landscapes involved. Especially considering AI's universalistic tendencies, varying, possibly conflicting interests, values, and epistemes require special consideration. I advocate for a cultural sensitivity that aligns with a relational ethics perspective to address power asymmetries, empower marginalized groups, and recognize epistemic privileges. By laying out how technology pervades cultures, I contribute to the case for a culturally informed approach to ethics that empowers individuals to autonomously transform (or conserve) cultures. I then discuss a structural integration of cultural studies into the training and practices of technology and AI engineers and officers. By discussing relational ethics as intimately tied to a culturally sensitive approach to technology development, I join proponents of a closer collaboration between ethics and cultural studies that can yield insights and methodologies that respect diverse cultural contexts and challenge the primacy of disruptive innovation.

Citations: 0
Exploring responsible technology through archetypal responses: uncertainty analysis in Leave the World Behind
Inna Artemova
Journal of Responsible Technology · Pub Date: 2025-08-15 · DOI: 10.1016/j.jrt.2025.100134

Abstract: As AI systems grow increasingly opaque and influential, technological uncertainty is central to contemporary human experience. This study examines archetypal human responses to such uncertainty through a semiotic and narratological analysis of the film Leave the World Behind. Drawing on Lotman's concept of the semiosphere and Propp's archetypal theory, the research interprets character transformations as universal adaptive patterns. Key archetypes are identified—Adaptive Hero, Knowledge Bearer, Individualistic Survivor, Vulnerable Dependent, Social Mediator, and Digital Escapist—revealing capacities that support resilience in technology-mediated crises: collective meaning-making, emotional intelligence, inter- and intra-generational knowledge exchange, collaborative resilience, and ecological literacy. These findings suggest that sustainable human-technology relations require more than regulatory oversight: they demand inclusive, culturally grounded frameworks that prioritise human values, ethical co-design, and shared interpretive systems. By mapping archetypal responses to systemic disruption, the study contributes to emerging debates on technological governance, posthuman ethics, and the future of human agency in AI-shaped societies.

Citations: 0
Embedding ethical and legal principles in AI-driven clinical practice: two use cases in laboratory diagnostics
Marieke A.R. Bak, Martijn C. Schut, M. Corrette Ploem
Journal of Responsible Technology · Pub Date: 2025-08-12 · DOI: 10.1016/j.jrt.2025.100133

Abstract: This interdisciplinary paper explores the ethical-legal aspects of artificial intelligence (AI) in medicine. We first describe our 'Embedded Ethics' approach, which entailed collaboration between an ethicist, legal scholar, and AI specialist, and provide a brief overview of AI and the relevant European ethical-legal context. We then identify and analyse ethical and legal issues in clinical AI using two cases from the field of laboratory diagnostics: AI systems in newborn screening and for diagnosing Alzheimer's disease. These use cases reveal key concerns at each of the five phases of AI development and implementation. For every phase, we outline core ethical-legal principles and formulate guiding questions for developers. Concerns include fundamental questions about desirability and proportionality, AI-specific challenges like explainability and bias, and broader issues of shared decision-making and professional responsibility. We conclude with reflections on future challenges for clinical applications of AI systems and interdisciplinary research in this area.

Citations: 0
Artificial worlds and artificial minds: Authenticity and language learning in digital lifeworlds
Blair Matthews
Journal of Responsible Technology · Pub Date: 2025-07-29 · DOI: 10.1016/j.jrt.2025.100131

Abstract: Language learning is increasingly being extended into digital and online spaces that have been enhanced by simulated reality and augmented with data and artificial intelligence. While this may expand opportunities for language learning, some critics argue that digital spaces may represent a pastiche or a parody of reality. However, while there are genuine issues, such criticisms may often fall back on naïve or essentialist views of authenticity, in particular by narrowing language learning scenarios to real-life or genuine communication. I argue that research undersocialises authenticity by not taking social relations into sufficient consideration, which denies or elides the ways that authenticity is achieved. In this conceptual paper, I offer a relational account of authenticity, where I conceive digital environments within a stratified ontological framework, where authenticity is not inherent in individuals or texts, but instead emerges from complex social contexts. Authenticity, then, does not refer to authenticity of texts or "being oneself", but authenticity in relation to others. A stratified ontology provides opportunities to extend relations with others, offering what is described as a "submersion into a temporary agency", where language learners can experiment with the social order in order to achieve authenticity of themselves in the target language. Finally, I present a relational pedagogy based on responsiveness, where feedback is distributed among disparate human and technical actors which facilitate, problematise or endorse authenticity.

Citations: 0
Toward a responsible and ethical authorization to operate: A case study in AI consulting
Jason M. Pittman, Geoff Schaefer
Journal of Responsible Technology · Pub Date: 2025-07-24 · DOI: 10.1016/j.jrt.2025.100130

Abstract: The US federal government mandates that all technologies receive an Authorization to Operate (ATO). The ATO serves as a testament to the technology's security compliance. This process underscores a fundamental belief: technologies must conform to established security norms. Yet the security-centric view does not include ethical and responsible AI. Unlike security parameters, ethical and responsible AI lacks a standardized framework for evaluation. This leaves a critical gap in AI governance. This paper presents our consulting experiences in addressing such a gap and introduces a pioneering ATO assessment instrument. The instrument integrates ethical and responsible AI principles into assessment decision-making. We delve into the instrument's design, shedding light on unique attributes and features. Furthermore, we discuss emergent best practices related to this ATO instrument, including potential decision pitfalls of interest to practitioners and policymakers alike. Looking ahead, we envision an evolved version of this ethical and responsible ATO that incorporates continuous monitoring capabilities and novel ethical measures. Finally, we offer insights for the AI community to evaluate their AI decision-making.

Citations: 0
Unravelling responsibility for AI
Zoe Porter, Philippa Ryan, Phillip Morgan, Joanna Al-Qaddoumi, Bernard Twomey, Paul Noordhof, John McDermid, Ibrahim Habli
Journal of Responsible Technology · Pub Date: 2025-07-23 · DOI: 10.1016/j.jrt.2025.100124

Abstract: It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems. This is important to achieve justice and compensation for victims of AI harms, and to inform policy and engineering practice. But without a clear, thorough understanding of what 'responsibility' means, deliberations about where responsibility lies will be, at best, unfocused and incomplete and, at worst, misguided. Furthermore, AI-enabled systems exist within a wider ecosystem of actors, decisions, and governance structures, giving rise to complex networks of responsibility relations. To address these issues, this paper presents a conceptual framework of responsibility, accompanied by a graphical notation and general methodology for visualising these responsibility networks and for tracing different responsibility attributions for AI. Taking the three-part formulation 'Actor A is responsible for Occurrence O', the framework unravels the concept of responsibility to clarify that there are different possibilities of who is responsible for AI, the senses in which they are responsible, and the aspects of events they are responsible for. The notation allows these permutations to be represented graphically. The methodology enables users to apply the framework to specific scenarios. The aim is to offer a foundation to support stakeholders from diverse disciplinary backgrounds to discuss and address complex responsibility questions in hypothesised and real-world cases involving AI. The work is illustrated by application to a fictitious scenario of a fatal collision between a crewless, AI-enabled maritime vessel in autonomous mode and a traditional, crewed vessel at sea.

Citations: 0
Intersecting social identity and drone use in humanitarian contexts: Psychological insights for legal decisions and responsible innovation
Anastasia Kordoni, Mark Levine, Amel Bennaceur, Carlos Gavidia-Calderon, Bashar Nuseibeh
Journal of Responsible Technology · Pub Date: 2025-07-23 · DOI: 10.1016/j.jrt.2025.100129

Abstract: While the technical and ethical challenges of using drones in Search-and-Rescue operations for transnationally displaced individuals have been explored, how drone footage can shape the psychological processes at play and impact post-rescue legal decision-making has been overlooked. This paper investigates how transnationally displaced individuals' social identities are portrayed in court and the role of drone footage in reinforcing these identities. We conducted a discourse analysis of 11 open-access asylum and deportation cases following drone-assisted Search-and-Rescue operations at sea (2015–2021). Our results suggest two primary identity constructions: as victims and as traffickers, each underpinned by conflicting psychological processes. The defence portrayed the defendants through the lens of vulnerability, while the prosecution portrayed them through the lens of unlawfulness. Psychological attributions of drone footage contributed differently to identity portrayal, influencing legal decisions regarding the status and entitlements of transnationally displaced individuals. We discuss the socio-ethical implications of these findings and propose a psychosocial account for responsible innovation in technology-mediated humanitarian contexts.

Citations: 0
Navigating the complexities of AI and digital governance: the 5W1H framework
S. Matthew Liao, Iskandar Haykel, Katherine Cheung, Taylor Matalon
Journal of Responsible Technology · Pub Date: 2025-07-18 · DOI: 10.1016/j.jrt.2025.100127

Abstract: As AI and digital technologies advance rapidly, governance frameworks struggle to keep pace with emerging applications and risks. This paper introduces a "5W1H" framework to systematically analyze AI governance proposals through six key questions: What should be regulated (data, algorithms, sectors, or risk levels), Why regulate (ethics, legal compliance, market failures, or national interests), Who should regulate (industry, government, or public stakeholders), When regulation should occur (upstream, downstream, or lifecycle approaches), Where it should take place (local, national, or international levels), and How it should be enacted (hard versus soft regulation). The framework is applied to compare the European Union's AI Act with the current U.S. regulatory landscape, revealing the EU's comprehensive, risk-based approach versus America's fragmented, sector-specific strategy. By providing a structured analytical tool, the 5W1H framework helps policymakers, researchers, and stakeholders navigate complex AI governance decisions and identify areas for improvement in existing regulatory approaches.

Citations: 0