Journal of Responsible Technology: Latest Articles

Toward an anthropology of screens. Showing and hiding, exposing and protecting. Mauro Carbone and Graziano Lingua. Translated by Sarah De Sanctis. 2023. Cham: Palgrave Macmillan
Journal of responsible technology Pub Date : 2025-02-06 DOI: 10.1016/j.jrt.2025.100111
Paul Trauttmansdorff
Toward an Anthropology of Screens by Mauro Carbone and Graziano Lingua is an insightful book about the cultural and philosophical significance of screens, highlighting their role in mediating human interactions, reshaping relationships with people and artefacts, and raising ethical questions about their pervasive influence in contemporary life.
Citations: 0
Exploring research practices with non-native English speakers: A reflective case study
Journal of responsible technology Pub Date : 2025-02-05 DOI: 10.1016/j.jrt.2025.100109
Marilys Galindo, Teresa Solorzano, Julie Neisler
Our lived experiences of learning and working are personal and connected to our racial, ethnic, and cultural identities and needs. This is especially important for non-native English-speaking research participants, as English is the dominant language for learning, working, and the design of the technologies that support them in the United States. A reflective approach was used to critique the research practices the authors were involved in while co-designing with English-first and Spanish-first learners and workers. This case study explored designing learning and employment innovations to best support non-native English-speaking learners and workers during transitions along their career pathways. Three themes were generated from the data: participants reported feeling a willingness to help, autonomy of expression, and inclusiveness in the co-design process. From this critique, a structure was developed to guide researchers' decision-making and to inform ways of being more equitable and inclusive of non-native English-speaking participants in their practices.
Citations: 0
Process industry disrupted: AI and the need for human orchestration
Journal of responsible technology Pub Date : 2025-01-29 DOI: 10.1016/j.jrt.2025.100105
M.W. Vegter, V. Blok, R. Wesselink
According to EU policy makers, the introduction of AI within the process industry will help big manufacturing companies become more sustainable. At the same time, concerns arise about the future of work in these industries. As the EU also wants to actively pursue human-centered AI, this raises the question of how to implement AI within the process industry in a way that is sustainable and takes the views and interests of workers in this sector into account. To provide an answer, we conducted 'ethics parallel research', which involves empirical research: an ethnographic study of AI development within the process industry, looking specifically into the innovation process at two manufacturing plants. We found subtle but important differences that come with the respective job-related duties: while engineers continuously alter the plant as a technical system, operators hold a rather symbiotic relationship with the production process on site. Building on the framework of different mechanisms of techno-moral change, we highlight three ways in which workers might be morally impacted by AI: 1. Decisional: as data analytic tools are developed, the respective roles and duties are being decided. 2. Relational: data analytic tools might exacerbate a power imbalance in which engineers may re-script the work of operators. 3. Perceptual: data analytic technologies mediate perceptions, thus changing the relationship operators have with the production process. While in Industry 4.0 the problem is framed in terms of 'suboptimal use', in Industry 5.0 the problem should be thought of as 'suboptimal development'.
Citations: 0
Human centred explainable AI decision-making in healthcare
Journal of responsible technology Pub Date : 2025-01-10 DOI: 10.1016/j.jrt.2025.100108
Catharina M. van Leersum, Clara Maathuis
Human-centred AI (HCAI) implies building AI systems in a manner that comprehends human aims, needs, and expectations by assisting, interacting, and collaborating with humans. A further focus on explainable AI (XAI) makes it possible to gain insight into the data, reasoning, and decisions made by AI systems, facilitating human understanding and trust and contributing to identifying issues such as errors and bias. While current XAI approaches mainly have a technical focus, understanding the context and human dynamics requires a transdisciplinary perspective and a socio-technical approach. This is critical in the healthcare domain, where various risks could have serious consequences for both the safety of human life and medical devices.

A reflective ethical and socio-technical perspective, in which technical advancements and human factors co-evolve, is called human-centred explainable AI (HCXAI). This perspective places humans at the centre of AI design, with a holistic understanding of values, interpersonal dynamics, and the socially situated nature of AI systems. In the healthcare domain, to the best of our knowledge, limited knowledge exists on applying HCXAI, the ethical risks are unknown, and it is unclear which explainability elements are needed in decision-making to closely mimic human decision-making. Moreover, different stakeholders have different explanation needs, so HCXAI could be a way to focus on humane, ethical decision-making instead of purely technical choices.

To tackle this knowledge gap, this article aims to design an actionable HCXAI ethical framework, adopting a transdisciplinary approach that merges academic and practitioner knowledge and expertise from the AI, XAI, HCXAI, design science, and healthcare domains. To demonstrate the applicability of the proposed framework in real scenarios and settings while reflecting on human decision-making, two use cases are considered: the first on AI-based interpretation of MRI scans, and the second on the application of smart flooring.
Citations: 0
Decentralized governance in action: A governance framework of digital responsibility in startups
Journal of responsible technology Pub Date : 2025-01-10 DOI: 10.1016/j.jrt.2025.100107
Yangyang Zhao, Jiajun Qiu
The rise of digital technologies has fueled the emergence of decentralized governance among startups. However, this trend imposes new challenges for digitally responsible governance, such as technology usage, business accountability, and many other issues, particularly in the absence of clear guidelines. This paper explores two types of digital startups with decentralized governance: digitally transformed (e.g., DAO) and IT-enabled decentralized startups. We adapt the previously described Corporate Digital Responsibility model into a streamlined seven-cluster governance framework that is more directly applicable to these novel organizations. Through a case study, we illustrate the practical value of the conceptual framework and identify key points vital for digitally responsible governance by decentralized startups. Our findings lay a conceptual and empirical groundwork for in-depth, cross-disciplinary future inquiries into digital responsibility issues in decentralized settings.
Citations: 0
Exploring expert and public perceptions of answerability and trustworthy autonomous systems
Journal of responsible technology Pub Date : 2025-01-09 DOI: 10.1016/j.jrt.2025.100106
Louise Hatherall, Nayha Sethi
The emerging regulatory landscape addressing autonomous systems (AS) is underpinned by the notion that such systems be trustworthy. What individuals and groups need in order to determine a system as worthy of trust has consequently attracted research from a range of disciplines, although important questions remain. These include how to ensure trustworthiness in a way that is sensitive to individual histories and contexts, and whether, and how, emerging regulatory frameworks can adequately secure the trustworthiness of AS. This article reports a socio-legal analysis of four focus groups with publics and professionals exploring whether answerability can help develop trustworthy AS in health, finance, and the public sector. It finds that answerability is beneficial in some contexts, and that to find AS trustworthy, individuals often need answers about future actions and about how organisational values are embedded within a system. It also reveals pressing issues demanding attention for meaningful regulation of such systems, including dissonances between what publics and professionals identify as 'harm' where AS are deployed, and a significant lack of clarity about the expectations of regulatory bodies in the UK. The article discusses the implications of these findings for the developing but rapidly setting regulatory landscape in the UK and EU.
Citations: 0
Exploring ethical frontiers of artificial intelligence in marketing
Journal of responsible technology Pub Date : 2024-12-18 DOI: 10.1016/j.jrt.2024.100103
Harinder Hari, Arun Sharma, Sanjeev Verma, Rijul Chaturvedi
Artificial intelligence (AI) is becoming increasingly pervasive in consumers' lives. For firms, AI offers the potential to connect with, serve, and satisfy consumers with posthuman abilities. However, the adoption and usage of this technology face barriers, with ethical concerns emerging as one of the most significant, and much remains unknown about these concerns. To fill this gap, the current study undertakes a comprehensive and systematic review of 445 publications on AI and marketing ethics, utilizing the Scientific Procedures and Rationales for Systematic Literature Review protocol to conduct performance analysis (quantitative and qualitative) and science mapping (conceptual and intellectual structures) and to identify future research directions. Furthermore, the study conducts thematic and content analysis to uncover the themes, clusters, and theories operating in the field, leading to a conceptual framework that lists antecedents, mediators, moderators, and outcomes of ethics in AI in marketing. The findings present future research directions, providing guidance for practitioners and scholars in the area of ethics in AI in marketing.
Citations: 0
The heuristics gap in AI ethics: Impact on green AI policies and beyond
Journal of responsible technology Pub Date : 2024-12-16 DOI: 10.1016/j.jrt.2024.100104
Guglielmo Tamburrini
This article analyses the negative impact of heuristic biases on the main goals of AI ethics. These biases are found to hinder the identification of ethical issues in AI, the development of related ethical policies, and their application. This pervasive impact has been mostly neglected, giving rise to what is called here the heuristics gap in AI ethics. The heuristics gap is illustrated using the AI carbon footprint problem as an exemplary case. Psychological work on biases hampering climate warming mitigation actions is specialized to this problem, and novel extensions are proposed by considering the heuristic mentalization strategies one uses to design and interact with AI systems. To mitigate the effects of this heuristics gap, interventions on the design of ethical policies and suitable incentives for AI stakeholders are suggested. Finally, a checklist of questions is provided to help one systematically investigate this heuristics gap throughout the AI ethics pipeline.
Citations: 0
Exploring ethical research issues related to extended reality technologies used with autistic populations
Journal of responsible technology Pub Date : 2024-12-14 DOI: 10.1016/j.jrt.2024.100102
Nigel Newbutt, Ryan Bradley
This article explores the ethical considerations and challenges surrounding the use of extended reality (XR) technologies with autistic populations. While XR-based research offers promising avenues for supporting autistic individuals, we highlight various ethical concerns inherent in XR research and application with autistic individuals, outlining areas of concern related to privacy, security, content regulation, psychological well-being, informed consent, realism, sensory overload, and accessibility. We conclude by calling for tailored ethical frameworks to guide XR research with autistic populations, emphasizing collaboration, accessibility, and safeguarding as key principles, and underscore the importance of balancing technological innovation with ethical responsibility to ensure that XR research with autistic populations is conducted with sensitivity, inclusivity, and respect for individual rights and well-being.
Citations: 0
Towards a critical recovery of liberatory PAR for food system transformations: Struggles and strategies in collaborating with radical and progressive food movements in EU-funded R&I projects
Journal of responsible technology Pub Date : 2024-11-22 DOI: 10.1016/j.jrt.2024.100100
Tobia S. Jones, Anne M.C. Loeber
From sustainability and justice perspectives, food systems and R&I systems need transformation. Participatory action research (PAR) presents a suitable approach, as it enables those affected by a social issue and university-based researchers to collaborate in co-creating knowledge and interventionist actions. However, PAR is often misconstrued, even within projects calling for civil society actors to act as full partners in research. To avoid reproducing the very structures and practices in need of transformation, this paper argues that university researchers should team up with members of food movements to engage in 'liberatory' forms of PAR. The question is how liberatory PAR's guiding concepts of reciprocal participation, critical recovery, and systemic devolution can be enacted in projects that did not start out as PAR projects. Two EU-funded projects on food system transformation serve as a basis for answering this question, generating concrete recommendations for establishing co-creative, mutually liberating, and transdisciplinary research collectives.
Citations: 0