Journal of responsible technology: Latest Articles

Jürgen Habermas revisited via Tim Cook's Wikipedia biography: A hermeneutic approach to critical Information Systems research
Journal of responsible technology Pub Date: 2024-08-03 DOI: 10.1016/j.jrt.2024.100090
Reilly Smethurst, Amber G. Young, Ariel D. Wigdor
{"title":"Jürgen Habermas revisited via Tim Cook's Wikipedia biography: A hermeneutic approach to critical Information Systems research","authors":"Reilly Smethurst ,&nbsp;Amber G. Young ,&nbsp;Ariel D. Wigdor","doi":"10.1016/j.jrt.2024.100090","DOIUrl":"10.1016/j.jrt.2024.100090","url":null,"abstract":"<div><div>Critical Information Systems (IS) research is sometimes appreciated for the shades of gray it adds to sunny portraits of technology's emancipatory potential. In this article, we revisit a theory about Wikipedia’s putative freedom from the authority of corporate media's editors and authors. We present the curious example of Tim Cook's Wikipedia biography and its history of crowd-sourced editorial decisions, published on Wikipedia's talk pages. We use a hermeneutic method to subject the theory about Wikipedia's “rational discourse” and “emancipatory potential” to a soft, empirical test. When we examined Cook's Wikipedia biography and its editorial decisions, what we found pertained to authoritative discourse – the opposite of “rational discourse” – as well as Jürgen Habermas's concept of dramaturgical action. Our discussion aims to change how critical scholars think about IS's Habermasian theories and emancipatory technology. Our contribution – a critical intervention – is a clear alternative to mainstream IS research's moral prescriptions and mechanistic causes.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"20 ","pages":"Article 100090"},"PeriodicalIF":0.0,"publicationDate":"2024-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000167/pdfft?md5=d36142a2d3fc5a0c1844cf9be7f0ce77&pid=1-s2.0-S2666659624000167-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142310858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Decoding faces: Misalignments of gender identification in automated systems
Journal of responsible technology Pub Date: 2024-06-17 DOI: 10.1016/j.jrt.2024.100089
Elena Beretta, Cristina Voto, Elena Rozera
{"title":"Decoding faces: Misalignments of gender identification in automated systems","authors":"Elena Beretta ,&nbsp;Cristina Voto ,&nbsp;Elena Rozera","doi":"10.1016/j.jrt.2024.100089","DOIUrl":"10.1016/j.jrt.2024.100089","url":null,"abstract":"<div><p>Automated Facial Analysis technologies, predominantly used for facial detection and recognition, have garnered significant attention in recent years. Although these technologies have seen advancements and widespread adoption, biases embedded within systems have raised ethical concerns. This research aims to delve into the disparities of Automatic Gender Recognition systems (AGRs), particularly their oversimplification of gender identities through a binary lens. Such a reductionist perspective is known to marginalize and misgender individuals. This study set out to investigate the alignment of an individual's gender identity and its expression through the face with societal norms, and the perceived difference between misgendering experiences from machines versus humans. Insights were gathered through an online survey, utilizing an AGR system to simulate misgendering experiences. The overarching goal is to shed light on gender identity nuances and guide the creation of more ethically responsible and inclusive facial recognition software.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"19 ","pages":"Article 100089"},"PeriodicalIF":0.0,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000155/pdfft?md5=24b180fd999b7d4970841ecf98f18ac7&pid=1-s2.0-S2666659624000155-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141630428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Infrastructural justice for responsible software engineering
Journal of responsible technology Pub Date: 2024-06-04 DOI: 10.1016/j.jrt.2024.100087
Sarah Robinson, Jim Buckley, Luigina Ciolfi, Conor Linehan, Clare McInerney, Bashar Nuseibeh, John Twomey, Irum Rauf, John McCarthy
{"title":"Infrastructural justice for responsible software engineering,","authors":"Sarah Robinson ,&nbsp;Jim Buckley ,&nbsp;Luigina Ciolfi ,&nbsp;Conor Linehan ,&nbsp;Clare McInerney ,&nbsp;Bashar Nuseibeh ,&nbsp;John Twomey ,&nbsp;Irum Rauf ,&nbsp;John McCarthy","doi":"10.1016/j.jrt.2024.100087","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100087","url":null,"abstract":"<div><p>In recent years, we have seen many examples of software products unintentionally causing demonstrable harm. Many guidelines for ethical and responsible computing have been developed in response. Dominant approaches typically attribute liability and blame to individual companies or actors, rather than understanding how the working practices, norms, and cultural understandings in the software industry contribute to such outcomes. In this paper, we propose an understanding of responsibility that is infrastructural, relational, and cultural; thus, providing a foundation to better enable responsible software engineering into the future. Our approach draws on Young's (2006) social connection model of responsibility and Star and Ruhleder's (1994) concept of infrastructure. By bringing these theories together we introduce a concept called infrastructural injustice, which offers a new way for software engineers to consider their opportunities for responsible action with respect to society and the planet. We illustrate the utility of this approach by applying it to an Open-Source software communities’ development of Deepfake technology, to find key leverage points of responsibility that are relevant to both Deepfake technology and software engineering more broadly.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"19 ","pages":"Article 100087"},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000131/pdfft?md5=129d725094c45ad3f08ea3d866a85b49&pid=1-s2.0-S2666659624000131-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141307885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
European technological protectionism and the risk of moral isolationism: The case of quantum technology development
Journal of responsible technology Pub Date: 2024-06-01 DOI: 10.1016/j.jrt.2024.100084
Clare Shelley-Egan, Pieter Vermaas
{"title":"European technological protectionism and the risk of moral isolationism: The case of quantum technology development","authors":"Clare Shelley-Egan,&nbsp;Pieter Vermaas","doi":"10.1016/j.jrt.2024.100084","DOIUrl":"10.1016/j.jrt.2024.100084","url":null,"abstract":"<div><p>In this editorial, we engage with the European Commission's 2023 recommendation calling for risk assessment with Member States on four critical technology areas, including quantum technology. A particular emphasis is put on the risks associated with technology security and technology leakage. Such risks may lead to protectionist measures. Mobilising European normative anchor points that inform the “right impacts” of research and innovation, we argue that a protectionist approach on the part of the European Union can lead to moral isolationism. This, in turn, can limit Europe's contribution to global development with respect to technological advances, sustainable development and quality of life. We contend that decisions on protectionism around quantum technology should not be made with a protectionist mindset about European values.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100084"},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000106/pdfft?md5=faecb48e04356c91ce7d914c60d69aa6&pid=1-s2.0-S2666659624000106-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141037141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enabling affordances for AI Governance
Journal of responsible technology Pub Date: 2024-05-15 DOI: 10.1016/j.jrt.2024.100086
Siri Padmanabhan Poti, Christopher J Stanton
{"title":"Enabling affordances for AI Governance","authors":"Siri Padmanabhan Poti,&nbsp;Christopher J Stanton","doi":"10.1016/j.jrt.2024.100086","DOIUrl":"10.1016/j.jrt.2024.100086","url":null,"abstract":"<div><p>Organizations dealing with mission-critical AI based autonomous systems may need to provide continuous risk management controls and establish means for their governance. To achieve this, organizations are required to embed trustworthiness and transparency in these systems, with human overseeing and accountability. Autonomous systems gain trustworthiness, transparency, quality, and maintainability through the assurance of outcomes, explanations of behavior, and interpretations of intent. However, technical, commercial, and market challenges during the software development lifecycle (SDLC) of autonomous systems can lead to compromises in their quality, maintainability, interpretability and explainability. This paper conceptually models transformation of SDLC to enable affordances for assurance, explanations, interpretations, and overall governance in autonomous systems. We argue that opportunities for transformation of SDLC are available through concerted interventions such as technical debt management, shift-left approach and non-ephemeral artifacts. This paper contributes to the theory and practice of governance of autonomous systems, and in building trustworthiness incrementally and hierarchically.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100086"},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266665962400012X/pdfft?md5=9bf6cc548743ad7d2d5c0830773f5145&pid=1-s2.0-S266665962400012X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141058232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A place where "You can be who you've always wanted to be…" Examining the ethics of intelligent virtual environments
Journal of responsible technology Pub Date: 2024-05-15 DOI: 10.1016/j.jrt.2024.100085
Danielle Shanley, Darian Meacham
{"title":"A place where “You can be who you've always wanted to be…” Examining the ethics of intelligent virtual environments","authors":"Danielle Shanley,&nbsp;Darian Meacham","doi":"10.1016/j.jrt.2024.100085","DOIUrl":"10.1016/j.jrt.2024.100085","url":null,"abstract":"<div><p>The rapid development of interactive virtual reality (VR) spaces like VRChat has been made possible due to continuing increases in computer processing power, advances in artificial intelligence (AI) technologies such as natural language processing (NLP), and advances in 3D modelling and spatial and edge computing. Perhaps because these spaces rely on new ways of integrating different forms of advanced computing, such as AI and VR, little is yet known about their potential ethical implications. In this contribution, we provide an overview of key themes frequently discussed in the context of these so-called <em>Intelligent Virtual Environments</em> (IVEs). We highlight different ethical questions and the ways in which they are typically taken up in the literature. We first map how common concerns tend to revolve around technological feasibility and psychological impacts. We then ask how shifting the focus towards more philosophical perspectives might reorient discussions surrounding IVEs, opening up important avenues for future research. Our contribution in this review is to highlight how as active mediators of experience these technologies require critical reflection and should not be evaluated solely in terms of their functionality.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100085"},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000118/pdfft?md5=07ae452d1cff9888973af4ceb889ddc6&pid=1-s2.0-S2666659624000118-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141047393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Digital humanism as a bottom-up ethics
Journal of responsible technology Pub Date: 2024-03-29 DOI: 10.1016/j.jrt.2024.100082
Gemma Serrano, Francesco Striano, Steven Umbrello
{"title":"Digital humanism as a bottom-up ethics","authors":"Gemma Serrano ,&nbsp;Francesco Striano ,&nbsp;Steven Umbrello","doi":"10.1016/j.jrt.2024.100082","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100082","url":null,"abstract":"<div><p>In this paper, we explore a new perspective on digital humanism, emphasizing the centrality of multi-stakeholder dialogues and a bottom-up approach to surfacing stakeholder values. This approach starkly contrasts with existing frameworks, such as the Vienna Manifesto's top-down digital humanism, which hinges on pre-established first principles. Our approach provides a more flexible, inclusive framework that captures a broader spectrum of ethical considerations, particularly those pertinent to the digital realm. We apply our model to two case studies, comparing the insights generated with those derived from a utilitarian perspective and the Vienna Manifesto's approach. The findings underscore the enhanced effectiveness of our approach in revealing additional, often overlooked stakeholder values, not typically encapsulated by traditional top-down methodologies. Furthermore, this paper positions our digital humanism approach as a powerful tool for framing ethics-by-design, by promoting a narrative that empowers and centralizes stakeholders. As a result, it paves the way for more nuanced, comprehensive ethical considerations in the design and implementation of digital technologies, thereby enriching the existing literature on digital ethics and setting a promising agenda for future research.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100082"},"PeriodicalIF":0.0,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000088/pdfft?md5=a40431af04a93c455298d3e1eacfeb46&pid=1-s2.0-S2666659624000088-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140330847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Do we really need a "Digital Humanism"? A critique based on post-human philosophy of technology and socio-legal techniques
Journal of responsible technology Pub Date: 2024-03-12 DOI: 10.1016/j.jrt.2024.100080
Federica Buongiorno, Xenia Chiaramonte
{"title":"Do we really need a “Digital Humanism”? A critique based on post-human philosophy of technology and socio-legal techniques","authors":"Federica Buongiorno ,&nbsp;Xenia Chiaramonte","doi":"10.1016/j.jrt.2024.100080","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100080","url":null,"abstract":"<div><p>Few concepts have been subjected to as intense scrutiny in contemporary discourse as that of “humanism.” While these critiques have acknowledged the importance of retaining certain key aspects of humanism, such as rights, freedom, and human dignity, the term has assumed ambivalence, especially in light of post-colonial and gender studies, that cannot be ignored. The “Vienna Manifesto on Digital Humanism,” as well as the recent volume (2022) titled <em>Perspectives on Digital Humanism</em>, bear a complex imprint of this ambivalence. In this contribution, we aim to bring to the forefront and decipher this underlying trace, by considering alternative (non-humanistic) ways to understand human-technologies relations, beyond the dominant neoliberal paradigm (paragraphs 1 and 2); we then analyse those relations within the specific context of legal studies (paragraphs 3 and 4), one in which the interdependency of humans and non-humans shows a specific and complex form of “fundamental ambivalence.”</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100080"},"PeriodicalIF":0.0,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000064/pdfft?md5=a83279cb48841b221775aa3aa2b0256f&pid=1-s2.0-S2666659624000064-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140187836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Intelligence as a human life form
Journal of responsible technology Pub Date: 2024-03-11 DOI: 10.1016/j.jrt.2024.100081
Maurizio Ferraris
{"title":"Intelligence as a human life form","authors":"Maurizio Ferraris","doi":"10.1016/j.jrt.2024.100081","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100081","url":null,"abstract":"<div><p>This text aims to counter the anxieties generated by the recent emergence of AI and the criticisms leveled at it, demanding its moralization. It does so by demonstrating that AI is neither new nor is it true intelligence but rather a tool, akin to many others that have long been serving human intelligence and its objectives. In what follows, I offer a broader reflection on technology that aims to contextualize the novelty and singularity attributed to AI within the history of technological developments. My ultimate goal is to relativize the novelty of AI, seeking to alleviate the moral anxieties it currently elicits and encouraging a more normal, optimistic view of it. The first step in understanding AI is indeed to realize that its novelty is only relative, and that AI has many ancestors that, upon closer examination, turn out to be closely related.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100081"},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000076/pdfft?md5=1b728ab83e058b5709581507a0c2ecfb&pid=1-s2.0-S2666659624000076-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140187837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Inherently privacy-preserving vision for trustworthy autonomous systems: Needs and solutions
Journal of responsible technology Pub Date: 2024-03-01 DOI: 10.1016/j.jrt.2024.100079
Adam K. Taras, Niko Sünderhauf, Peter Corke, Donald G. Dansereau
{"title":"Inherently privacy-preserving vision for trustworthy autonomous systems: Needs and solutions","authors":"Adam K. Taras ,&nbsp;Niko Sünderhauf ,&nbsp;Peter Corke ,&nbsp;Donald G. Dansereau","doi":"10.1016/j.jrt.2024.100079","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100079","url":null,"abstract":"<div><p>Vision is an effective sensor for robotics from which we can derive rich information about the environment: the geometry and semantics of the scene, as well as the age, identity, and activity of humans within that scene. This raises important questions about the reach, lifespan, and misuse of this information. This paper is a call to action to consider privacy in robotic vision. We propose a specific form of inherent privacy preservation in which no images are captured or could be reconstructed by an attacker, even with full remote access. We present a set of principles by which such systems could be designed, employing data-destroying operations and obfuscation in the optical and analogue domains. These cameras <em>never</em> see a full scene. Our localisation case study demonstrates in simulation four implementations that all fulfil this task. The design space of such systems is vast despite the constraints of optical-analogue processing. We hope to inspire future works that expand the range of applications open to sighted robotic systems.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"17 ","pages":"Article 100079"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000052/pdfft?md5=4bc01eda85dc3576e713b1aa99ec1739&pid=1-s2.0-S2666659624000052-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139999894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0