Journal of Responsible Technology: Latest Articles

A Neo-Republican Critique of AI ethics
Journal of Responsible Technology. Pub Date: 2022-04-01. DOI: 10.1016/j.jrt.2021.100022
Jonne Maas
Abstract: The AI ethics literature, which aims at the responsible development of AI systems, widely agrees that society is in dire need of effective accountability mechanisms for AI systems. Machine learning (ML) systems in particular give cause for concern due to their opaque and self-learning characteristics. Nevertheless, what such accountability mechanisms should look like remains either largely unspecified (e.g., 'stakeholder input') or ineffective (e.g., 'ethical guidelines'). In this paper, I argue that the difficulty of formulating and developing effective accountability mechanisms lies partly in the predominant focus on Mill's harm principle, rooted in the conception of freedom as non-interference. A strong focus on harm overshadows other moral wrongs, such as potentially problematic power dynamics between those who shape the system and those affected by it. I propose that the neo-republican conception of freedom as non-domination provides a suitable framework to inform responsible ML development. Domination, as understood by neo-republicans, is a moral wrong because it undermines the potential for human flourishing. To mitigate domination, neo-republicans call for accountability mechanisms that minimize arbitrary relations of power. Neo-republicanism should hence inform responsible ML development, as it provides substantive and concrete grounds for when accountability mechanisms are effective (i.e., when they are non-dominating).
Citations: 3
The role of empathy for artificial intelligence accountability
Journal of Responsible Technology. Pub Date: 2022-04-01. DOI: 10.1016/j.jrt.2021.100021
Ramya Srinivasan, Beatriz San Miguel González
Abstract: Accountability encompasses multiple aspects such as responsibility, justification, reporting, traceability, audit, and redress, so as to satisfy the diverse requirements of different stakeholders: consumers, regulators, developers, etc. To take the needs of different stakeholders into account, and thus to put accountability into practice in artificial intelligence, the notion of empathy can be quite effective. Empathy is the ability to be sensitive to the needs of someone based on understanding their affective states and intentions, caring for their feelings, and socialization, which can help address the socio-technical challenges associated with accountability. The goal of this paper is twofold. First, we elucidate the connections between empathy and accountability, drawing findings from various disciplines such as psychology, social science, and organizational science. Second, we suggest potential pathways to incorporate empathy.
Citations: 10
Accountability of platform providers for unlawful personal data processing in their ecosystems–A socio-techno-legal analysis of Facebook and Apple's iOS according to GDPR
Journal of Responsible Technology. Pub Date: 2022-04-01. DOI: 10.1016/j.jrt.2021.100018
Christian Kurtz, Florian Wittner, Martin Semmann, Wolfgang Schulz, Tilo Böhmann
Abstract: Billions of people interact within platform-based ecosystems containing the personal data of their daily lives, data that has become readily creatable, processable, and shareable. Here, platform providers facilitate interactions between three types of relevant actors: users, service providers, and third parties. Research in the information systems field has shown that platform providers influence their platform ecosystems to promote the contributions of service providers and exercise control by utilizing boundary resources. Through a socio-techno-legal analysis of two high-profile cases and their application to the General Data Protection Regulation (GDPR), we show that boundary resource design, arrangement, and interplay can influence whether and to what extent platform providers are accountable for unlawful personal data processing in their platform ecosystems. The findings can have a significant impact on holding actors to account for personal data misuse in platform ecosystems and, thus, on the protection of personal liberty and rights in such socio-technical systems.
Citations: 2
Responsible governance of civilian unmanned aerial vehicle (UAV) innovations for Indian crop insurance applications
Journal of Responsible Technology. Pub Date: 2022-04-01. DOI: 10.1016/j.jrt.2022.100025
Anjan Chamuah, Rajbeer Singh
Abstract: The civilian unmanned aerial vehicle (UAV) is an emerging technology in Indian crop insurance applications. The technology is new to an agro-based country like India, with its diverse socio-cultural norms and values, and in such a diverse democracy UAV governance and deployment pose significant challenges and risks. Charting out a proper framework for risk-free implementation of this governance has accordingly emerged as a leading research topic in the discipline. In the innovation literature, Responsible Innovation (RI) addresses the governance of emerging technologies; RI is thus significant as a theoretical framework. The study sets out to find out how the framework of RI enables responsible governance, and who the main actors and stakeholders are in the governance and deployment of civilian UAVs in crop insurance applications in India. An in-depth interview method and snowball sampling technique were employed to identify interviewees from Delhi, Gujarat, and Rajasthan. Findings suggest that the civilian UAV is effective in handling risk, crop damage assessment, and claim settlement. The RI approach, through its dimensions and steps, enables equal participation and deliberation among all the actors and stakeholders of UAV governance, which include government bodies, research organizations, insurance agencies, local administration, and farmers. Effective regulation and adherence to accountability and responsibility promote responsible governance.
Citations: 3
The decision-point-dilemma: Yet another problem of responsibility in human-AI interaction
Journal of Responsible Technology. Pub Date: 2021-10-01. DOI: 10.1016/j.jrt.2021.100013
Laura Crompton
Abstract: AI as decision support supposedly helps human agents make 'better' decisions more efficiently. However, research shows that it can, sometimes greatly, influence the decisions of its human users. While there has been a fair amount of research on intended AI influence, there seem to be great gaps in both theoretical and practical studies of unintended AI influence. In this paper I aim to address some of these gaps, and hope to shed some light on the ethical and moral concerns that arise with unintended AI influence. I argue that unintended AI influence has important implications for the way we perceive and evaluate human-AI interaction. To make this point approachable from both the theoretical and practical side, and to avoid anthropocentrically laden ambiguities, I introduce the notion of decision points. Based on this, the main argument of the paper is presented in two consecutive steps: (i) unintended AI influence does not allow for an appropriate determination of decision points, which I introduce as the decision-point-dilemma, and (ii) this has important implications for the ascription of responsibility.
Citations: 3
Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable
Journal of Responsible Technology. Pub Date: 2021-10-01. DOI: 10.1016/j.jrt.2021.100017
Auste Simkute, Ewa Luger, Bronwyn Jones, Michael Evans, Rhianne Jones
Abstract: Algorithmic decision support systems are widely applied in domains ranging from healthcare to journalism. To ensure that these systems are fair and accountable, it is essential that humans can maintain meaningful agency and understand and oversee algorithmic processes. Explainability is often seen as a promising mechanism for keeping the human in the loop; however, current approaches are ineffective and can lead to various biases. We argue that explainability should be tailored to support the naturalistic decision-making and sensemaking strategies employed by domain experts and novices. Based on a review of the cognitive psychology and human factors literature, we map potential decision-making strategies dependent on expertise, risk, and time dynamics, and propose the conceptual Expertise, Risk and Time Explainability framework, intended to be used as explainability design guidelines. Finally, we present a worked example in journalism to illustrate the applicability of our framework in practice.
Citations: 12
“Computer says no”: Algorithmic decision support and organisational responsibility
Journal of Responsible Technology. Pub Date: 2021-10-01. DOI: 10.1016/j.jrt.2021.100014
Angelika Adensamer, Rita Gsenger, Lukas Daniel Klausner
Abstract: Algorithmic decision support (ADS) is increasingly used in a whole array of different contexts and structures in various areas of society, influencing many people's lives. Its use raises questions about, among other things, accountability, transparency, and responsibility. While there is substantial research on algorithmic systems and responsibility in general, there is little to no prior research on organisational responsibility and its attribution. Our article aims to fill that gap: we give a brief overview of the central issues connected to ADS, responsibility, and decision-making in organisational contexts, and identify open questions and research gaps. Furthermore, we describe a set of guidelines and a complementary digital tool to assist practitioners in mapping responsibility when introducing ADS within their organisational context.
Citations: 7
The agency of the forum: Mechanisms for algorithmic accountability through the lens of agency
Journal of Responsible Technology. Pub Date: 2021-10-01. DOI: 10.1016/j.jrt.2021.100015
Florian Cech
Abstract: The wicked challenge of designing measures aimed at improving algorithmic accountability demands human-centered approaches. Based on one of the most common definitions of accountability, as the relationship between an actor and a forum, this article presents an analytic lens in the form of actor and forum agency through which the accountability process can be analysed. Two case studies, the Austrian Public Employment Service's AMAS system and the EnerCoach energy accounting system, serve as examples for an analysis of accountability based on the agency of the stakeholders. Developed through the comparison of the two systems, the Algorithmic Accountability Agency (A³) framework, aimed at supporting the analysis and improvement of agency throughout the four steps of the accountability process, is presented and discussed.
Citations: 4
Causality-based accountability mechanisms for socio-technical systems
Journal of Responsible Technology. Pub Date: 2021-10-01. DOI: 10.1016/j.jrt.2021.100016
Amjad Ibrahim, Stavros Kyriakopoulos, Alexander Pretschner
Abstract: With the rapid deployment of socio-technical systems into all aspects of daily life, we need to be prepared for their failures. It is inherently impractical to specify all the lawful interactions of these systems; in turn, the possibility of invalid interactions cannot be excluded at design time. As modern systems might harm people or compromise assets if they fail, they ought to be accountable. Accountability is an interdisciplinary concept that cannot easily be described as a holistic technical property of a system. Thus, in this paper, we propose a bottom-up approach that enables accountability through goal-specific accountability mechanisms. Each mechanism provides forensic capabilities that help us identify the root cause of a specific type of event, both to eliminate the underlying (technical) problem and to assign blame. This paper presents the different ingredients required to design and build an accountability mechanism, and focuses on the technical and practical utilization of causality theories as a cornerstone for achieving this goal. To the best of our knowledge, the literature lacks a systematic methodology to envision, design, and implement abilities that promote accountability in systems. With a case study from the area of microservice-based systems, which we deem representative of modern complex systems, we demonstrate the effectiveness of the approach as a whole and show that it is generic enough to accommodate different accountability goals and mechanisms.
Citations: 1
The Role of Engineers in Harmonising Human Values for AI Systems Design
Journal of Responsible Technology. Pub Date: 2021-09-13. DOI: 10.21203/rs.3.rs-709596/v1
Steven Umbrello
Abstract: Most engineers work within social structures governing, and governed by, a set of values that primarily emphasise economic concerns, and the majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with the values of those societies. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation yet be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this broad stakeholder group, engineers must adopt a systems thinking approach that allows them to understand the sociotechnicity of artificial intelligence systems across sociocultural domains. It claims that value sensitive design, and envisioning cards in particular, provides a solid first step towards helping designers harmonise human values, understood across spatiotemporal boundaries, with economic values, rather than the former coming at the opportunity cost of the latter.
Citations: 10