{"title":"The role of empathy for artificial intelligence accountability","authors":"Ramya Srinivasan , Beatriz San Miguel González","doi":"10.1016/j.jrt.2021.100021","DOIUrl":"10.1016/j.jrt.2021.100021","url":null,"abstract":"<div><p>Accountability encompasses multiple aspects such as responsibility, justification, reporting, traceability, audit, and redress so as to satisfy the diverse requirements of different stakeholders—consumers, regulators, developers, etc. In order to take into account the needs of different stakeholders and thus put accountability into practice in Artificial Intelligence, the notion of <em>empathy</em> can be quite effective. Empathy is the ability to be sensitive to the needs of someone based on understanding their affective states and intentions, caring for their feelings, and socialization, which can help in addressing the socio-technical challenges associated with accountability. The goal of this paper is twofold. First, we elucidate the connections between empathy and accountability, drawing findings from various disciplines such as psychology, social science, and organizational science. Second, we suggest potential pathways to incorporate empathy.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"9 ","pages":"Article 100021"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659621000147/pdfft?md5=d62d56f6632065dfd35eac30df62d0ad&pid=1-s2.0-S2666659621000147-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47083052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accountability of platform providers for unlawful personal data processing in their ecosystems–A socio-techno-legal analysis of Facebook and Apple's iOS according to GDPR","authors":"Christian Kurtz , Florian Wittner , Martin Semmann , Wolfgang Schulz , Tilo Böhmann","doi":"10.1016/j.jrt.2021.100018","DOIUrl":"10.1016/j.jrt.2021.100018","url":null,"abstract":"<div><p>Billions of people interact within platform-based ecosystems containing the personal data of their daily lives, data which have become readily creatable, processable, and shareable. Here, platform providers facilitate interactions between three types of relevant actors: users, service providers, and third parties. Research in the information systems field has shown that platform providers influence their platform ecosystems to promote the contributions of service providers and exercise control by utilizing boundary resources. Through a socio-techno-legal analysis of two high-profile cases under the General Data Protection Regulation (GDPR), we show that boundary resource design, arrangement, and interplay can influence whether and to what extent platform providers are accountable for unlawful personal data processing in their platform ecosystems. These findings can significantly affect how actors are held accountable for personal data misuse in platform ecosystems and, thus, the protection of personal liberty and rights in such socio-technical systems.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"9 ","pages":"Article 100018"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659621000111/pdfft?md5=973ab4afa4f2d1cc53f217345202fb68&pid=1-s2.0-S2666659621000111-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44303413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Responsible governance of civilian unmanned aerial vehicle (UAV) innovations for Indian crop insurance applications","authors":"Anjan Chamuah, Rajbeer Singh","doi":"10.1016/j.jrt.2022.100025","DOIUrl":"10.1016/j.jrt.2022.100025","url":null,"abstract":"<div><p>The civilian Unmanned Aerial Vehicle (UAV) is an emerging technology in Indian crop insurance applications. The technology is new to an agro-based country like India with diverse socio-cultural norms and values, and in such a diverse democracy, UAV governance and deployment pose significant challenges and risks. Charting out a proper framework for risk-free implementation of this governance has therefore emerged as a leading research topic in the discipline. In the innovation literature, Responsible Innovation (RI) addresses the governance of emerging technologies; RI thus serves as the theoretical framework. The study is intended to find out <strong>how the framework of RI enables responsible governance, and who the main actors and stakeholders are in the governance and deployment of civilian UAVs in crop insurance applications in India</strong>. An in-depth interview method and snowball sampling technique have been employed to identify interviewees from Delhi, Gujarat, and Rajasthan. Findings suggest that civilian UAVs are effective in handling risk, crop damage assessment, and claim settlement. The RI approach, through its dimensions and steps, enables equal participation and deliberation among all the actors and stakeholders of UAV governance, which include government bodies, research organizations, insurance agencies, local administration, and farmers. Effective regulation, accountability, and responsibility promote responsible governance.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"9 ","pages":"Article 100025"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659622000026/pdfft?md5=6fcb8e9ad2745a0da20c9119b0d88eeb&pid=1-s2.0-S2666659622000026-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48673717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The decision-point-dilemma: Yet another problem of responsibility in human-AI interaction","authors":"Laura Crompton","doi":"10.1016/j.jrt.2021.100013","DOIUrl":"10.1016/j.jrt.2021.100013","url":null,"abstract":"<div><p>AI as decision support supposedly helps human agents make ‘better’ decisions more efficiently. However, research shows that it can, sometimes greatly, influence the decisions of its human users. While there has been a fair amount of research on intended AI influence, there seem to be significant gaps in both theoretical and practical studies concerning unintended AI influence. In this paper I aim to address some of these gaps, and hope to shed some light on the ethical and moral concerns that arise with unintended AI influence. I argue that unintended AI influence has important implications for the way we perceive and evaluate human-AI interaction. To make this point approachable from both the theoretical and practical side, and to avoid anthropocentrically-laden ambiguities, I introduce the notion of decision points. Based on this, the main argument of this paper is presented in two consecutive steps: i) unintended AI influence does not allow for an appropriate determination of decision points, which I introduce as the decision-point-dilemma, and ii) this has important implications for the ascription of responsibility.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"7 ","pages":"Article 100013"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659621000068/pdfft?md5=e8634dde79377a2caf85de3bcbdd39b1&pid=1-s2.0-S2666659621000068-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48110664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable","authors":"Auste Simkute , Ewa Luger , Bronwyn Jones , Michael Evans , Rhianne Jones","doi":"10.1016/j.jrt.2021.100017","DOIUrl":"10.1016/j.jrt.2021.100017","url":null,"abstract":"<div><p>Algorithmic decision support systems are widely applied in domains ranging from healthcare to journalism. To ensure that these systems are fair and accountable, it is essential that humans can maintain meaningful agency and understand and oversee algorithmic processes. Explainability is often seen as a promising mechanism for enabling the human-in-the-loop; however, current approaches are ineffective and can lead to various biases. We argue that explainability should be tailored to support the naturalistic decision-making and sensemaking strategies employed by domain experts and novices. Based on a review of the cognitive psychology and human factors literature, we map potential decision-making strategies dependent on expertise, risk and time dynamics and propose the conceptual Expertise, Risk and Time Explainability framework, intended to be used as explainability design guidelines. Finally, we present a worked example in journalism to illustrate the applicability of our framework in practice.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"7 ","pages":"Article 100017"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266665962100010X/pdfft?md5=209e9bba6d0a6ab1de48f2f469aae35b&pid=1-s2.0-S266665962100010X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42673610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"“Computer says no”: Algorithmic decision support and organisational responsibility","authors":"Angelika Adensamer , Rita Gsenger , Lukas Daniel Klausner","doi":"10.1016/j.jrt.2021.100014","DOIUrl":"https://doi.org/10.1016/j.jrt.2021.100014","url":null,"abstract":"<div><p>Algorithmic decision support is increasingly used in a whole array of different contexts and structures in various areas of society, influencing many people’s lives. Its use raises questions, among others, about accountability, transparency and responsibility. While there is substantial research on the issue of algorithmic systems and responsibility in general, there is little to no prior research on <em>organisational</em> responsibility and its attribution. Our article aims to fill that gap; we give a brief overview of the central issues connected to ADS, responsibility and decision-making in organisational contexts and identify open questions and research gaps. Furthermore, we describe a set of guidelines and a complementary digital tool to assist practitioners in mapping responsibility when introducing ADS within their organisational context.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"7 ","pages":"Article 100014"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266665962100007X/pdfft?md5=66c50c16e31d2aebf63b1f07b3c84789&pid=1-s2.0-S266665962100007X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72106858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The agency of the forum: Mechanisms for algorithmic accountability through the lens of agency","authors":"Florian Cech","doi":"10.1016/j.jrt.2021.100015","DOIUrl":"https://doi.org/10.1016/j.jrt.2021.100015","url":null,"abstract":"<div><p>The wicked challenge of designing accountability measures aimed at improving algorithmic accountability demands human-centered approaches. Based on one of the most common definitions of accountability as the relationship between an actor and a forum, this article presents an analytic lens in the form of actor and forum agency, through which the accountability process can be analysed. Two case studies, the Austrian Public Employment Service’s AMAS system and the EnerCoach energy accounting system, serve as examples for an analysis of accountability based on the agency of the stakeholders. Developed through the comparison of the two systems, the Algorithmic Accountability Agency Framework (A<sup>3</sup> framework), aimed at supporting the analysis and improvement of agency throughout the four steps of the accountability process, is presented and discussed.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"7 ","pages":"Article 100015"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659621000081/pdfft?md5=0c4add516911afa8f58f6e10d59434da&pid=1-s2.0-S2666659621000081-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72107121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Causality-based accountability mechanisms for socio-technical systems","authors":"Amjad Ibrahim, Stavros Kyriakopoulos, Alexander Pretschner","doi":"10.1016/j.jrt.2021.100016","DOIUrl":"10.1016/j.jrt.2021.100016","url":null,"abstract":"<div><p>With the rapid deployment of socio-technical systems into all aspects of daily life, we need to be prepared for their failures. It is inherently impractical to specify all the lawful interactions of these systems; in turn, the possibility of invalid interactions cannot be excluded at design time. As modern systems might harm people or compromise assets if they fail, they ought to be accountable. Accountability is an interdisciplinary concept that cannot be easily described as a holistic technical property of a system. Thus, in this paper, we propose a bottom-up approach to enable accountability using goal-specific accountability mechanisms. Each mechanism provides forensic capabilities that help us to identify the root cause of a specific type of event, both to eliminate the underlying (technical) problem and to assign blame. This paper presents the different ingredients that are required to design and build an accountability mechanism and focuses on the technical and practical utilization of causality theories as a cornerstone to achieve our goal. To the best of our knowledge, the literature lacks a systematic methodology to envision, design, and implement capabilities that promote accountability in systems. With a case study from the area of microservice-based systems, which we deem representative of modern complex systems, we demonstrate the effectiveness of the approach as a whole and show that it is generic enough to accommodate different accountability goals and mechanisms.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"7 ","pages":"Article 100016"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659621000093/pdfft?md5=70a06e5c6bb7727c37ce86ad9a9191e0&pid=1-s2.0-S2666659621000093-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46987346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Role of Engineers in Harmonising Human Values for AI Systems Design","authors":"Steven Umbrello","doi":"10.21203/rs.3.rs-709596/v1","DOIUrl":"https://doi.org/10.21203/rs.3.rs-709596/v1","url":null,"abstract":"Most engineers work within social structures governing and governed by a set of values that primarily emphasise economic concerns. The majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with the values of those societies. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation but be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this broad stakeholder group, engineers must adopt a systems thinking approach that allows them to understand the sociotechnicity of artificial intelligence systems across sociocultural domains. It claims that value sensitive design, and envisioning cards in particular, provides a solid first step towards helping designers harmonise human values, understood across spatiotemporal boundaries, with economic values, rather than the former coming at the opportunity cost of the latter.","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41705814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"‘Toward a Global Social Contract for Trade’ - a Rawlsian approach to Blockchain Systems Design and Responsible Trade Facilitation in the New Bretton Woods era","authors":"Arnold Lim , Enrong Pan","doi":"10.1016/j.jrt.2021.100011","DOIUrl":"10.1016/j.jrt.2021.100011","url":null,"abstract":"<div><p>Imminent changes to the international monetary system alongside a shift toward more egalitarian principles of justice in commercial contracts for trade are now taking place. Such changes, however, do not sufficiently account for circumstances of hardship, or black-swan events such as COVID-19, in which the relative losers of trading arrangements should continue to receive outcomes that are not only efficient, but also fair and resilient. We argue that the ‘Society-in-the-Loop’ (SITL) social contract paradigm, in conjunction with the use of Strategic Responsible Innovation Management (StRIM), can together provide a solution for improving distributive justice in trade. Through collaboration with a locally based trade facilitation company, we describe the innovation-planning phase of a blockchain smart contract solution based on Derek Leben's idea of a ‘Rawlsian Algorithm’ (2017). It is demonstrated how this can be used to strengthen the algorithmic fairness of commercial contract implementation in accordance with existing ISO 20022 standards. Since no formal design framework currently exists for modeling blockchain-oriented software (BOS), an agile development approach is adopted which takes account of the substantial difference between traditional software development and smart contracts. This method involves the construction of UML Use Case, Sequence, and Class diagrams, with a view to blockchain specificities. Evaluation and feedback from the company are also considered.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"6 ","pages":"Article 100011"},"PeriodicalIF":0.0,"publicationDate":"2021-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jrt.2021.100011","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47201164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}