{"title":"Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable","authors":"Auste Simkute , Ewa Luger , Bronwyn Jones , Michael Evans , Rhianne Jones","doi":"10.1016/j.jrt.2021.100017","DOIUrl":"10.1016/j.jrt.2021.100017","url":null,"abstract":"<div><p>Algorithmic decision support systems are widely applied in domains ranging from healthcare to journalism. To ensure that these systems are fair and accountable, it is essential that humans can maintain meaningful agency and understand and oversee algorithmic processes. Explainability is often seen as a promising mechanism for keeping humans in the loop; however, current approaches are ineffective and can lead to various biases. We argue that explainability should be tailored to support the naturalistic decision-making and sensemaking strategies employed by domain experts and novices. Based on a review of the cognitive psychology and human factors literature, we map potential decision-making strategies dependent on expertise, risk and time dynamics, and propose the conceptual Expertise, Risk and Time Explainability framework, intended to be used as explainability design guidelines. Finally, we present a worked example in journalism to illustrate the applicability of our framework in practice.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"7 ","pages":"Article 100017"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266665962100010X/pdfft?md5=209e9bba6d0a6ab1de48f2f469aae35b&pid=1-s2.0-S266665962100010X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42673610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"“Computer says no”: Algorithmic decision support and organisational responsibility","authors":"Angelika Adensamer , Rita Gsenger , Lukas Daniel Klausner","doi":"10.1016/j.jrt.2021.100014","DOIUrl":"https://doi.org/10.1016/j.jrt.2021.100014","url":null,"abstract":"<div><p>Algorithmic decision support (ADS) is increasingly used in a wide array of contexts and structures across society, influencing many people’s lives. Its use raises questions about, among other things, accountability, transparency and responsibility. While there is substantial research on algorithmic systems and responsibility in general, there is little to no prior research on <em>organisational</em> responsibility and its attribution. Our article aims to fill that gap: we give a brief overview of the central issues connected to ADS, responsibility and decision-making in organisational contexts and identify open questions and research gaps. Furthermore, we describe a set of guidelines and a complementary digital tool to assist practitioners in mapping responsibility when introducing ADS within their organisational context.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"7 ","pages":"Article 100014"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266665962100007X/pdfft?md5=66c50c16e31d2aebf63b1f07b3c84789&pid=1-s2.0-S266665962100007X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72106858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The agency of the forum: Mechanisms for algorithmic accountability through the lens of agency","authors":"Florian Cech","doi":"10.1016/j.jrt.2021.100015","DOIUrl":"https://doi.org/10.1016/j.jrt.2021.100015","url":null,"abstract":"<div><p>The wicked challenge of designing measures aimed at improving algorithmic accountability demands human-centered approaches. Based on one of the most common definitions of accountability as the relationship between an actor and a forum, this article presents an analytic lens in the form of actor and forum agency, through which the accountability process can be analysed. Two case studies, the Austrian Public Employment Service’s AMAS system and the EnerCoach energy accounting system, serve as examples for an analysis of accountability based on the agency of the stakeholders. Developed through the comparison of the two systems, the Algorithmic Accountability Agency Framework (A<sup>3</sup> framework), aimed at supporting the analysis and improvement of agency throughout the four steps of the accountability process, is presented and discussed.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"7 ","pages":"Article 100015"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659621000081/pdfft?md5=0c4add516911afa8f58f6e10d59434da&pid=1-s2.0-S2666659621000081-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72107121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Causality-based accountability mechanisms for socio-technical systems","authors":"Amjad Ibrahim, Stavros Kyriakopoulos, Alexander Pretschner","doi":"10.1016/j.jrt.2021.100016","DOIUrl":"10.1016/j.jrt.2021.100016","url":null,"abstract":"<div><p>With the rapid deployment of socio-technical systems into all aspects of daily life, we need to be prepared for their failures. It is inherently impractical to specify all the lawful interactions of these systems; consequently, the possibility of invalid interactions cannot be excluded at design time. As modern systems might harm people or compromise assets if they fail, they ought to be accountable. Accountability is an interdisciplinary concept that cannot easily be described as a holistic technical property of a system. Thus, in this paper, we propose a bottom-up approach that enables accountability through goal-specific accountability mechanisms. Each mechanism provides forensic capabilities that help us identify the root cause of a specific type of event, both to eliminate the underlying (technical) problem and to assign blame. This paper presents the different ingredients required to design and build an accountability mechanism and focuses on the technical and practical utilization of causality theories as a cornerstone for achieving this goal. To the best of our knowledge, the literature lacks a systematic methodology to envision, design and implement capabilities that promote accountability in systems. With a case study from the area of microservice-based systems, which we deem representative of modern complex systems, we demonstrate the effectiveness of the approach as a whole and show that it is generic enough to accommodate different accountability goals and mechanisms.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"7 ","pages":"Article 100016"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659621000093/pdfft?md5=70a06e5c6bb7727c37ce86ad9a9191e0&pid=1-s2.0-S2666659621000093-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46987346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Role of Engineers in Harmonising Human Values for AI Systems Design","authors":"Steven Umbrello","doi":"10.21203/rs.3.rs-709596/v1","DOIUrl":"https://doi.org/10.21203/rs.3.rs-709596/v1","url":null,"abstract":"Most engineers work within social structures governing and governed by a set of values that primarily emphasise economic concerns, and the majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with those of the societies they affect. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation yet be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this broad stakeholder group, engineers must adopt a systems thinking approach that allows them to understand the sociotechnicity of artificial intelligence systems across sociocultural domains. It claims that value sensitive design, and envisioning cards in particular, provides a solid first step towards helping designers harmonise human values, understood across spatiotemporal boundaries, with economic values, rather than the former coming at the opportunity cost of the latter.","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41705814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"‘Toward a Global Social Contract for Trade’ - a Rawlsian approach to Blockchain Systems Design and Responsible Trade Facilitation in the New Bretton Woods era","authors":"Arnold Lim , Enrong Pan","doi":"10.1016/j.jrt.2021.100011","DOIUrl":"10.1016/j.jrt.2021.100011","url":null,"abstract":"<div><p>Imminent changes to the international monetary system, alongside a shift toward more egalitarian principles of justice in commercial contracts for trade, are now taking place. Such changes, however, do not sufficiently account for circumstances of hardship or black-swan events such as COVID-19, in which the relative losers of trading arrangements should continue to receive outcomes that are not only efficient but also fair and resilient. We argue that the ‘Society-in-the-Loop’ (SITL) social contract paradigm, in conjunction with Strategic Responsible Innovation Management (StRIM), can provide a solution for improving distributive justice in trade. Through collaboration with a locally based trade facilitation company, we describe the innovation-planning phase of a blockchain smart contract solution based on Derek Leben's idea of a ‘Rawlsian Algorithm’ (2017). We demonstrate how this can be used to strengthen the algorithmic fairness of commercial contract implementation in accordance with existing ISO 20022 standards. Since no formal design framework currently exists for modeling blockchain-oriented software (BOS), an agile development approach is adopted that takes account of the substantial differences between traditional software development and smart contracts. This method involves the construction of UML Use Case, Sequence and Class diagrams, with attention to blockchain specificities. Evaluation and feedback from the company are also considered.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"6 ","pages":"Article 100011"},"PeriodicalIF":0.0,"publicationDate":"2021-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jrt.2021.100011","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47201164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Responsible research and innovation in practice: an exploratory assessment of Key Performance Indicators (KPIs) in a Nanomedicine Project","authors":"Zenlin Kwee, Emad Yaghmaei, Steven Flipse","doi":"10.1016/j.jrt.2021.100008","DOIUrl":"10.1016/j.jrt.2021.100008","url":null,"abstract":"<div><p>While originally intended to transform research and innovation practice, the concept of responsible research and innovation (RRI) has largely remained a theoretical, policy-oriented construct, engendering a perception that RRI indicators are very different from organizational or business indicators. As there is currently limited experience with RRI in businesses, in an attempt to gain more insight into RRI in practice, this paper presents an exploratory assessment of key performance indicators (KPIs) in a nanomedicine project. Based on correspondence analysis, we visually demonstrate associations between KPIs of RRI dimensions and KPIs of organizational ongoing R&D dimensions, implying that these two sets of indicators are not entirely different from each other and may even be potentially aligned. This finding may strengthen the motivation for RRI uptake in practice.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"5 ","pages":"Article 100008"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jrt.2021.100008","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43445807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is RRI a new R&I logic? A reflection from an integrated RRI project","authors":"Ellen-Marie Forsberg , Erik Thorstensen , Flávia Dias Casagrande , Torhild Holthe , Liv Halvorsrud , Anne Lund , Evi Zouganeli","doi":"10.1016/j.jrt.2020.100007","DOIUrl":"https://doi.org/10.1016/j.jrt.2020.100007","url":null,"abstract":"<div><p>This article presents an analysis of a project in the field of assisted living technologies (ALT) for older adults in which Responsible Research and Innovation (RRI) is used as the overall approach to the research and technology development work. Taking the project's three literature reviews, conducted in the fields of health science oriented towards occupational therapy, ICT research and development, and RRI, as starting points, it applies perspectives from institutional logics to analyse the tension between RRI as an overall research and innovation (R&I) logic and RRI as a disciplinary logic. This tension complicates the implementation of RRI, and we argue for giving this question more visibility. The article concludes that this project, from both the funder's and the project leader's side, was intended to be an example of research and technology development carried out within a new RRI R&I logic, but that it was largely conducted as a multidisciplinary project with RRI as a quasi-disciplinary logic, partly in parallel with and partly in conflict with other logics in the project.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"5 ","pages":"Article 100007"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jrt.2020.100007","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72121749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Facebook's Project Aria indicates problems for responsible innovation when broadly deploying AR and other pervasive technology in the Commons","authors":"Sally A. Applin , Catherine Flick","doi":"10.1016/j.jrt.2021.100010","DOIUrl":"https://doi.org/10.1016/j.jrt.2021.100010","url":null,"abstract":"<div><p>Nearly every week, a technology company introduces a new surveillance technology, ranging from applying facial recognition to observing and cataloguing behaviours of the public in the Commons and in private spaces, to listening to and recording what we say, or mapping what we do, where we go, and who we are with, or as much of these facets of our lives as can be accessed. As such, the general public writ large has had to wrestle with the colonization of publicly funded space and with the consequences for each of our personal lives of the massive harvesting and storing of our data and the machine learning and processing potentially applied to that data. Facebook, once content to harvest our data through its website, cookies, and apps on mobile phones and computers, now plans to follow us more deeply into the Commons by developing new mapping technology combined with smart-camera-equipped Augmented Reality (AR) eyeglasses that will track, render and record the Commons, and us with it. The resulting data will privately benefit Facebook's continued goal of expanding its worldwide reach and growth. In this paper, we examine the ethical implications of Facebook's Project Aria research pilot through the perspective of Responsible Innovation, comparing existing understandings of Responsible Research and Innovation with Facebook's own Responsible Innovation Principles; we contextualise Project Aria within the Commons by applying current social multi-dimensional communications theory to understand its extensive socio-technological implications for society and culture; and we address the potentially serious consequences of the Project Aria experiment inspiring countless other companies to shift their focus to compete with Project Aria or beat it to the consumer marketplace.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"5 ","pages":"Article 100010"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jrt.2021.100010","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72121751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}