A tool supported framework for the assessment of algorithmic accountability

Eleni Tagiou, Y. Kanellopoulos, Christos Aridas, C. Makris
DOI: 10.1109/IISA.2019.8900715
Published in: 2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA), July 2019
Citations: 3

Abstract

Algorithmic decision making is now used by many organizations and businesses, including in crucial areas that directly affect people's lives. The importance of being able to control these systems' decisions and to avoid irreversible errors is therefore rapidly increasing. Evaluating an algorithmic system, and the organization that utilizes it, in terms of accountability and transparency presents certain challenges. Chief among these are the lack of a widely accepted evaluation standard and the tendency of organizations that employ such systems to avoid disclosing any relevant information about them. Our thesis is that the mandate for transparency and accountability should apply to both systems and organizations. In this paper we present an evaluation framework for the transparency of algorithmic systems that focuses on the way these systems have been implemented. The framework also evaluates the maturity of the organizations that utilize such systems and their ability to hold them accountable. To validate our framework, we applied it to a classification algorithm created and utilized by a large financial institution. The main insight for us was that when organizations create their algorithmic systems, accountability and transparency may indeed be recognized as values. However, they are either taken into account only at a later stage, and then from the perspective of control, or they are simply neglected. The value of frameworks like the one presented in this paper is that they act as checklists, providing organizations with a set of best practices for building accountable algorithmic systems at an early stage of their creation.