Where do algorithmic accountability and explainability frameworks take us in the real world? From theory to practice

Authors: Katarzyna Szymielewicz, A. Bacciarelli, F. Hidvégi, Agata Foryciarz, Soizic Pénicaud, M. Spielkamp
DOI: 10.1145/3351095.3375683
Published in: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
Publication date: 2020-01-27
Citations: 3
Abstract
This hands-on session takes academic concepts around algorithmic accountability and explainability, along with their formulation in policy initiatives, and tests them against real cases. In small groups we will (1) test selected frameworks on algorithmic accountability and explainability against a concrete case study (one that likely constitutes a human rights violation) and (2) test different formats for explaining important aspects of an automated decision-making process (such as input data, the type of algorithm used, design decisions and technical parameters, and expected outcomes) to various audiences (end users, affected communities, watchdog organisations, public sector agencies and regulators). We invite participants with various backgrounds: researchers, technologists, human rights advocates, public servants and designers.