G. G. Clavell, M. M. Zamorano, C. Castillo, Oliver Smith, A. Matic
{"title":"审计算法:经验教训和数据最小化的风险","authors":"G. G. Clavell, M. M. Zamorano, C. Castillo, Oliver Smith, A. Matic","doi":"10.1145/3375627.3375852","DOIUrl":null,"url":null,"abstract":"In this paper, we present the Algorithmic Audit (AA) of REM!X, a personalized well-being recommendation app developed by Telefónica Innovación Alpha. The main goal of the AA was to identify and mitigate algorithmic biases in the recommendation system that could lead to the discrimination of protected groups. The audit was conducted through a qualitative methodology that included five focus groups with developers and a digital ethnography relying on users comments reported in the Google Play Store. To minimize the collection of personal information, as required by best practice and the GDPR [1], the REM!X app did not collect gender, age, race, religion, or other protected attributes from its users. This limited the algorithmic assessment and the ability to control for different algorithmic biases. Indirect evidence was thus used as a partial mitigation for the lack of data on protected attributes, and allowed the AA to identify four domains where bias and discrimination were still possible, even without direct personal identifiers. Our analysis provides important insights into how general data ethics principles such as data minimization, fairness, non-discrimination and transparency can be operationalized via algorithmic auditing, their potential and limitations, and how the collaboration between developers and algorithmic auditors can lead to better technologies","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"7 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"21","resultStr":"{\"title\":\"Auditing Algorithms: On Lessons Learned and the Risks of Data Minimization\",\"authors\":\"G. G. Clavell, M. M. Zamorano, C. Castillo, Oliver Smith, A. Matic\",\"doi\":\"10.1145/3375627.3375852\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we present the Algorithmic Audit (AA) of REM!X, a personalized well-being recommendation app developed by Telefónica Innovación Alpha. The main goal of the AA was to identify and mitigate algorithmic biases in the recommendation system that could lead to the discrimination of protected groups. The audit was conducted through a qualitative methodology that included five focus groups with developers and a digital ethnography relying on users comments reported in the Google Play Store. To minimize the collection of personal information, as required by best practice and the GDPR [1], the REM!X app did not collect gender, age, race, religion, or other protected attributes from its users. This limited the algorithmic assessment and the ability to control for different algorithmic biases. Indirect evidence was thus used as a partial mitigation for the lack of data on protected attributes, and allowed the AA to identify four domains where bias and discrimination were still possible, even without direct personal identifiers. 
Our analysis provides important insights into how general data ethics principles such as data minimization, fairness, non-discrimination and transparency can be operationalized via algorithmic auditing, their potential and limitations, and how the collaboration between developers and algorithmic auditors can lead to better technologies\",\"PeriodicalId\":93612,\"journal\":{\"name\":\"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society\",\"volume\":\"7 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-02-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"21\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3375627.3375852\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3375627.3375852","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Auditing Algorithms: On Lessons Learned and the Risks of Data Minimization
In this paper, we present the Algorithmic Audit (AA) of REM!X, a personalized well-being recommendation app developed by Telefónica Innovación Alpha. The main goal of the AA was to identify and mitigate algorithmic biases in the recommendation system that could lead to discrimination against protected groups. The audit followed a qualitative methodology that included five focus groups with developers and a digital ethnography relying on users' comments posted in the Google Play Store. To minimize the collection of personal information, as required by best practice and the GDPR [1], the REM!X app did not collect gender, age, race, religion, or other protected attributes from its users. This limited the algorithmic assessment and the ability to control for different algorithmic biases. Indirect evidence was thus used as a partial mitigation for the lack of data on protected attributes, and allowed the AA to identify four domains where bias and discrimination were still possible, even without direct personal identifiers. Our analysis provides important insights into how general data ethics principles such as data minimization, fairness, non-discrimination, and transparency can be operationalized via algorithmic auditing, their potential and limitations, and how collaboration between developers and algorithmic auditors can lead to better technologies.
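To make the idea of auditing with indirect evidence concrete, the following is a minimal, hypothetical sketch in Python/pandas of a proxy-based disparity check. It is not the procedure used in the REM!X audit: the column names, the proxy signal, the synthetic data, and the 0.8 threshold are all illustrative assumptions.

```python
# Hypothetical sketch: checking for disparate exposure to recommendation
# categories across a proxy group, when protected attributes are not collected
# (data minimization). All names and values here are assumptions for illustration.
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a recommendation log: one row per recommendation shown.
# 'proxy_group' is an indirect signal (e.g. device language), NOT a protected attribute.
log = pd.DataFrame({
    "proxy_group": rng.choice(["A", "B"], size=1_000, p=[0.7, 0.3]),
    "activity_category": rng.choice(["social", "physical", "cognitive"], size=1_000),
})

# Share of each activity category recommended within each proxy group.
rates = (
    log.groupby("proxy_group")["activity_category"]
       .value_counts(normalize=True)
       .unstack(fill_value=0.0)
)

# Flag categories whose exposure ratio between groups falls below a chosen
# threshold (0.8 echoes the common "four-fifths" heuristic; the threshold is
# a policy choice for the audit team, not a fact from the paper).
ratio = rates.min(axis=0) / rates.max(axis=0)
flagged = ratio[ratio < 0.8]

print("Per-group recommendation rates:\n", rates.round(3))
print("\nCategories with possible disparate exposure:\n", flagged.round(3))
```

A check of this kind only surfaces candidate disparities; as the audit itself stresses, interpreting them requires qualitative context from developers and users, since the proxy may correlate only weakly with any protected group.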