{"title":"Transparency-Check: An Instrument for the Study and Design of Transparency in AI-based Personalization Systems","authors":"Laura Schelenz, Avi Segal, Oduma Adelio, K. Gal","doi":"10.1145/3636508","DOIUrl":null,"url":null,"abstract":"As AI-based systems become commonplace in our daily lives, they need to provide understandable information to their users about how they collect, process, and output information that concerns them. The importance of such transparency practices has gained significance due to recent ethical guidelines and regulation, as well as research suggesting a positive relationship between the transparency of AI-based systems and users’ satisfaction. This paper provides a new tool for the design and study of transparency in AI-based systems that use personalization. The tool, called Transparency-Check, is based on a checklist of questions about transparency in four areas of a system: input (data collection), processing (algorithmic models), output (personalized recommendations) and user control (user feedback mechanisms to adjust elements of the system). Transparency-Check can be used by researchers, designers, and end users of computer systems. To demonstrate the usefulness of Transparency-Check from a researcher perspective, we collected the responses of 108 student participants who used the transparency checklist to rate five popular real-world systems (Amazon, Facebook, Netflix, Spotify, and YouTube). Based on users’ subjective evaluations, the systems showed low compliance with transparency standards, with some nuances about individual categories (specifically data collection, processing, user control). We use these results to compile design recommendations for improving transparency in AI-based systems, such as integrating information about the system’s behavior during the user’s interactions with it.","PeriodicalId":486991,"journal":{"name":"ACM Journal on Responsible Computing","volume":"6 6","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Journal on Responsible Computing","FirstCategoryId":"0","ListUrlMain":"https://doi.org/10.1145/3636508","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
As AI-based systems become commonplace in our daily lives, they need to provide understandable information to their users about how they collect, process, and output information that concerns them. Such transparency practices have gained significance due to recent ethical guidelines and regulation, as well as research suggesting a positive relationship between the transparency of AI-based systems and user satisfaction. This paper provides a new tool for the design and study of transparency in AI-based systems that use personalization. The tool, called Transparency-Check, is based on a checklist of questions about transparency in four areas of a system: input (data collection), processing (algorithmic models), output (personalized recommendations), and user control (user feedback mechanisms to adjust elements of the system). Transparency-Check can be used by researchers, designers, and end users of computer systems. To demonstrate the usefulness of Transparency-Check from a researcher perspective, we collected the responses of 108 student participants who used the transparency checklist to rate five popular real-world systems (Amazon, Facebook, Netflix, Spotify, and YouTube). Based on users' subjective evaluations, the systems showed low compliance with transparency standards, with some nuance across individual categories (specifically data collection, processing, and user control). We use these results to compile design recommendations for improving transparency in AI-based systems, such as integrating information about the system's behavior during the user's interactions with it.
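To make the checklist structure concrete, below is a minimal sketch in Python of how the four-category checklist and per-category compliance scoring could be represented. The four category names follow the abstract; the example questions, the 1-5 rating scale, and all function and variable names are illustrative assumptions, not the instrument's actual items or scoring procedure.

```python
from statistics import mean

# The four transparency areas named in the paper. The example questions are
# hypothetical placeholders, not the actual Transparency-Check items.
CHECKLIST = {
    "input (data collection)": [
        "Does the system state which personal data it collects?",
        "Does it explain why each type of data is collected?",
    ],
    "processing (algorithmic models)": [
        "Does the system explain how collected data is processed?",
    ],
    "output (personalized recommendations)": [
        "Does the system indicate which content is personalized?",
    ],
    "user control (feedback mechanisms)": [
        "Can users adjust or correct the data used about them?",
    ],
}

def category_scores(responses):
    """Average per-category ratings across participants.

    `responses` maps a participant ID to {category: [rating, ...]},
    with ratings on an assumed 1-5 scale (1 = not transparent,
    5 = fully transparent).
    """
    per_category = {cat: [] for cat in CHECKLIST}
    for ratings in responses.values():
        for cat, values in ratings.items():
            per_category[cat].extend(values)
    # Skip categories with no ratings to avoid averaging an empty list.
    return {cat: mean(vals) for cat, vals in per_category.items() if vals}

# Example: two fictional participants rating one system.
responses = {
    "p1": {"input (data collection)": [2, 3],
           "user control (feedback mechanisms)": [1]},
    "p2": {"input (data collection)": [2, 2],
           "user control (feedback mechanisms)": [2]},
}
for category, score in category_scores(responses).items():
    print(f"{category}: {score:.2f}")
```

Aggregating ratings per category rather than per system mirrors the paper's finding that compliance varied across individual categories; the same structure could be averaged per system to compare, say, Netflix against Spotify.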