Sophia Worth, Ben Snaith, Arunav Das, Gefion Thuermer, Elena Simperl
{"title":"人工智能数据透明度:从人工智能事件的角度进行探讨","authors":"Sophia Worth, Ben Snaith, Arunav Das, Gefion Thuermer, Elena Simperl","doi":"arxiv-2409.03307","DOIUrl":null,"url":null,"abstract":"Knowing more about the data used to build AI systems is critical for allowing\ndifferent stakeholders to play their part in ensuring responsible and\nappropriate deployment and use. Meanwhile, a 2023 report shows that data\ntransparency lags significantly behind other areas of AI transparency in\npopular foundation models. In this research, we sought to build on these\nfindings, exploring the status of public documentation about data practices\nwithin AI systems generating public concern. Our findings demonstrate that low data transparency persists across a wide\nrange of systems, and further that issues of transparency and explainability at\nmodel- and system- level create barriers for investigating data transparency\ninformation to address public concerns about AI systems. We highlight a need to\ndevelop systematic ways of monitoring AI data transparency that account for the\ndiversity of AI system types, and for such efforts to build on further\nunderstanding of the needs of those both supplying and using data transparency\ninformation.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"3 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI data transparency: an exploration through the lens of AI incidents\",\"authors\":\"Sophia Worth, Ben Snaith, Arunav Das, Gefion Thuermer, Elena Simperl\",\"doi\":\"arxiv-2409.03307\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Knowing more about the data used to build AI systems is critical for allowing\\ndifferent stakeholders to play their part in ensuring responsible and\\nappropriate deployment and use. Meanwhile, a 2023 report shows that data\\ntransparency lags significantly behind other areas of AI transparency in\\npopular foundation models. In this research, we sought to build on these\\nfindings, exploring the status of public documentation about data practices\\nwithin AI systems generating public concern. Our findings demonstrate that low data transparency persists across a wide\\nrange of systems, and further that issues of transparency and explainability at\\nmodel- and system- level create barriers for investigating data transparency\\ninformation to address public concerns about AI systems. 
We highlight a need to\\ndevelop systematic ways of monitoring AI data transparency that account for the\\ndiversity of AI system types, and for such efforts to build on further\\nunderstanding of the needs of those both supplying and using data transparency\\ninformation.\",\"PeriodicalId\":501112,\"journal\":{\"name\":\"arXiv - CS - Computers and Society\",\"volume\":\"3 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computers and Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.03307\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computers and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.03307","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
AI data transparency: an exploration through the lens of AI incidents
Knowing more about the data used to build AI systems is critical for allowing different stakeholders to play their part in ensuring responsible and appropriate deployment and use. Meanwhile, a 2023 report shows that data transparency lags significantly behind other areas of AI transparency in popular foundation models. In this research, we sought to build on these findings by exploring the status of public documentation about data practices within AI systems that have generated public concern. Our findings demonstrate that low data transparency persists across a wide range of systems, and further that issues of transparency and explainability at the model and system levels create barriers to investigating data transparency information to address public concerns about AI systems. We highlight a need to develop systematic ways of monitoring AI data transparency that account for the diversity of AI system types, and for such efforts to build on a deeper understanding of the needs of those both supplying and using data transparency information.