AI data transparency: an exploration through the lens of AI incidents

Sophia Worth, Ben Snaith, Arunav Das, Gefion Thuermer, Elena Simperl
arXiv:2409.03307 · arXiv - CS - Computers and Society · Published 2024-09-05 · Citations: 0

Abstract

Knowing more about the data used to build AI systems is critical for allowing different stakeholders to play their part in ensuring responsible and appropriate deployment and use. Meanwhile, a 2023 report shows that data transparency lags significantly behind other areas of AI transparency in popular foundation models. In this research, we sought to build on these findings, exploring the status of public documentation about data practices within AI systems generating public concern. Our findings demonstrate that low data transparency persists across a wide range of systems, and further that issues of transparency and explainability at model- and system- level create barriers for investigating data transparency information to address public concerns about AI systems. We highlight a need to develop systematic ways of monitoring AI data transparency that account for the diversity of AI system types, and for such efforts to build on further understanding of the needs of those both supplying and using data transparency information.