A systematic literature review of artificial intelligence (AI) transparency laws in the European Union (EU) and United Kingdom (UK): a socio-legal approach to AI transparency governance
Joshua Krook, Peter Winter, John Downer, Jan Blockx
AI and Ethics, vol. 5, no. 4, pp. 4069–4090. Published 20 March 2025. https://doi.org/10.1007/s43681-025-00674-z
This systematic literature review examines AI transparency laws and governance in the European Union (EU) and the United Kingdom (UK) through a socio-legal lens. The study highlights the importance of transparency in AI systems as a key regulatory focus globally, driven by the need to address the risks posed by opaque, ‘black box’ algorithms that can lead to unfair outcomes, privacy violations, and a lack of accountability. It identifies significant differences between the EU and UK approaches to AI regulation post-Brexit, contrasting the EU's tiered, risk-based framework with the UK's more flexible, sector-specific strategy. The review categorises the literature into five themes: the necessity of AI transparency, challenges in achieving transparency, techniques for governing transparency, laws governing AI transparency, and soft law governance toolkits. The findings suggest that while technical solutions such as explainable AI (XAI) and counterfactual methodologies are widely discussed, there is a critical need for a comprehensive, whole-of-organisation approach that embeds AI transparency within the cultural and operational fabric of organisations. This approach is argued to be more effective than top-down mandates, fostering an internal culture in which transparency is valued and sustained. The study concludes by advocating for the development of AI transparency toolkits, particularly for small and medium-sized enterprises (SMEs), to address sociotechnical barriers and ensure that transparency in AI systems is practically implemented across varied organisational contexts. These toolkits would serve as practical guides for companies adopting best practices in AI transparency, aligning with both legal requirements and broader sociocultural considerations.
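The abstract names counterfactual methodologies as one of the widely discussed technical routes to transparency. As a rough illustration only, the sketch below (Python with scikit-learn) generates a counterfactual explanation for a toy "loan approval" classifier; the synthetic dataset, feature names, step sizes, and greedy search strategy are assumptions made for this example and are not drawn from the reviewed paper or any particular XAI library.

```python
# Illustrative sketch of a counterfactual explanation (hypothetical example,
# not the reviewed paper's method): find a small change to an input that
# flips the model's decision, answering "what would need to change for a
# different outcome?"
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: [annual income in thousands, debt-to-income ratio].
# The made-up ground truth approves high-income, low-debt applicants.
X = rng.normal(loc=[50.0, 0.40], scale=[15.0, 0.15], size=(500, 2))
y = ((X[:, 0] > 50.0) & (X[:, 1] < 0.45)).astype(int)

model = LogisticRegression().fit(X, y)

def counterfactual(x, target=1, steps=(1.0, 0.02), max_iter=200):
    """Greedily nudge one feature at a time until the predicted class flips."""
    x_cf = np.array(x, dtype=float)
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf
        # Try a small move in each direction of each feature and keep the move
        # that most increases the predicted probability of the target class.
        candidates = []
        for j, step in enumerate(steps):
            for delta in (-step, step):
                trial = x_cf.copy()
                trial[j] += delta
                p = model.predict_proba(trial.reshape(1, -1))[0, target]
                candidates.append((p, trial))
        x_cf = max(candidates, key=lambda c: c[0])[1]
    return None  # no counterfactual found within the search budget

applicant = np.array([42.0, 0.55])  # currently predicted as "reject" (class 0)
cf = counterfactual(applicant)
print("original:      ", applicant, "->", model.predict(applicant.reshape(1, -1))[0])
print("counterfactual:", np.round(cf, 2), "->", model.predict(cf.reshape(1, -1))[0])
```

The printed pair (original input versus minimally changed input) is the kind of "what would have had to differ for the decision to change" statement that counterfactual transparency work aims to provide to affected individuals, as opposed to internal model introspection alone.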