Regulating Deep Fakes in the Artificial Intelligence Act

Mateusz Łabuz
DOI: 10.60097/acig/162856
Journal: Applied Cybersecurity & Internet Governance, 84(4)
Published: 2023-12-28 (Journal Article)
Citation count: 0

Abstract

The Artificial Intelligence Act (AI Act) may be a milestone in the regulation of artificial intelligence by the European Union. The regulatory framework proposed by the European Commission has the potential to serve as a global benchmark and to strengthen the position of the EU as one of the main players on the technology market. One of the components of the draft regulation is the set of provisions on deep fakes, which include a relevant definition, risk category classification and transparency obligations. Deep fakes rightly arouse controversy and are a complex phenomenon. When leveraged for negative purposes, they significantly increase the risk of political manipulation, and at the same time contribute to disinformation, undermining trust in information and the media. The AI Act may strengthen the protection of citizens against some of the negative consequences of misusing deep fakes, although the impact of the regulatory framework in its current form will be limited due to the specificity of their creation and dissemination. The effectiveness of the provisions will depend not only on enforcement capabilities, but also on the precision with which the provisions are phrased, so as to prevent misinterpretation and deliberate abuse of exceptions. At the same time, the AI Act will not cover a significant portion of deep fakes, which, due to the malicious intentions of their creators, will not be subject to the transparency obligations. This study analyses the provisions related to deep fakes in the AI Act and proposes improvements that take the specificity of this phenomenon into account to a greater extent.