Combating Deepfakes: Multi-LSTM and Blockchain as Proof of Authenticity for Digital Media

Christopher Chun Ki Chan, Vimal Kumar, Steven Delaney, Munkhjargal Gochoo
{"title":"打击深度造假:多lstm和区块链作为数字媒体真实性的证明","authors":"Christopher Chun Ki Chan, Vimal Kumar, Steven Delaney, Munkhjargal Gochoo","doi":"10.1109/AI4G50087.2020.9311067","DOIUrl":null,"url":null,"abstract":"Malicious use of deep learning algorithms has allowed the proliferation of high realism fake digital content such as text, images, and videos, to exist on the internet as readily available and accessible consumable content. False information provided through algorithmically modified footage, images, audios, and videos (known as deepfakes), coupled with the virality of social networks, may cause major social unrest. The emergence of misinformation from fabricated digital content suggests the necessity for anti-disinformation methods such as deepfake detection algorithms or immutable metadata in order to verify the validity of digital content. Permissioned blockchain, notably Hyperledger Fabric 2.0, coupled with LSTMs for audio/video/descriptive captioning is a step towards providing a feasible tool for combating deepfake media. Original content would require the original artist attestation of untampered data. The smart contract combines a varied multiple LSTM networks into a process that allows for the tracing and tracking of a digital content's historical provenance. The result is a theoretical framework that enables proof of authenticity (PoA) for digital media using a decentralized blockchain using multiple LSTMs as a deep encoder for creating unique discriminative features; which is then compressed and hashed into a transaction. Our work assumes we trust the video at the point of reception. Our contribution is a decentralized blockchain framework of deep discriminative digital media to combat deepfakes.","PeriodicalId":286271,"journal":{"name":"2020 IEEE / ITU International Conference on Artificial Intelligence for Good (AI4G)","volume":"109 3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":"{\"title\":\"Combating Deepfakes: Multi-LSTM and Blockchain as Proof of Authenticity for Digital Media\",\"authors\":\"Christopher Chun Ki Chan, Vimal Kumar, Steven Delaney, Munkhjargal Gochoo\",\"doi\":\"10.1109/AI4G50087.2020.9311067\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Malicious use of deep learning algorithms has allowed the proliferation of high realism fake digital content such as text, images, and videos, to exist on the internet as readily available and accessible consumable content. False information provided through algorithmically modified footage, images, audios, and videos (known as deepfakes), coupled with the virality of social networks, may cause major social unrest. The emergence of misinformation from fabricated digital content suggests the necessity for anti-disinformation methods such as deepfake detection algorithms or immutable metadata in order to verify the validity of digital content. Permissioned blockchain, notably Hyperledger Fabric 2.0, coupled with LSTMs for audio/video/descriptive captioning is a step towards providing a feasible tool for combating deepfake media. Original content would require the original artist attestation of untampered data. The smart contract combines a varied multiple LSTM networks into a process that allows for the tracing and tracking of a digital content's historical provenance. 
The result is a theoretical framework that enables proof of authenticity (PoA) for digital media using a decentralized blockchain using multiple LSTMs as a deep encoder for creating unique discriminative features; which is then compressed and hashed into a transaction. Our work assumes we trust the video at the point of reception. Our contribution is a decentralized blockchain framework of deep discriminative digital media to combat deepfakes.\",\"PeriodicalId\":286271,\"journal\":{\"name\":\"2020 IEEE / ITU International Conference on Artificial Intelligence for Good (AI4G)\",\"volume\":\"109 3 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-09-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"16\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE / ITU International Conference on Artificial Intelligence for Good (AI4G)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AI4G50087.2020.9311067\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE / ITU International Conference on Artificial Intelligence for Good (AI4G)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AI4G50087.2020.9311067","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 16

Abstract

Malicious use of deep learning algorithms has allowed highly realistic fake digital content, such as text, images, and videos, to proliferate on the internet as readily available and accessible consumable content. False information delivered through algorithmically modified footage, images, audio, and video (known as deepfakes), coupled with the virality of social networks, may cause major social unrest. The emergence of misinformation from fabricated digital content suggests the need for anti-disinformation methods, such as deepfake detection algorithms or immutable metadata, to verify the validity of digital content. Permissioned blockchain, notably Hyperledger Fabric 2.0, coupled with LSTMs for audio/video/descriptive captioning, is a step towards a feasible tool for combating deepfake media. Original content requires the original artist's attestation that the data is untampered. The smart contract combines multiple LSTM networks into a process that allows a digital content's historical provenance to be traced and tracked. The result is a theoretical framework that enables proof of authenticity (PoA) for digital media on a decentralized blockchain, using multiple LSTMs as a deep encoder to create unique discriminative features, which are then compressed and hashed into a transaction. Our work assumes the video is trusted at the point of reception. Our contribution is a decentralized blockchain framework of deep discriminative digital media to combat deepfakes.
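To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch in PyTorch of the encode-compress-hash step: one LSTM per modality (video frames, audio, captions) produces a joint discriminative feature vector, which is compressed and hashed into a payload that a Hyperledger Fabric chaincode could record. The class and function names, feature dimensions, and payload fields are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's code): multi-LSTM encoder ->
# compress -> hash -> candidate transaction payload. Dimensions and names
# (MultiLSTMEncoder, features_to_transaction, content_id) are hypothetical.

import hashlib
import json
import zlib

import torch
import torch.nn as nn


class MultiLSTMEncoder(nn.Module):
    """One LSTM per modality; the final hidden states are concatenated
    into a single discriminative feature vector."""

    def __init__(self, video_dim=512, audio_dim=128, text_dim=300, hidden=256):
        super().__init__()
        self.video_lstm = nn.LSTM(video_dim, hidden, batch_first=True)
        self.audio_lstm = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.text_lstm = nn.LSTM(text_dim, hidden, batch_first=True)

    def forward(self, video_seq, audio_seq, text_seq):
        _, (hv, _) = self.video_lstm(video_seq)   # hv: (num_layers, batch, hidden)
        _, (ha, _) = self.audio_lstm(audio_seq)
        _, (ht, _) = self.text_lstm(text_seq)
        # Concatenate the last layer's hidden state from each modality.
        return torch.cat([hv[-1], ha[-1], ht[-1]], dim=-1)  # (batch, 3 * hidden)


def features_to_transaction(features: torch.Tensor, content_id: str) -> dict:
    """Compress the feature vector and hash it into a payload that a
    permissioned-blockchain smart contract could store for provenance."""
    raw = features.detach().cpu().numpy().astype("float32").tobytes()
    compressed = zlib.compress(raw)
    digest = hashlib.sha256(compressed).hexdigest()
    return {"content_id": content_id, "feature_hash": digest}


if __name__ == "__main__":
    encoder = MultiLSTMEncoder()
    # Dummy inputs: one clip's per-frame, per-window, and per-token features.
    video = torch.randn(1, 30, 512)
    audio = torch.randn(1, 100, 128)
    text = torch.randn(1, 20, 300)
    feats = encoder(video, audio, text)
    print(json.dumps(features_to_transaction(feats, "video-0001"), indent=2))
```

In an actual Hyperledger Fabric 2.0 deployment, this payload would be submitted through chaincode so that a received video can later be re-encoded and its hash compared against the on-chain record; that submission and attestation step is omitted from the sketch.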