DeepSecure watermarking: Hybrid Attention on Attention Net and Deep Belief Net based robust video authentication using Quaternion Curvelet Transform domain
Impact Factor: 5.0 · CAS Zone 3 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
{"title":"DeepSecure watermarking: Hybrid Attention on Attention Net and Deep Belief Net based robust video authentication using Quaternion Curvelet Transform domain","authors":"Satish D. Mali, Agilandeeswari Loganthan","doi":"10.1016/j.eij.2024.100514","DOIUrl":null,"url":null,"abstract":"<div><p>Digital videos have entered every facet of people’s lives because of the rise of live-streaming platforms and the Internet’s expansion & popularity. Additionally, there are a tonne of pirated videos on the Internet that seriously violate the rights and interests of those who own copyrights to videos, hindering the growth of the video business. As a result, trustworthy video watermarking algorithms for copyright defense have emerged in response to consumer demand. To effectively watermark videos, this article proposes a robust feature extraction approach namely Attention on Attention Net (AoA Net). AoA Net extracts the robust features from the Deep Belief Network features of the cover video frames and then generates the score map that helps to identify the suitable location for embedding. The Golden Section Fibonacci Tree Optimization is used to identify the Key frames and then apply Quaternion Curvelet Transform (QCT) on those frames to obtain the QCT coefficients over which the watermark needs to be embedded. Thus, the embedding phase involves embedding the watermark on the obtained score map. Next, an Inverse QCT and the concatenation produce the watermarked video. The resultant video is now vulnerable to adversarial attacks when it is transferred over the Adversary Layer. Consequently, the embedded video is given to the decoder and the extraction phase, which performs key frame extraction and QCT. On the obtained QCT coefficients the similar AoA Net features are used to generate the score map and thus the watermark gets extracted. The performance of the devised technique is evaluated for various intentional and unintentional attacks, and it is assessed using PSNR, MSE, SSIM, BER, and NCC. Finally, the proposed method attains the enhanced visual quality outcome with an Average PSNR and SSIM of 64.33 and 0.9895 respectively. The robustness of the proposed AoADB_QCT attains an average NCC of 0.9999, and BER of 0.001251.</p></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":null,"pages":null},"PeriodicalIF":5.0000,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S111086652400077X/pdfft?md5=2bbcd2015292ce2edc29923fb90a845e&pid=1-s2.0-S111086652400077X-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Egyptian Informatics Journal","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S111086652400077X","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Digital videos have entered every facet of people’s lives with the rise of live-streaming platforms and the Internet’s growing reach and popularity. At the same time, the Internet hosts a large number of pirated videos that seriously infringe the rights and interests of video copyright holders and hinder the growth of the video industry. As a result, trustworthy video watermarking algorithms for copyright protection have emerged in response to this demand. To watermark videos effectively, this article proposes a robust feature extraction approach, the Attention on Attention Net (AoA Net). AoA Net extracts robust features from the Deep Belief Network features of the cover video frames and generates a score map that identifies suitable embedding locations. Golden Section Fibonacci Tree Optimization is used to identify the key frames, and the Quaternion Curvelet Transform (QCT) is then applied to those frames to obtain the QCT coefficients in which the watermark is embedded. The embedding phase thus embeds the watermark according to the obtained score map, after which an inverse QCT and frame concatenation produce the watermarked video. The resulting video is exposed to adversarial attacks as it passes through the adversary layer. The watermarked video is then given to the decoder, where the extraction phase performs key frame selection and QCT; on the obtained QCT coefficients, the same AoA Net features are used to regenerate the score map, and the watermark is extracted. The performance of the proposed technique is evaluated against various intentional and unintentional attacks using PSNR, MSE, SSIM, BER, and NCC. The proposed method achieves enhanced visual quality, with an average PSNR of 64.33 dB and an average SSIM of 0.9895, while the robustness of the proposed AoADB_QCT reaches an average NCC of 0.9999 and a BER of 0.001251.
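For reference, the sketch below shows the standard definitions of the objective metrics reported in the abstract (MSE, PSNR, BER, and NCC), assuming 8-bit cover frames and a binary watermark. These are textbook formulas, not the paper's own implementation, and the function names are illustrative only.

```python
import numpy as np

def mse(original: np.ndarray, watermarked: np.ndarray) -> float:
    """Mean squared error between a cover frame and its watermarked version."""
    diff = original.astype(np.float64) - watermarked.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, watermarked: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher values mean better imperceptibility."""
    err = mse(original, watermarked)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

def ber(watermark: np.ndarray, extracted: np.ndarray) -> float:
    """Bit error rate between the embedded and the extracted binary watermark."""
    w, e = watermark.astype(bool), extracted.astype(bool)
    return float(np.mean(w != e))

def ncc(watermark: np.ndarray, extracted: np.ndarray) -> float:
    """Normalized cross-correlation; values near 1 indicate robust extraction."""
    w = watermark.astype(np.float64).ravel()
    e = extracted.astype(np.float64).ravel()
    return float(np.dot(w, e) / (np.linalg.norm(w) * np.linalg.norm(e) + 1e-12))

# Example usage on synthetic data:
frame = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
marked = np.clip(frame.astype(int) + np.random.randint(-1, 2, frame.shape), 0, 255)
wm = np.random.randint(0, 2, (32, 32))
print(psnr(frame, marked), ber(wm, wm), ncc(wm, wm))
```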
About the Journal
The Egyptian Informatics Journal is published by the Faculty of Computers and Artificial Intelligence, Cairo University. The Journal provides a forum for state-of-the-art research and development in the fields of computing, including computer science, information technology, information systems, operations research, and decision support. Innovative, previously unpublished work in subjects covered by the Journal is welcome from academic, research, and commercial sources.