ACM Transactions on Multimedia Computing Communications and Applications: Latest Articles

KF-VTON: Keypoints-Driven Flow Based Virtual Try-On Network
IF 5.1 · CAS Zone 3 · Computer Science
Zizhao Wu, Siyu Liu, Peioyan Lu, Ping Yang, Yongkang Wong, Xiaoling Gu, Mohan S. Kankanhalli
DOI: 10.1145/3673903 · Published 2024-06-19 · Journal Article
Abstract: Image-based virtual try-on aims to fit a target garment to a reference person. Most existing methods are limited to the Garment-To-Person (G2P) try-on task, which transfers a garment from a clean product image to the reference person, and do not consider the Person-To-Person (P2P) try-on task, which transfers a garment from a clothed person image to the reference person; this limits their practical applicability. The P2P try-on task is more challenging due to spatial discrepancies caused by different poses, body shapes, and views between the reference person and the target person. To address this issue, we propose a novel Keypoints-Driven Flow Based Virtual Try-On Network (KF-VTON) for handling both the G2P and P2P try-on tasks. Our KF-VTON has two key innovations: 1) We propose a new keypoints-driven flow based deformation model to warp the garment. This model establishes spatial correspondences between the target garment and reference person by combining the robustness of Thin-plate Spline (TPS) based deformation and the flexibility of appearance flow based deformation. 2) We investigate a powerful Context-aware Spatially Adaptive Normalization (CSAN) generative module to synthesize the final try-on image. In particular, CSAN integrates rich contextual information with semantic parsing guidance to properly infer unobserved garment appearances. Extensive experiments demonstrate that our KF-VTON produces photo-realistic and high-fidelity try-on results for both the G2P and P2P try-on tasks and surpasses previous state-of-the-art methods both quantitatively and qualitatively. Our code is available at https://github.com/OIUIU/KF-VTON.
Citations: 0
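As an illustration of how sparse keypoint correspondences can drive a dense warping field (the TPS half of the deformation model described above), here is a minimal sketch using SciPy. The function name tps_warp and the (row, col) keypoint convention are assumptions, not the paper's implementation, and the learned appearance-flow refinement is omitted.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_warp(garment, src_kpts, dst_kpts):
    """Warp `garment` (H, W, C) so that src_kpts move onto dst_kpts.

    src_kpts / dst_kpts: (N, 2) arrays of (row, col) keypoints.
    """
    h, w = garment.shape[:2]
    # Fit a TPS that maps target-person keypoints back to garment keypoints
    # (backward mapping is what we need for sampling).
    tps = RBFInterpolator(dst_kpts, src_kpts, kernel='thin_plate_spline')
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    grid = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    flow = tps(grid)                      # dense backward flow, shape (H*W, 2)
    coords = flow.T.reshape(2, h, w)      # (row_map, col_map) sampling grid
    warped = np.stack([map_coordinates(garment[..., c], coords, order=1)
                       for c in range(garment.shape[2])], axis=-1)
    return warped
```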
Unified View Empirical Study for Large Pretrained Model on Cross-Domain Few-Shot Learning
IF 5.1 · CAS Zone 3 · Computer Science
Linhai Zhuo, Yuqian Fu, Jingjing Chen, Yixin Cao, Yu-Gang Jiang
DOI: 10.1145/3673231 · Published 2024-06-19 · Journal Article
Abstract: The challenge of cross-domain few-shot learning (CD-FSL) stems from the substantial distribution disparities between target and source domain images, necessitating a model with robust generalization capabilities. In this work, we posit that large-scale pretrained models are pivotal in addressing the cross-domain few-shot learning task owing to their exceptional representational and generalization prowess. To our knowledge, no existing research comprehensively investigates the utility of large-scale pretrained models in the cross-domain few-shot learning context. Addressing this gap, our study presents an exhaustive empirical assessment of the CLIP model within the cross-domain few-shot learning task. We undertake a comparison spanning six dimensions: base model, transfer module, classifier, loss, data augmentation, and training schedule. Furthermore, we establish a straightforward baseline model, E-base, based on our empirical analysis, underscoring the importance of our investigation. Experimental results substantiate the efficacy of our model, yielding a mean gain of 1.2% in 5-way 5-shot evaluations on the BSCD dataset.
Citations: 0
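A minimal sketch of one transfer-module/classifier combination of the kind compared above: nearest-class-prototype classification over features from a frozen pretrained encoder (e.g., a CLIP visual backbone). The function prototype_predict and the cosine-similarity pooling are illustrative assumptions, not the paper's E-base baseline.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def prototype_predict(encoder, support_x, support_y, query_x, n_way=5):
    """support_x: (n_way*k_shot, C, H, W); support_y: class ids in [0, n_way)."""
    s_feat = F.normalize(encoder(support_x), dim=-1)   # (N_s, D)
    q_feat = F.normalize(encoder(query_x), dim=-1)     # (N_q, D)
    # Class prototype = mean of normalized support embeddings per class.
    protos = torch.stack([s_feat[support_y == c].mean(0) for c in range(n_way)])
    protos = F.normalize(protos, dim=-1)
    logits = q_feat @ protos.t()                       # cosine similarity
    return logits.argmax(dim=-1)                       # predicted episode class
```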
TA-Detector: A GNN-based Anomaly Detector via Trust Relationship
IF 5.1 · CAS Zone 3 · Computer Science
Jie Wen, Nan Jiang, Lang Li, Jie Zhou, Yanpei Li, Hualin Zhan, Guang Kou, Weihao Gu, Jiahui Zhao
DOI: 10.1145/3672401 · Published 2024-06-19 · Journal Article
Abstract: With the rise of mobile Internet and AI, social media integrating short messages, images, and videos has developed rapidly. As a guarantee for the stable operation of social media, information security, especially graph anomaly detection (GAD), has become a hot issue that attracts extensive attention from researchers. Most GAD methods are limited to enhancing homophily or to considering both homophilic and heterophilic connections. Nevertheless, due to the deceptive nature of homophily connections among anomalies, the discriminative information of the anomalies can be eliminated. To alleviate this issue, we explore a novel method, TA-Detector, for GAD by introducing the concept of trust into the classification of connections. In particular, the proposed approach adopts a designed trust classifier to distinguish trust and distrust connections under the supervision of labeled nodes. Then, we capture the latent factors related to GAD with graph neural networks (GNNs), which integrate node interaction type information and node representations. Finally, to identify anomalies in the graph, we use a residual network mechanism to extract the deep semantic embedding information related to GAD. Experimental results on two real benchmark datasets verify that our proposed approach boosts overall GAD performance in comparison to benchmark baselines.
Citations: 0
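A minimal sketch of the trust-classification idea described above, assuming node embeddings produced by some GNN encoder: an MLP scores each edge as trust vs. distrust, with supervision derived from labeled endpoints. The class names, shapes, and labeling rule are assumptions, not the TA-Detector architecture.

```python
import torch
import torch.nn as nn

class TrustClassifier(nn.Module):
    """Scores each edge (trust vs. distrust) from its endpoint embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 1))

    def forward(self, h, edge_index):
        # h: (N, dim) node embeddings; edge_index: (2, E) source/target ids.
        src, dst = edge_index
        return self.mlp(torch.cat([h[src], h[dst]], dim=-1)).squeeze(-1)

def edge_targets(y, edge_index, labeled_mask):
    """Assumed supervision rule: an edge is 'trust' (1) when both labeled
    endpoints agree on normal/anomalous, 'distrust' (0) when they disagree."""
    src, dst = edge_index
    both_labeled = labeled_mask[src] & labeled_mask[dst]
    agree = (y[src] == y[dst]).float()
    return agree, both_labeled   # train BCE only where both_labeled is True
```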
Multimodal Fusion for Talking Face Generation Utilizing Speech-related Facial Action Units
IF 5.1 · CAS Zone 3 · Computer Science
Zhilei Liu, Xiaoxing Liu, Sen Chen, Jiaxing Liu, Longbiao Wang, Chongke Bi
DOI: 10.1145/3672565 · Published 2024-06-17 · Journal Article
Abstract: Talking face generation aims to synthesize a lip-synchronized talking face video from an arbitrary face image and corresponding audio clips. Current talking face models can be divided into four parts: visual feature extraction, audio feature processing, multimodal feature fusion, and a rendering module. For visual feature extraction, existing methods face the challenge of a complex learning task with noisy features; this paper introduces an attention-based disentanglement module that disentangles the face into an Audio-face and an Identity-face using speech-related facial action unit (AU) information. For multimodal feature fusion, existing methods ignore not only the interaction and relationship of cross-modal information but also the local driving information of the mouth muscles. This study proposes a novel generative framework that incorporates a dilated non-causal temporal convolutional self-attention network as a multimodal fusion module to enhance the learning of cross-modal features. The proposed method employs both audio and speech-related facial action units (AUs) as driving information. Speech-related AU information can facilitate more accurate mouth movements. Given the high correlation between speech and speech-related AUs, we propose an audio-to-AU module to predict speech-related AU information. Finally, we present a diffusion model for the synthesis of talking face images. We verify the effectiveness of the proposed model on the GRID and TCD-TIMIT datasets. An ablation study is also conducted to verify the contribution of each component. The results of quantitative and qualitative experiments demonstrate that our method outperforms existing methods in terms of both image quality and lip-sync accuracy. Code is available at https://mftfg-au.github.io/Multimodal_Fusion/.
Citations: 0
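A minimal sketch of a dilated, non-causal temporal convolution block with self-attention, the kind of fusion module named above. The channel sizes, residual wiring, and single attention layer are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DilatedNonCausalBlock(nn.Module):
    """Symmetric (non-causal) dilated temporal conv followed by self-attention."""
    def __init__(self, channels, kernel_size=3, dilation=2, heads=4):
        super().__init__()
        pad = (kernel_size - 1) // 2 * dilation   # symmetric padding -> non-causal
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=pad, dilation=dilation)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                 # x: (B, T, C) fused audio/AU features
        y = self.conv(x.transpose(1, 2)).transpose(1, 2)
        y = torch.relu(y) + x             # residual dilated temporal convolution
        a, _ = self.attn(y, y, y)         # full-sequence (non-causal) attention
        return self.norm(y + a)
```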
Compressed Point Cloud Quality Index by Combining Global Appearance and Local Details
IF 5.1 · CAS Zone 3 · Computer Science
Yiling Xu, Yujie Zhang, Qi Yang, Xiaozhong Xu, Shan Liu
DOI: 10.1145/3672567 · Published 2024-06-15 · Journal Article
Abstract: In recent years, many standardized algorithms for point cloud compression (PCC) have been developed and achieved remarkable compression ratios. To provide guidance for rate-distortion optimization and codec evaluation, point cloud quality assessment (PCQA) has become a critical problem for PCC. Therefore, to achieve a more consistent correlation with human visual perception of compressed point clouds, we propose a full-reference PCQA algorithm tailored for static point clouds, which jointly measures geometry and attribute deformations. Specifically, we assume that the quality decision on compressed point clouds is determined by both global appearance (e.g., density, contrast, complexity) and local details (e.g., gradient, holes). Motivated by the nature of compression distortions and the properties of the human visual system, we derive perceptually effective features for these two categories, such as content complexity, luminance/geometry gradient, and hole probability. By systematically incorporating measurements of variations in the local and global characteristics, we derive an effective quality index for the input compressed point clouds. Extensive experiments and analyses conducted on popular PCQA databases show the superiority of the proposed method in evaluating compression distortions. Subsequent investigations validate the efficacy of different components within the model design.
Citations: 0
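A minimal sketch of one local-detail feature in the spirit described above: a luminance-gradient statistic computed over k-nearest-neighbour patches of a point cloud. The pooling into mean/std statistics is an assumption for illustration, not the proposed quality index.

```python
import numpy as np
from scipy.spatial import cKDTree

def luminance_gradient_feature(xyz, luminance, k=9):
    """xyz: (N, 3) point coordinates; luminance: (N,) per-point luma values."""
    tree = cKDTree(xyz)
    dists, idx = tree.query(xyz, k=k)          # each point plus its k-1 neighbours
    # Gradient proxy: |luma difference| / distance to each neighbour, averaged.
    dl = np.abs(luminance[idx[:, 1:]] - luminance[:, None])
    grad = (dl / np.maximum(dists[:, 1:], 1e-8)).mean(axis=1)
    return grad.mean(), grad.std()             # pooled global statistics
```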
Learning Domain Invariant Features for Unsupervised Indoor Depth Estimation Adaptation
IF 5.1 · CAS Zone 3 · Computer Science
Jiehua Zhang, Liang Li, Chenggang Yan, Zhan Wang, Changliang Xu, Jiyong Zhang, Chuqiao Chen
DOI: 10.1145/3672397 · Published 2024-06-13 · Journal Article
Abstract: Predicting depth maps from monocular images has achieved impressive performance in recent years. However, most depth estimation methods are trained with paired image-depth data or multi-view images (e.g., stereo pairs and monocular sequences), which suffer from expensive annotation costs and poor transferability. Although unsupervised domain adaptation methods have been introduced to mitigate the reliance on annotated data, few works focus on unsupervised cross-scenario indoor monocular depth estimation. In this paper, we study the generalization of depth estimation models across different indoor scenarios in an adversarial domain adaptation paradigm. Concretely, a domain discriminator is designed to discriminate representations from the source and target domains, while the feature extractor aims to confuse the domain discriminator by capturing domain-invariant features. Further, we reconstruct depth maps from latent representations under the supervision of labeled source data. As a result, the features learned by the feature extractor are both domain-invariant and of low source risk, and the depth estimator can deal with the domain shift between the source and target domains. We conduct cross-scenario and cross-dataset experiments on the ScanNet and NYU-Depth-v2 datasets to verify the effectiveness of our method and achieve impressive performance.
Citations: 0
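A minimal sketch of the adversarial ingredient described above: a gradient reversal layer between the feature extractor and the domain discriminator, so that minimizing the domain loss trains the extractor to confuse the discriminator. The helper names are assumptions; the depth decoder and its supervised loss on source data are omitted.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None   # reversed gradient reaches the extractor

def domain_adversarial_loss(features, domain_labels, discriminator, lam=1.0):
    """features: (B, D) from a mixed source/target batch; domain_labels: 0 or 1."""
    reversed_feat = GradReverse.apply(features, lam)
    logits = discriminator(reversed_feat).squeeze(-1)   # (B,)
    return F.binary_cross_entropy_with_logits(logits, domain_labels.float())
```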
Boosting Semi-Supervised Learning with Dual-Threshold Screening and Similarity Learning
IF 5.1 · CAS Zone 3 · Computer Science
Zechen Liang, Yuan-Gen Wang, Wei Lu, Xiaochun Cao
DOI: 10.1145/3672563 · Published 2024-06-12 · Journal Article
Abstract: How to effectively utilize unlabeled data for training is a key problem in Semi-Supervised Learning (SSL). Existing SSL methods often consider only the unlabeled data whose prediction confidence exceeds a fixed threshold (e.g., 0.95) and discard those below it. We argue that these discarded data account for a large proportion of the unlabeled set, are hard samples, and will benefit model training if used properly. In this paper, we propose a novel method to take full advantage of the unlabeled data, termed DTS-SimL, which includes two core designs: Dual-Threshold Screening and Similarity Learning. In addition to the fixed threshold, DTS-SimL extracts a class-adaptive threshold from the labeled data. Such a class-adaptive threshold can screen many unlabeled data whose predictions are lower than 0.95 but above the extracted threshold for model training. On the other hand, we design a new similarity loss to perform similarity learning on all highly similar unlabeled data, which can further mine valuable information from the unlabeled data. Finally, for more effective training of DTS-SimL, we construct an overall loss function by assigning four different losses to four different types of data. Extensive experiments are conducted on five benchmark datasets, including CIFAR-10, CIFAR-100, SVHN, Mini-ImageNet, and DomainNet-Real. Experimental results show that the proposed DTS-SimL achieves state-of-the-art classification accuracy. The code is publicly available at https://github.com/GZHU-DVL/DTS-SimL.
Citations: 0
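A minimal sketch of dual-threshold screening as described above: an unlabeled sample is kept if its confidence passes either the fixed 0.95 threshold or a class-adaptive threshold estimated from labeled data. The adaptive rule used here (mean labeled confidence per class) is an assumption, not necessarily the paper's rule.

```python
import torch

def dual_threshold_mask(unlab_probs, labeled_probs, labeled_y,
                        fixed_thr=0.95, num_classes=10):
    """unlab_probs/labeled_probs: softmax outputs; labeled_y: ground-truth ids."""
    conf, pseudo = unlab_probs.max(dim=1)            # confidence and pseudo-label
    # Class-adaptive threshold: mean labeled confidence per class (assumption).
    adaptive = torch.full((num_classes,), fixed_thr, device=unlab_probs.device)
    for c in range(num_classes):
        sel = labeled_y == c
        if sel.any():
            adaptive[c] = labeled_probs[sel, c].mean()
    keep = (conf >= fixed_thr) | (conf >= adaptive[pseudo])
    return keep, pseudo                              # screening mask + pseudo-labels
```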
Deepfake Video Detection Using Facial Feature Points and Ch-Transformer
IF 5.1 · CAS Zone 3 · Computer Science
Rui Yang, Rushi Lan, Zhenrong Deng, Xiaonan Luo, Xiyan Sun
DOI: 10.1145/3672566 · Published 2024-06-12 · Journal Article
Abstract: With the development of Metaverse technology, avatars in the Metaverse face serious security and privacy concerns. Analyzing facial features to distinguish between genuine and manipulated facial videos holds significant research importance for ensuring the authenticity of characters in the virtual world, mitigating discrimination, and preventing the malicious use of facial data. To address this issue, the Facial Feature Points and Ch-Transformer (FFP-ChT) deepfake video detection model is designed based on two clues: the different distributions of facial feature points in real and fake videos, and the different displacement distances of real and fake facial feature points between frames. The input face video is first processed by the BlazeFace model, and the face detection results are fed into the FaceMesh model to extract 468 facial feature points. The Lucas-Kanade (LK) optical flow method is then used to track the facial points, a face calibration algorithm is introduced to re-calibrate the facial feature points, and the jitter displacement is calculated by tracking the facial feature points between frames. Finally, a Class-head (Ch) is designed in the transformer, and the facial feature points and their displacements are jointly classified by the Ch-Transformer model. In this way, the designed Ch-Transformer classifier is able to accurately and effectively identify deepfake videos. Experiments on open datasets clearly demonstrate the effectiveness and generalization capabilities of our approach.
Citations: 0
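A minimal sketch of the tracking step described above: propagate facial feature points between frames with Lucas-Kanade optical flow in OpenCV and measure their displacement ("jitter"). Landmark detection (BlazeFace/FaceMesh) and the face calibration step are assumed to have happened upstream and produced `points` for the first frame.

```python
import cv2
import numpy as np

def track_displacement(prev_frame, next_frame, points):
    """points: (N, 2) float landmark coordinates detected in prev_frame."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    pts = points.reshape(-1, 1, 2).astype(np.float32)
    # Pyramidal LK optical flow: where did each landmark move in the next frame?
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.ravel() == 1
    disp = np.linalg.norm(new_pts[ok] - pts[ok], axis=-1).ravel()
    return disp   # per-point inter-frame displacement for successfully tracked points
```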
Multi-grained Point Cloud Geometry Compression via Dual-model Prediction with Extended Octree
IF 5.1 · CAS Zone 3 · Computer Science
Tai Qin, Ge Li, Wei Gao, Shan Liu
DOI: 10.1145/3671001 · Published 2024-06-12 · Journal Article
Abstract: The state-of-the-art G-PCC (geometry-based point cloud compression) (Octree) is a fine-grained approach that uses the octree to partition point clouds into voxels and predicts them based on neighbor occupancy in narrow spaces. However, G-PCC (Octree) is less effective at compressing dense point clouds than multi-grained approaches (such as G-PCC (Trisoup)), which exploit the continuous point distribution in nodes partitioned by a pruned octree over larger spaces. We therefore propose a lossy multi-grained compression method with an extended octree and dual-model prediction. The extended octree, where each partitioned node contains intra-block and extra-block points, addresses poor prediction (such as overfitting) at the node edges of the octree partition. For the points of each multi-grained node, dual-model prediction fits surfaces and projects residuals onto the surfaces, reducing the projection residuals for efficient 2D compression and lowering fitting complexity. In addition, a hybrid DWT-DCT transform for the 2D projection residuals mitigates the resolution degradation of DWT and the blocking effect of DCT under high compression. Experimental results demonstrate the superior performance of our method over the advanced G-PCC (Octree), achieving BD-rate gains of 55.9% and 45.3% for point-to-point (D1) and point-to-plane (D2) distortions, respectively. Our approach also outperforms G-PCC (Octree) and G-PCC (Trisoup) in subjective evaluation.
Citations: 0
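A minimal sketch of a hybrid DWT-DCT transform on a 2-D residual image, in the spirit of the transform named above: one Haar DWT level followed by a DCT on the low-frequency sub-band. Quantization and entropy coding are omitted, and the wavelet choice is an assumption; this is not the codec's actual pipeline.

```python
import pywt
from scipy.fft import dctn, idctn

def hybrid_dwt_dct(residual):
    """residual: 2-D array of projection residuals."""
    LL, (LH, HL, HH) = pywt.dwt2(residual, 'haar')   # one wavelet decomposition level
    LL_dct = dctn(LL, norm='ortho')                  # compact energy in the LL band
    return LL_dct, (LH, HL, HH)

def inverse_hybrid(LL_dct, details):
    LL = idctn(LL_dct, norm='ortho')
    return pywt.idwt2((LL, details), 'haar')         # reconstruct the residual image
```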
On the Security of Selectively Encrypted HEVC Video Bitstreams
IF 5.1 · CAS Zone 3 · Computer Science
Chen Chen, Lingfeng Qu, Hadi Amirpour, Xingjun Wang, Christian Timmerer, Zhihong Tian
DOI: 10.1145/3672568 · Published 2024-06-12 · Journal Article
Abstract: With the growing applications of video, ensuring its security has become of utmost importance. Selective encryption (SE) has gained significant attention in the field of video content protection due to its compatibility with video codecs, favorable visual distortion, and low time complexity. However, few studies consider the security of SE under cryptographic attacks. To fill this gap, we analyze the security concerns of bitstreams encrypted by SE schemes, propose two known-plaintext attacks (KPAs), and present a corresponding defense against them. To validate the effectiveness of the KPAs, they are applied to attack two existing SE schemes with superior visual degradation in HEVC videos. First, the encrypted bitstreams are generated using the HEVC encoder with SE (HESE). Second, the video sequences are encoded using H.265/HEVC, and the selected syntax elements are recorded during encoding. The recorded syntax elements are then imported into the HEVC decoder using decryption (HDD). By utilizing the encryption parameters and the imported data in the HDD, it becomes possible to reconstruct a significant portion of the original syntax elements before encryption. Finally, the reconstructed syntax elements are compared with the encrypted syntax elements in the HDD, allowing the design of a pseudo-key stream (PKS) through the inverse of the encryption operations. The PKS is used to decrypt the existing SE schemes, and the experimental results provide evidence that the two existing SE schemes are vulnerable to the proposed KPAs. In the case of single-bitstream estimation (SBE), the average correct rate of key stream estimation exceeds 93%. Moreover, with multi-bitstream complementation (MBC), the average estimation accuracy can be further improved to 99%.
Citations: 0
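A minimal sketch of the known-plaintext principle behind the attack, under the simplifying assumption that selective encryption of a syntax-element stream reduces to an XOR with a key stream: knowing plaintext bits and the corresponding ciphertext bits reveals the key stream, which can then decrypt other data encrypted with the same stream. This illustrates the idea only, not the paper's HEVC-specific procedure.

```python
import numpy as np

def recover_keystream(known_plain_bits, cipher_bits):
    # Known-plaintext step: keystream = plaintext XOR ciphertext.
    return np.bitwise_xor(known_plain_bits, cipher_bits)

def decrypt_with_keystream(cipher_bits, keystream):
    return np.bitwise_xor(cipher_bits, keystream)

# Toy example: reconstructed syntax elements (plaintext) plus the observed
# encrypted bits yield a pseudo-key-stream estimate that decrypts further bits.
plain  = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
key    = np.array([0, 1, 1, 0, 1, 0, 0, 1], dtype=np.uint8)
cipher = np.bitwise_xor(plain, key)
assert np.array_equal(recover_keystream(plain, cipher), key)
assert np.array_equal(decrypt_with_keystream(cipher, key), plain)
```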