Enhancing the Performance of an Image Steganalysis Approach Using Variable Batch Size-Based CNN on GPUs

Eslam M. Mustafa, M. Fouad, M. Elshafey
{"title":"基于gpu的可变批处理CNN增强图像隐写分析方法的性能","authors":"Eslam M. Mustafa, M. Fouad, M. Elshafey","doi":"10.1109/IDAACS.2019.8924348","DOIUrl":null,"url":null,"abstract":"Blind image steganalysis is defined as the binary classification problem of predicting whether or not an image contains an embedded message. With the development of steganography, extracting powerful features from the stego-images becomes a challenge. Recently, convolutional Neural Networks (CNNs) are presented as a promising solution for such a challenge. Unlike traditional steganalysis approaches, CNN-based steganalysis approaches have the ability of extracting features automatically from input images. With such an ability, there is no need to handcraft feature extractors like those used by traditional steganalysis approaches. Despite its long clinical success, CNN-based steganalysis approaches are time consuming. Training on those approaches may stand for days and sometimes for weeks. It is necessary to accelerate the training on CNN-based approaches to make them more usable in practice, especially for some real-time applications. The purpose of this paper is to implement an enhanced version of the improved Gaussian-Neuron CNN (IGNCNN) steganalysis approach on GPUs, and to profiteer the parallel power of GPUS. In this paper two approaches for parallelizing the CNN training process are proposed. The first is to apply the concept of data parallelism with the feature extraction module and the second is to apply model parallelism with the classification module. Besides the parallelization approaches, a variable batch size is implemented as an optimization approach. Using a big batch size in fully-connected layers leads to faster convergence to a better minima, but it may negatively affect the accuracy. The results of the proposed approach show that it outperforms the IGNCNN in terms of accuracy and performance metrics.","PeriodicalId":415006,"journal":{"name":"2019 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Enhancing the Performance of an Image Steganalysis Approach Using Variable Batch Size-Based CNN on GPUs\",\"authors\":\"Eslam M. Mustafa, M. Fouad, M. Elshafey\",\"doi\":\"10.1109/IDAACS.2019.8924348\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Blind image steganalysis is defined as the binary classification problem of predicting whether or not an image contains an embedded message. With the development of steganography, extracting powerful features from the stego-images becomes a challenge. Recently, convolutional Neural Networks (CNNs) are presented as a promising solution for such a challenge. Unlike traditional steganalysis approaches, CNN-based steganalysis approaches have the ability of extracting features automatically from input images. With such an ability, there is no need to handcraft feature extractors like those used by traditional steganalysis approaches. Despite its long clinical success, CNN-based steganalysis approaches are time consuming. Training on those approaches may stand for days and sometimes for weeks. It is necessary to accelerate the training on CNN-based approaches to make them more usable in practice, especially for some real-time applications. 
The purpose of this paper is to implement an enhanced version of the improved Gaussian-Neuron CNN (IGNCNN) steganalysis approach on GPUs, and to profiteer the parallel power of GPUS. In this paper two approaches for parallelizing the CNN training process are proposed. The first is to apply the concept of data parallelism with the feature extraction module and the second is to apply model parallelism with the classification module. Besides the parallelization approaches, a variable batch size is implemented as an optimization approach. Using a big batch size in fully-connected layers leads to faster convergence to a better minima, but it may negatively affect the accuracy. The results of the proposed approach show that it outperforms the IGNCNN in terms of accuracy and performance metrics.\",\"PeriodicalId\":415006,\"journal\":{\"name\":\"2019 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS)\",\"volume\":\"34 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IDAACS.2019.8924348\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IDAACS.2019.8924348","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Blind image steganalysis is defined as the binary classification problem of predicting whether or not an image contains an embedded message. With the development of steganography, extracting powerful features from stego-images has become a challenge. Recently, Convolutional Neural Networks (CNNs) have been presented as a promising solution to this challenge. Unlike traditional steganalysis approaches, CNN-based steganalysis approaches are able to extract features automatically from input images, so there is no need to handcraft feature extractors like those used by traditional steganalysis approaches. Despite their success, CNN-based steganalysis approaches are time consuming: training them may take days and sometimes weeks. It is therefore necessary to accelerate the training of CNN-based approaches to make them more usable in practice, especially for real-time applications. The purpose of this paper is to implement an enhanced version of the improved Gaussian-Neuron CNN (IGNCNN) steganalysis approach on GPUs and to exploit the parallel power of GPUs. In this paper, two approaches for parallelizing the CNN training process are proposed. The first applies data parallelism to the feature extraction module, and the second applies model parallelism to the classification module. In addition to the parallelization approaches, a variable batch size is implemented as an optimization. Using a large batch size in the fully-connected layers leads to faster convergence to a better minimum, but it may negatively affect accuracy. The results show that the proposed approach outperforms the IGNCNN in terms of both accuracy and performance metrics.
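The abstract does not spell out implementation details, but the three ingredients it names (data parallelism over the feature extraction module, the classification module kept on a single device, and a variable batch size) can be illustrated with a minimal sketch. The code below is not the authors' IGNCNN implementation; it assumes PyTorch, a hypothetical `StegoCNN` architecture, and a hypothetical `batch_size_for_epoch` schedule that simply grows the batch size each epoch as one plausible reading of "variable batch size".

```python
# Minimal sketch (not the authors' code) of the ideas described in the abstract.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class StegoCNN(nn.Module):
    """Hypothetical CNN with a convolutional feature extractor and an FC classifier."""
    def __init__(self):
        super().__init__()
        # Feature extraction module: candidate for data parallelism across GPUs.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AvgPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Classification module: fully-connected layers kept on a single device.
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 2),  # cover vs. stego
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def batch_size_for_epoch(epoch, base=32, growth=2, cap=256):
    # Hypothetical variable-batch-size schedule: start small, grow geometrically.
    return min(base * (growth ** epoch), cap)

def train(dataset, epochs=5):
    model = StegoCNN()
    if torch.cuda.is_available() and torch.cuda.device_count() > 1:
        # Replicate only the feature extractor across GPUs (data parallelism);
        # the classifier stays on the default device.
        model.features = nn.DataParallel(model.features)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        # Rebuild the loader each epoch so the batch size can vary.
        loader = DataLoader(dataset, batch_size=batch_size_for_epoch(epoch), shuffle=True)
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()

if __name__ == "__main__":
    # Synthetic stand-in data: 64 random 256x256 grayscale "images" with binary labels.
    images = torch.randn(64, 1, 256, 256)
    labels = torch.randint(0, 2, (64,))
    train(TensorDataset(images, labels), epochs=2)
```

Wrapping only `model.features` in `nn.DataParallel` splits each batch across GPUs for the convolutional stage while the fully-connected classifier runs on a single device, which loosely mirrors the split between the feature extraction and classification modules described in the abstract.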