Journal of Electronic Imaging: Latest Articles

Research on image segmentation effect based on denoising preprocessing
IF 1.1 | CAS Tier 4 | Computer Science
Journal of Electronic Imaging, Pub Date: 2024-06-01, DOI: 10.1117/1.jei.33.3.033033
Lu Ronghui, Tzong-Jer Chen
Abstract: Our study investigates the impact of denoising preprocessing on the accuracy of image segmentation. Specifically, images with Gaussian noise were segmented using the fuzzy c-means (FCM) method, local binary fitting (LBF), the adaptive active contour model coupling local and global information (EVOL_LCV), and the U-Net semantic segmentation method, and the results were quantitatively evaluated. Subsequently, various denoising techniques, such as mean, median, Gaussian, and bilateral filtering and the feed-forward denoising convolutional neural network (DnCNN), were applied to the original images, segmentation was repeated with the same methods, and a second round of quantitative evaluation was performed. The two evaluations reveal that segmentation results are clearly enhanced after denoising: the Dice similarity coefficient of FCM segmentation improved by 4% to 44%, LBF improved by 16%, and EVOL_LCV showed limited change. Additionally, a U-Net trained on denoised images attained a segmentation improvement of over 5%. The accuracy of both traditional and semantic segmentation of Gaussian-noise images is improved effectively using DnCNN.
Citations: 0
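The pipeline this abstract describes (denoise, then segment, then score with Dice) can be illustrated with a toy sketch using SciPy's mean, median, and Gaussian filters; the 64x64 synthetic square image, the noise level, and the naive 0.5-threshold "segmentation" are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Synthetic ground truth: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
noisy = img + np.random.default_rng(0).normal(0.0, 0.4, img.shape)  # Gaussian noise

# Three of the denoisers compared in the paper.
den_mean = ndimage.uniform_filter(noisy, size=5)
den_median = ndimage.median_filter(noisy, size=5)
den_gauss = ndimage.gaussian_filter(noisy, sigma=1.5)

gt = img > 0.5
for name, d in [("noisy", noisy), ("mean", den_mean),
                ("median", den_median), ("gauss", den_gauss)]:
    seg = d > 0.5  # crude threshold "segmentation"
    print(f"{name:6s} Dice = {dice(gt, seg):.3f}")
```

On this toy example every denoiser lifts the Dice score well above the no-denoising baseline, mirroring the paper's qualitative finding.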
Reconstructing images with attention generative adversarial network against adversarial attacks
IF 1.1 | CAS Tier 4 | Computer Science
Journal of Electronic Imaging, Pub Date: 2024-06-01, DOI: 10.1117/1.jei.33.3.033029
Xiong Shen, Yiqin Lu, Zhe Cheng, Zhongshu Mao, Zhang Yang, Jiancheng Qin
Abstract: Deep learning is widely used in computer vision, but the emergence of adversarial examples threatens its application; effectively detecting adversarial examples and correcting their labels has become a pressing problem in this field. Generative adversarial networks (GANs) can effectively learn features from images. Building on GANs, this work proposes a defense method called reconstructing images with GAN (RIG). Adversarial examples produced by attack algorithms are reconstructed by RIG's trained generator, which eliminates the perturbations that mislead classification models, so that the models recover the correct labels when classifying the reconstructed images. To further improve the defense, an attention mechanism (AM) is introduced, yielding reconstructing images with attention GAN (RIAG). Experiments show that both RIG and RIAG effectively eliminate the perturbations of adversarial examples, and that RIAG outperforms RIG, indicating that introducing the AM effectively improves the defense effect.
Citations: 0
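The reconstruct-then-classify idea behind RIG can be caricatured in a few lines of numpy. Everything here is a stand-in: the "generator" is a trivial projection onto a binary clean-image manifold rather than a trained GAN, and the classifier and attack are toys chosen only so the effect is visible:

```python
import numpy as np

def classifier(x):
    """Toy classifier: label 1 if mean intensity exceeds 0.5."""
    return int(x.mean() > 0.5)

def generator(x):
    """Stand-in for RIG's trained generator: snap each pixel back to
    the (here: binary) clean-image manifold, removing any
    perturbation smaller than 0.5 in magnitude."""
    return np.round(np.clip(x, 0.0, 1.0))

rng = np.random.default_rng(1)
clean = (rng.random((8, 8)) > 0.2).astype(float)  # mostly-bright image, label 1
adv = np.clip(clean - 0.45, 0.0, 1.0)             # bounded shift that fools the toy classifier

print(classifier(clean), classifier(adv))  # 1 0: attack succeeds on the raw input
rec = generator(adv)                       # reconstruction removes the perturbation
print(classifier(rec))                     # 1: label restored
```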
Special Section Guest Editorial: Quality Control by Artificial Vision VII
IF 1.1 | CAS Tier 4 | Computer Science
Journal of Electronic Imaging, Pub Date: 2024-06-01, DOI: 10.1117/1.jei.33.3.031201
Igor Jovančević, Jean-José Orteu
Abstract: Guest editors Igor Jovančević and Jean-José Orteu introduce the Special Section on Quality Control by Artificial Vision VII.
Citations: 0
AFFNet: adversarial feature fusion network for super-resolution image reconstruction in remote sensing images
IF 1.1 | CAS Tier 4 | Computer Science
Journal of Electronic Imaging, Pub Date: 2024-06-01, DOI: 10.1117/1.jei.33.3.033032
Qian Zhao, Qianxi Yin
Abstract: As an important source of Earth-surface information, remote sensing images often suffer from rough, fuzzy details and poor perceptual quality, which hinders further analysis and application of geographic information. To address these problems, we introduce an adversarial feature fusion network with an attention-based mechanism for super-resolution reconstruction of remote sensing images. First, residual structures are designed in the generator to enhance deep feature extraction; each combines depthwise over-parameterized convolution with a self-attention mechanism, and the two work synergistically to extract deep feature information. Second, a coordinate attention feature fusion module is introduced at the feature fusion stage to fuse shallow features with deep high-level features, strengthening the model's attention to different features and better reconciling inconsistent semantic features. Finally, a pixel-attention upsampling module in the upsampling stage adaptively focuses on the most information-rich regions and restores image details more accurately. Extensive experiments on several remote sensing image datasets show that, compared with current advanced models, our method better restores image details and achieves good subjective visual quality, verifying the effectiveness and superiority of the proposed algorithm.
Citations: 0
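For readers unfamiliar with the coordinate attention used in the fusion module, the core idea (pool along each spatial axis, derive gates, reweight the map) can be sketched parameter-free in numpy; the real module learns 1x1 convolutions between pooling and gating, which are omitted here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x):
    """Parameter-free sketch of coordinate attention on a (C, H, W)
    feature map: pool along width and height separately, turn the
    pooled profiles into per-axis gates, and reweight the input."""
    c, h, w = x.shape
    pool_h = x.mean(axis=2)                # (C, H): pooled along width
    pool_w = x.mean(axis=1)                # (C, W): pooled along height
    gate_h = sigmoid(pool_h)[:, :, None]   # (C, H, 1)
    gate_w = sigmoid(pool_w)[:, None, :]   # (C, 1, W)
    return x * gate_h * gate_w             # broadcast to (C, H, W)

x = np.random.default_rng(0).normal(size=(4, 8, 8))
y = coordinate_attention(x)
print(y.shape)  # (4, 8, 8)
```

Because both gates lie in (0, 1), the module can only attenuate features, encoding "where along each axis" the informative responses sit.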
Lightweight deep and cross residual skip connection separable CNN for plant leaf diseases classification
IF 1.1 | CAS Tier 4 | Computer Science
Journal of Electronic Imaging, Pub Date: 2024-06-01, DOI: 10.1117/1.jei.33.3.033035
Naresh Vedhamuru, Ramanathan Malmathanraj, Ponnusamy Palanisamy
Abstract: Crop diseases adversely affect the yield, productivity, and quality of agricultural produce, threatening the security of the global food supply. Timely disease management strategies that reduce transmission are essential for minimizing crop loss and meeting the growing worldwide demand for food as the population continues to increase. Mitigation relies on preventive monitoring: early detection and accurate classification of plant diseases let farmers deploy interventions that curb the spread of infection, limit the damage caused, and thereby contribute to higher crop output. We propose and implement a deep and cross residual skip connection separable convolutional neural network (DCRSCSCNN) for identifying and classifying leaf diseases of apple, corn, cucumber, grape, potato, and guava. Its distinctive features are the residual skip connection and the cross residual skip connection separable convolution block. Residual skip connections help to fix the gradient-vanishing issue faced by the network architecture, while separable convolution decreases the number of parameters, yielding a model of reduced size; so far there has been limited exploration of separable convolution within lightweight neural networks. Extensive evaluation on distinct datasets demonstrates that the proposed DCRSCSCNN outperforms other state-of-the-art approaches, achieving classification accuracies of 99.89% (apple), 98.72% (corn), 100% (cucumber), 99.78% (grape), 100% (potato), 99.69% (guava1), and 99.08% (guava2).
Citations: 0
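The parameter saving from separable convolution that motivates the lightweight design is easy to quantify; the 3x3, 64-to-128-channel example below is illustrative, not a layer from the paper's network:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases omitted)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k convolution (k*k weights per input channel)
    followed by a 1 x 1 pointwise convolution (c_in * c_out)."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)            # 73728
sep = separable_conv_params(3, 64, 128)  # 8768
print(std, sep, round(std / sep, 1))     # roughly 8.4x fewer weights
```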
Stega4NeRF: cover selection steganography for neural radiance fields
IF 1.1 | CAS Tier 4 | Computer Science
Journal of Electronic Imaging, Pub Date: 2024-06-01, DOI: 10.1117/1.jei.33.3.033031
Weina Dong, Jia Liu, Lifeng Chen, Wenquan Sun, Xiaozhong Pan
Abstract: Implicit neural representation of visual data (such as images, videos, and 3D models) has become a hotspot in computer vision research. This work proposes a cover selection steganography scheme for neural radiance fields (NeRFs). The message sender first trains a NeRF model, selecting a viewpoint in 3D space as the viewpoint key Kv to generate a unique secret viewpoint image. A message extractor is then trained by overfitting to establish a one-to-one mapping between the secret viewpoint image and the secret message. To address the problem of securely transmitting the message extractor in traditional steganography, the extractor is concealed within a hybrid model that performs standard classification tasks. The receiver holds a shared extractor key Ke, used to recover the message extractor from the hybrid model; the secret viewpoint image is then rendered by the NeRF using Kv and fed to the extractor to recover the secret message. Experimental results demonstrate that the trained message extractor achieves high-speed, large-capacity steganography with 100% message embedding, and the vast viewpoint key space of NeRF ensures the concealment of the scheme.
Citations: 0
Face antispoofing method based on single-modal and lightweight network
IF 1.1 | CAS Tier 4 | Computer Science
Journal of Electronic Imaging, Pub Date: 2024-06-01, DOI: 10.1117/1.jei.33.3.033030
Guoxiang Tong, Xinrong Yan
Abstract: In face antispoofing, researchers are increasingly focusing on multimodal approaches and feature fusion. Although multimodal approaches are more effective than single-modal ones, they often carry a huge number of parameters, demand significant computational resources, and are hard to run on mobile devices. To address the real-time problem, we propose a fast, lightweight framework based on ShuffleNet V2. Our approach takes patch-level images as input, enhances unit performance with an attention module, and counters dataset sample imbalance with the focal loss function, effectively meeting the model's real-time constraints. Evaluations on the CASIA-FASD, Replay-Attack, and MSU-MFSD datasets show that our method outperforms current state-of-the-art methods in both intra-test and inter-test scenarios. Furthermore, the network has only 0.84 M parameters and 0.81 GFLOPs, making it suitable for mobile, real-time deployment. This work can serve as a valuable reference for researchers developing single-modal face antispoofing methods for mobile and real-time applications.
Citations: 0
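The focal loss used here to counter sample imbalance down-weights easy examples so training concentrates on hard ones. A standard numpy formulation of the binary form (Lin et al.), with the usual alpha/gamma defaults assumed rather than taken from this paper:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).
    p is the predicted probability of the positive class; y is 0/1.
    The (1 - p_t)^gamma factor shrinks the loss of well-classified
    examples, so rare-class and hard examples dominate the gradient."""
    p_t = np.where(y == 1, p, 1.0 - p)
    a_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -a_t * (1.0 - p_t) ** gamma * np.log(p_t)

easy = focal_loss(np.array([0.95]), np.array([1]))[0]  # confident positive
hard = focal_loss(np.array([0.30]), np.array([1]))[0]  # misclassified positive
print(f"easy={easy:.5f} hard={hard:.5f}")
```

With gamma = 0 and alpha = 0.5 this reduces to half the ordinary cross-entropy, which is a convenient sanity check.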
Low-light image enhancement using negative feedback pulse coupled neural network
IF 1.1 | CAS Tier 4 | Computer Science
Journal of Electronic Imaging, Pub Date: 2024-06-01, DOI: 10.1117/1.jei.33.3.033037
Ping Gao, Guidong Zhang, Lingling Chen, Xiaoyun Chen
Abstract: Low-light image enhancement, a fundamentally ill-posed problem, seeks to deliver superior visual quality while preserving a natural appearance. Current methods often fall short in contrast enhancement, noise reduction, and mitigation of halo artifacts. A negative feedback pulse coupled neural network (NFPCNN) is proposed to provide a well-posed solution based on uniform distribution in contrast enhancement. The negative feedback dynamically adjusts the attenuation amplitude of the neuron threshold according to recent neuronal firing states: neurons in areas of concentrated brightness are assigned smaller attenuation amplitudes to enhance local contrast, whereas neurons in sparse areas receive larger ones. NFPCNN thus compensates for the conventional pulse coupled neural network's neglect of the input image's brightness distribution. Consistent with the Weber–Fechner law, gamma correction is employed to adjust the output of NFPCNN. Because contrast enhancement can introduce artifacts or aggravate noise even as it improves detail, a bilateral filter is employed to suppress halo artifacts, and brightness is used as a coefficient to refine the Relativity-of-Gaussian noise suppression method. Experimental results show that the proposed method effectively suppresses noise while enhancing image contrast.
Citations: 0
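The gamma correction step applied to the NFPCNN output is a standard pointwise operation; a minimal sketch follows, where the 1/2.2 exponent is a conventional choice rather than necessarily the paper's value:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Gamma correction on an image scaled to [0, 1]; an exponent
    below 1 brightens dark regions, as wanted for low-light output."""
    return np.clip(img, 0.0, 1.0) ** gamma

x = np.linspace(0.0, 1.0, 5)
y = gamma_correct(x, 1.0 / 2.2)  # a conventional display gamma
print(np.round(y, 3))            # dark values lifted, endpoints fixed
```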
Super-resolution reconstruction of images based on residual dual-path interactive fusion combined with attention
IF 1.1 | CAS Tier 4 | Computer Science
Journal of Electronic Imaging, Pub Date: 2024-06-01, DOI: 10.1117/1.jei.33.3.033034
Wang Hao, Peng Taile, Zhou Ying
Abstract: In recent years, deep learning has made significant progress in single-image super-resolution (SISR) reconstruction, greatly improving reconstruction quality. However, most SISR networks concentrate on increasing network depth during feature extraction while neglecting the connections between features at different levels and the full use of low-frequency information. To address this, this work proposes a network based on residual dual-path interactive fusion combined with attention (RDIFCA). Using the dual interactive fusion strategy, the network effectively fuses and reuses high- and low-frequency information while increasing depth, significantly enhancing its expressive ability. Experimental results show that RDIFCA exhibits clear superiority in objective evaluation indexes and visual quality on the Set5, Set14, BSD100, Urban100, and Manga109 test sets.
Citations: 0
Adaptive sparse attention module based on reciprocal nearest neighbors
IF 1.1 | CAS Tier 4 | Computer Science
Journal of Electronic Imaging, Pub Date: 2024-06-01, DOI: 10.1117/1.jei.33.3.033038
Zhonggui Sun, Can Zhang, Mingzhu Zhang
Abstract: The attention mechanism has become a crucial technique for deep feature representation in computer vision. Using a similarity matrix, it enhances each feature point with global context from the network's feature map. However, indiscriminately using all information easily introduces irrelevant content and inevitably hampers performance. In response, sparsification, a common information-filtering strategy, has been applied in many related studies, but the filtering processes often lack reliability and adaptability. To address this, we first define an adaptive reciprocal nearest neighbors (A-RNN) relationship: learning adaptive thresholds makes neighbor identification flexible, and a reciprocity requirement makes the chosen neighbors reliable. We then use A-RNN to rectify the similarity matrix in the conventional attention module. Concretely, two blocks treat non-local and local information separately: a non-local sparse constraint block uses A-RNN to sparsify non-local information, while a local sparse constraint block uses adaptive thresholds to sparsify local information. The result is an adaptive sparse attention (ASA) module that inherits A-RNN's flexibility and reliability. To validate ASA, we replace the attention module in NLNet and conduct experiments on the Cityscapes, ADE20K, and PASCAL VOC 2012 semantic segmentation benchmarks; with the same ResNet101 backbone, ASA outperforms the conventional attention module and several of its state-of-the-art variants.
Citations: 0
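The reciprocity test at the heart of A-RNN (keep sim[i, j] only if i and j are each in the other's neighbor set) can be sketched in numpy; a fixed k stands in for the paper's learned adaptive thresholds:

```python
import numpy as np

def knn_mask(sim, k):
    """mask[i, j] is True iff j is among i's k most similar columns."""
    idx = np.argsort(-sim, axis=1)[:, :k]
    mask = np.zeros_like(sim, dtype=bool)
    mask[np.arange(sim.shape[0])[:, None], idx] = True
    return mask

def rnn_sparse_attention(sim, k):
    """Keep sim[i, j] only where i and j are reciprocal k-nearest
    neighbors; softmax over the surviving entries of each row."""
    m = knn_mask(sim, k)
    masked = np.where(m & m.T, sim, -np.inf)  # reciprocity filter
    e = np.exp(masked - masked.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit feature vectors
A = rnn_sparse_attention(X @ X.T, k=3)         # sparse attention weights
```

Each row keeps at most k entries, the surviving support is symmetric by construction, and the diagonal always survives (a point is its own nearest neighbor), so the row-wise softmax is well defined.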
Contact us: info@booksci.cn. Book学术 provides a free academic resource search service to help scholars in China and abroad retrieve Chinese and English literature, and is committed to the most convenient, high-quality service experience. Copyright © 2023 布克学术 All rights reserved.