IET Image Process. Latest Articles

Optimized deep learning model for mango grading: Hybridizing lion plus firefly algorithm
IET Image Process. Pub Date : 2021-03-04 DOI: 10.1049/IPR2.12163
M. Tripathi, Dhananjay D. Maktedar
This paper presents an automated mango grading system with four stages: (1) pre-processing, (2) feature extraction, (3) optimal feature selection and (4) classification. Initially, the input image is subjected to the pre-processing phase, where reading, sizing, noise removal and segmentation take place. Subsequently, features are extracted from the pre-processed image. To make the system more effective, the optimal features are selected from the extracted set using a new hybrid optimisation algorithm termed the lion-assisted firefly algorithm (LA-FF), a combination of the lion algorithm (LA) and the firefly algorithm (FF). The optimal features are then passed to the classification process, where an optimised deep convolutional neural network (CNN) is deployed. As a major contribution, the configuration of the CNN is fine-tuned by selecting the optimal number of convolutional layers, which enhances the classification accuracy of the grading system; the LA-FF algorithm is used for this fine-tuning so that the classifier is optimised. Grading is evaluated for the healthy/diseased, ripe/unripe and big/medium/very big cases with respect to type I and type II measures, and the performance of the proposed grading model is compared against other state-of-the-art models.
Citations: 8
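The abstract above does not spell out the LA-FF update rules. Purely as an illustrative sketch (the toy objective, all parameter values, and the lion-style elite step are assumptions, not taken from the paper), a firefly-style attraction move combined with a leader-takeover step might look like:

```python
import numpy as np

def sphere(x):
    """Toy fitness standing in for the real feature-selection objective."""
    return float(np.sum(x ** 2))

def hybrid_la_ff(objective, dim=5, n_agents=12, iters=60,
                 beta0=1.0, gamma=0.01, alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(n_agents, dim))
    fit = np.array([objective(p) for p in pop])
    for _ in range(iters):
        # firefly phase: each agent moves toward every brighter (better) agent
        for i in range(n_agents):
            for j in range(n_agents):
                if fit[j] < fit[i]:
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pop[i] = pop[i] + beta * (pop[j] - pop[i]) \
                        + alpha * rng.uniform(-0.5, 0.5, dim)
                    fit[i] = objective(pop[i])
        # lion-style phase (assumed): the leader replaces the weakest agent
        best, worst = int(np.argmin(fit)), int(np.argmax(fit))
        pop[worst] = pop[best] + 0.1 * rng.standard_normal(dim)
        fit[worst] = objective(pop[worst])
    k = int(np.argmin(fit))
    return pop[k], float(fit[k])

best_x, best_f = hybrid_la_ff(sphere)
```

In the paper the objective would score a candidate feature subset by downstream grading quality; the sphere function is only a placeholder.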
Generative adversarial network for low-light image enhancement
IET Image Process. Pub Date : 2021-01-20 DOI: 10.1049/IPR2.12124
Fei Li, Jiangbin Zheng, Yuan-fang Zhang
Low-light image enhancement is rapidly gaining research attention due to the increasing demands of extreme visual tasks in various applications. Although numerous methods exist to enhance image quality in low light, how to trade off between human observation and computer vision processing remains an open question. In this work, an effective generative adversarial network structure is proposed, comprising both a densely residual block (DRB) and an enhancing block (EB), for low-light image enhancement. Specifically, the proposed end-to-end image enhancement method, consisting of a generator and a discriminator, is trained using a hyper loss function. The DRB adopts residual and dense skip connections to connect and enhance features extracted from different depths in the network, while the EB receives unique multi-scale features to ensure feature diversity. Additionally, increasing the feature sizes allows the discriminator to further distinguish between fake and real images at the patch level. The merits of the loss function for recovering both contextual and local details are also studied. Extensive experimental results show that the method is capable of dealing with extremely low-light scenes, and the realistic feature generator outperforms several state-of-the-art methods in a number of qualitative and quantitative evaluation tests.
Citations: 7
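The dense-plus-residual wiring described for the DRB can be sketched structurally as follows (the `conv_stub` stand-in for a trained convolutional layer is an assumption; the real block uses learned filters):

```python
import numpy as np

def conv_stub(x, seed):
    """Stand-in for one convolutional layer: a fixed random channel mix
    followed by ReLU (channel and spatial shapes are preserved)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((x.shape[0], x.shape[0])) / np.sqrt(x.shape[0])
    return np.maximum(0.0, np.einsum('oc,chw->ohw', w, x))

def densely_residual_block(x, depth=3):
    """Dense skips: every layer sees the sum of all earlier outputs;
    a residual connection adds the block input back at the end."""
    feats = [x]
    for d in range(depth):
        feats.append(conv_stub(sum(feats), seed=d))
    return x + feats[-1]

x = np.random.default_rng(1).standard_normal((8, 16, 16))  # (channels, H, W)
y = densely_residual_block(x)
```

The design point the abstract makes is that features from different depths are reused (dense skips) while the residual path keeps the block easy to optimise.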
Localised edge-region-based active contour for medical image segmentation
IET Image Process. Pub Date : 2021-01-20 DOI: 10.1049/IPR2.12126
Huaxiang Liu, Jiangxiong Fang, Zijian Zhang, Yongzheng Lin
Citations: 8
An efficient framework for deep learning-based light-defect image enhancement
IET Image Process. Pub Date : 2021-01-13 DOI: 10.1049/IPR2.12125
Chengxu Ma, Daihui Li, Shangyou Zeng, Junbo Zhao, Hongyang Chen
The enhancement of light-defect images, such as extremely low-light, low-light and dim-light images, has long been a research hotspot. Most existing methods excel under specific illuminations, leaving much room for improvement when processing light-defect images with different illuminations. Therefore, this study proposes an efficient deep-learning-based framework to enhance various light-defect images. The proposed framework estimates the reflectance and illumination components. A generator guided by an attention mechanism is proposed in the reflectance part to repair light defects in the dark. In addition, a colour loss function is designed to address colour distortion in the enhanced images. Finally, the illumination map of the light-defect image is adjusted adaptively. Extensive experiments demonstrate that the method can not only handle images with different illuminations but also produce enhanced images with clearer details and richer colours. Its superiority is further shown by comparison with state-of-the-art methods, under both visual quality comparison and quantitative comparison, on various datasets and real-world images.
Citations: 2
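The abstract mentions a colour loss function without giving its form. One common choice, shown here only as a hypothetical stand-in for whatever the paper actually uses, penalises the angle between RGB vectors so that hue shifts are punished independently of brightness:

```python
import numpy as np

def colour_loss(pred, target, eps=1e-8):
    """Angle-based colour loss: compares the direction of the RGB vector
    at each pixel, so hue shifts are penalised independently of brightness."""
    p = pred.reshape(-1, 3)
    t = target.reshape(-1, 3)
    cos = np.sum(p * t, axis=1) / (
        np.linalg.norm(p, axis=1) * np.linalg.norm(t, axis=1) + eps)
    return float(np.mean(1.0 - np.clip(cos, -1.0, 1.0)))

img = np.random.default_rng(0).uniform(0.1, 1.0, size=(4, 4, 3))
loss_same = colour_loss(img, img)                # identical colours: near zero
loss_swapped = colour_loss(img, img[..., ::-1])  # R and B channels swapped
```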
Level set method with Retinex-corrected saliency embedded for image segmentation
IET Image Process. Pub Date : 2021-01-12 DOI: 10.1049/IPR2.12123
Dongmei Liu, F. Chang, Huaxiang Zhang, Li Liu
Segmenting natural images with high intensity inhomogeneity and complex background scenes is a very challenging task for level set methods. A new synthesis level set method for robust image segmentation, based on the combination of Retinex-corrected saliency region information and edge information, is proposed in this work. First, Retinex theory is introduced to correct the saliency information extraction. Second, the Retinex-corrected saliency information is embedded into the level set method because of its ability to make a foreground object stand out relative to the background. Combined with the edge information, the segmentation boundary becomes more precise and smooth. Experiments indicate that the proposed segmentation algorithm is efficient, fast, reliable and robust.
Citations: 2
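The Retinex correction idea can be illustrated with classic single-scale Retinex, where reflectance is the log-ratio of the image to a smoothed illumination estimate (the windowed-mean blur and window size here are illustrative choices, not the paper's):

```python
import numpy as np

def box_blur(img, k=5):
    """Windowed-mean blur used as a crude illumination estimate."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def single_scale_retinex(img, k=5, eps=1e-6):
    """Reflectance = log(image) - log(illumination estimate)."""
    return np.log(img + eps) - np.log(box_blur(img, k) + eps)

img = np.random.default_rng(2).uniform(0.05, 1.0, size=(16, 16))
refl = single_scale_retinex(img)
flat_refl = single_scale_retinex(np.full((8, 8), 0.5))  # uniform scene
```

A uniformly lit scene yields zero reflectance everywhere, which is the property that lets the corrected saliency discount illumination before the foreground/background comparison.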
Image stitching method by multi-feature constrained alignment and colour adjustment
IET Image Process. Pub Date : 2021-01-07 DOI: 10.1049/IPR2.12120
Xingsheng Yuan, Yongbin Zheng, Wei Zhao, Jiongming Su, Jianzhai Wu
Citations: 4
Unsupervised automated retinal vessel segmentation based on Radon line detector and morphological reconstruction
IET Image Process. Pub Date : 2021-01-05 DOI: 10.1049/IPR2.12119
M. Tavakoli, A. Mehdizadeh, Reza Pourreza-Shahri, J. Dehmeshki
Retinal blood vessel segmentation and analysis is critical for the computer-aided diagnosis of diseases such as diabetic retinopathy. This study presents an automated unsupervised method for segmenting the retinal vasculature based on hybrid methods. The algorithm first applies a preprocessing step using morphological operators to enhance the vessel tree structure against a non-uniform image background. The main processing applies the Radon transform to overlapping windows, followed by vessel validation, vessel refinement and vessel reconstruction to achieve the final segmentation. The method was tested on three publicly available datasets and a local database comprising a total of 188 images. Segmentation performance was evaluated using three measures: accuracy, receiver operating characteristic (ROC) analysis and the structural similarity index. ROC analysis yielded area-under-curve values of 97.39%, 97.01% and 97.12% for DRIVE, STARE and CHASE-DB1, respectively; the corresponding accuracies were 0.9688, 0.9646 and 0.9475. The average structural similarity index values were 0.9650 (DRIVE), 0.9641 (STARE) and 0.9625 (CHASE-DB1). These results are comparable to the best published results to date, exceeding them on several of the datasets, with similar performance in terms of accuracy.
Citations: 11
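The core idea of applying the Radon transform to a window so that a vessel shows up as a projection peak can be sketched coarsely with four fixed directions (the actual method uses a full Radon transform over many angles plus validation and reconstruction steps, none of which are shown here):

```python
import numpy as np

def radon_projections(window):
    """Coarse Radon transform: intensity sums along four line families."""
    h, w = window.shape
    return {
        0: window.sum(axis=0),    # sums down columns -> detects vertical lines
        90: window.sum(axis=1),   # sums across rows -> detects horizontal lines
        45: np.array([np.trace(window, offset=o) for o in range(-h + 1, w)]),
        135: np.array([np.trace(window[::-1], offset=o) for o in range(-h + 1, w)]),
    }

def strongest_line(window):
    """A line yields one dominant peak in the projection taken along it;
    score each direction by peak height above the projection mean."""
    proj = radon_projections(window)
    return max(proj, key=lambda ang: proj[ang].max() - proj[ang].mean())

win = np.zeros((15, 15))
win[:, 7] = 1.0                 # a vertical, vessel-like bright line
angle = strongest_line(win)
```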
A hybrid feature descriptor with Jaya optimised least squares SVM for facial expression recognition
IET Image Process. Pub Date : 2021-01-05 DOI: 10.1049/IPR2.12118
Nikunja Bihari Kar, D. Nayak, Korra Sathya Babu, Yudong Zhang
Facial expression recognition has been a long-standing problem in the field of computer vision. This paper proposes a new, simple scheme for effective recognition of facial expressions based on a hybrid feature descriptor and an improved classifier. Inspired by the success of the stationary wavelet transform in many computer vision tasks, the stationary wavelet transform is first applied to the pre-processed face image. Pyramid of histograms of orientation gradients features are then computed from the low-frequency stationary wavelet transform coefficients to capture more prominent details from facial images. The key idea of this hybrid feature descriptor is to exploit both spatial- and frequency-domain features that are at the same time robust against illumination and noise. The relevant features are subsequently determined using linear discriminant analysis. A new least squares support vector machine parameter tuning strategy is proposed, using a contemporary optimisation technique called Jaya optimisation, for the classification of facial expressions. Experimental evaluations are performed on the Japanese Female Facial Expression (JAFFE) and Extended Cohn–Kanade (CK+) datasets, and the results, based on a 5-fold stratified cross-validation test, confirm the superiority of the proposed method over state-of-the-art approaches.
Citations: 4
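The Jaya update rule itself is parameter-free and well documented: each candidate moves toward the current best solution and away from the worst. A minimal sketch follows, with a toy quadratic objective standing in for the actual LS-SVM hyper-parameter-tuning objective (which the abstract does not specify):

```python
import numpy as np

def jaya(objective, dim=2, n_pop=10, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Jaya update: x' = x + r1*(best - |x|) - r2*(worst - |x|),
    with greedy acceptance and no algorithm-specific tuning parameters."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(n_pop, dim))
    fit = np.array([objective(p) for p in pop])
    for _ in range(iters):
        best = pop[np.argmin(fit)].copy()
        worst = pop[np.argmax(fit)].copy()
        r1, r2 = rng.random((2, n_pop, dim))
        cand = np.clip(pop + r1 * (best - np.abs(pop))
                       - r2 * (worst - np.abs(pop)), lo, hi)
        cand_fit = np.array([objective(c) for c in cand])
        improved = cand_fit < fit                 # keep a move only if it helps
        pop[improved], fit[improved] = cand[improved], cand_fit[improved]
    k = int(np.argmin(fit))
    return pop[k], float(fit[k])

# hypothetical stand-in for the LS-SVM hyper-parameter objective
best_params, best_val = jaya(lambda x: float(np.sum((x - 1.0) ** 2)))
```

The appeal for classifier tuning is exactly the absence of algorithm-specific parameters: only population size and iteration count need choosing.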
An exclusive-disjunction-based detection of neovascularisation using multi-scale CNN
IET Image Process. Pub Date : 2021-01-03 DOI: 10.1049/ipr2.12122
Geetha Pavani Pappu, B. Biswal, M. Sairam, P. Biswal
In this article, an exclusive-disjunction-based method for detecting neovascularisation (NV), the formation of new blood vessels on the retinal surface, is presented. These vessels, being thin and fragile, rupture easily, leading to permanent blindness. The proposed algorithm consists of two stages. In the first stage, retinal images are classified into non-NV and NV using a multi-scale convolutional neural network. In the second stage, 13 relevant features are extracted from the vascular map of NV images to obtain the pixel locations of new blood vessels, using a directional matched filter together with the difference-of-Laplacian-of-Gaussian operator, followed by an exclusive disjunction function with adaptive thresholding of the vascular map. At the same time, the pixel locations of the optic disc (OD) are detected using intensity distributions and variations on the retinal images. Finally, the pixel locations of the new blood vessels and the OD are compared for classification: if the pixel locations of new blood vessels fall inside the OD, they are labelled as NV on the OD; otherwise, they are labelled as NV elsewhere. The proposed algorithm achieved an accuracy of 99.5%, specificity of 97.5%, sensitivity of 98.9% and area under the curve of 94.2% when tested on 155 non-NV and 115 NV images.
Citations: 4
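The exclusive-disjunction step can be sketched as an XOR of two locally thresholded maps: pixels present in the matched-filter response but not in the baseline vascular map (or vice versa) become new-vessel candidates. The local-mean threshold and the synthetic data below are illustrative assumptions, not the paper's exact operators:

```python
import numpy as np

def adaptive_threshold(img, k=7):
    """A pixel is 'on' when it exceeds the mean of its k x k neighbourhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = img[i, j] > padded[i:i + k, j:j + k].mean()
    return out

def xor_new_vessels(filtered_map, vascular_map, k=7):
    """Exclusive disjunction of the two binarised maps: pixels present in
    exactly one map are flagged as candidate new vessels."""
    return adaptive_threshold(filtered_map, k) ^ adaptive_threshold(vascular_map, k)

rng = np.random.default_rng(3)
base = rng.uniform(0.0, 0.2, size=(20, 20))   # baseline vascular map (noise only)
filtered = base.copy()
filtered[5:15, 10] = 1.0                      # a thin new vessel in one map only
candidates = xor_new_vessels(filtered, base)
```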
Error feedback denoising network
IET Image Process. Pub Date : 2021-01-02 DOI: 10.1049/ipr2.12121
R. Hou, Fang Li
Recently, deep convolutional neural networks have been successfully used for image denoising due to their favourable performance. This paper examines the error feedback mechanism for image denoising and proposes an error feedback denoising network. Specifically, the down-and-up projection sequence is used to estimate the noise feature, and the residual connection removes the clean structures from the noise features. The essential difference between the proposed network and other existing feedback networks is the projection sequence: the error feedback projection sequence here is down-and-up, which is more suitable for image denoising than the existing up-and-down order. Moreover, a compression block is designed to improve the expressive ability of the usual 1 × 1 convolutional compression layer. The advantage of the well-designed down-and-up block is that it has fewer network parameters than other feedback networks while enlarging the receptive field. The error feedback denoising network is evaluated on denoising and JPEG image deblocking, and extensive experiments verify the effectiveness of the down-and-up block and demonstrate that the network is comparable with the state of the art. The source code for reproducing the results is available at https://github.com/Houruizhi/EFDN.
Citations: 0
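A parameter-free analogue of the down-and-up projection sequence can illustrate the idea (the paper's blocks are learned convolutional projections, so this is only a structural sketch): downsampling and re-upsampling keeps the low-frequency content, and the residual against the input serves as a noise estimate.

```python
import numpy as np

def down(x, f=2):
    """Average-pool downsampling by factor f (sizes assumed divisible by f)."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def up(x, f=2):
    """Nearest-neighbour upsampling by factor f."""
    return np.repeat(np.repeat(x, f, axis=0), f, axis=1)

def down_up_noise_estimate(noisy, f=2):
    """Low frequencies survive the down-and-up round trip, so the residual
    against the input is dominated by noise and fine detail."""
    return noisy - up(down(noisy, f), f)

rng = np.random.default_rng(4)
noisy = 0.5 + 0.1 * rng.standard_normal((16, 16))   # smooth scene plus noise
noise_est = down_up_noise_estimate(noisy)
denoised = noisy - noise_est   # the residual connection removes the estimate
```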