IET Image Processing: Latest Articles

An Improved Object Detection Algorithm for UAV Images Based on Orthogonal Channel Attention Mechanism and Triple Feature Encoder
IF 2.0 | CAS Q4 | Computer Science
IET Image Processing | Pub Date: 2025-04-10 | DOI: 10.1049/ipr2.70061
Wenfeng Wang, Chaomin Wang, Sheng Lei, Min Xie, Binbin Gui, Fang Dong
{"title":"An Improved Object Detection Algorithm for UAV Images Based on Orthogonal Channel Attention Mechanism and Triple Feature Encoder","authors":"Wenfeng Wang,&nbsp;Chaomin Wang,&nbsp;Sheng Lei,&nbsp;Min Xie,&nbsp;Binbin Gui,&nbsp;Fang Dong","doi":"10.1049/ipr2.70061","DOIUrl":"https://doi.org/10.1049/ipr2.70061","url":null,"abstract":"<p>Object detection in Unmanned Aerial Vehicle (UAV) imagery plays an important role in many fields. However, UAV images usually exhibit characteristics different from those of natural images, such as complex scenes, dense small targets, and significant variations in target scales, which pose considerable challenges for object detection tasks. To address these issues, this paper presents a novel object detection algorithm for UAV images based on YOLOv8 (referred to as OATF-YOLO). First, an orthogonal channel attention mechanism is added to the backbone network to imporve the algorithm's ability to extract features and clear up any confusion between features in the foreground and background. Second, a triple feature encoder and a scale sequence feature fusion module are integrated into the neck network to bolster the algorithm's multi-scale feature fusion capability, thereby mitigating the impact of substantial differences in target scales. Finally, an inner factor is introduced into the loss function to further upgrade the robustness and detection accuracy of the algorithm. Experimental results on the VisDrone2019-DET dataset indicate that the proposed algorithm significantly outperforms the baseline model. On the validation set, the OATF-YOLO algorithm achieves a precision of 59.1%, a recall of 40.5%, an mAP50 of 42.5%, and an mAP50:95 of 25.8%. These values represent improvements of 3.8%, 3.0%, 4.1%, and 3.3%, respectively. Similarly, on the test set, the OATF-YOLO algorithm achieves a precision of 52.3%, a recall of 34.7%, an mAP50 of 33.4%, and an mAP50:95 of 19.1%, reflecting enhancements of 4.0%, 3.3%, 4.0%, and 2.6%, respectively. To further validate the model's robustness and scalability, experiments are conducted on the NWPU-VHR10 dataset, and OATF-YOLO also achieves excellent performance. Furthermore, compared to several classical object detection algorithms, OATF-YOLO demonstrates superior detection performance on both datasets and indicates that it is better suited for UAV image object detection scenarios.</p>","PeriodicalId":56303,"journal":{"name":"IET Image Processing","volume":"19 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70061","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143818413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
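The abstract does not define the "inner factor", but the name matches the published Inner-IoU idea, in which both boxes are rescaled around their centres by a ratio before the overlap is measured. A minimal PyTorch sketch of that idea, assuming axis-aligned (x1, y1, x2, y2) boxes; the function name and default ratio are illustrative, not taken from the paper:

```python
import torch

def inner_iou(box1, box2, ratio=0.7, eps=1e-7):
    """Auxiliary 'inner' IoU: both boxes are rescaled around their centres
    (ratio < 1 shrinks, ratio > 1 enlarges) before the overlap is measured,
    which changes the loss's sensitivity for high- or low-IoU pairs.
    Boxes are (x1, y1, x2, y2) tensors of shape (N, 4)."""
    def rescale(b):
        cx, cy = (b[:, 0] + b[:, 2]) / 2, (b[:, 1] + b[:, 3]) / 2
        w, h = (b[:, 2] - b[:, 0]) * ratio, (b[:, 3] - b[:, 1]) * ratio
        return torch.stack((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2), dim=1)

    b1, b2 = rescale(box1), rescale(box2)
    lt = torch.max(b1[:, :2], b2[:, :2])      # top-left of the intersection
    rb = torch.min(b1[:, 2:], b2[:, 2:])      # bottom-right of the intersection
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area1 = (b1[:, 2] - b1[:, 0]) * (b1[:, 3] - b1[:, 1])
    area2 = (b2[:, 2] - b2[:, 0]) * (b2[:, 3] - b2[:, 1])
    return inter / (area1 + area2 - inter + eps)
```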
Military Aircraft Recognition Method Based on Attention Mechanism in Remote Sensing Images
IF 2.0 | CAS Q4 | Computer Science
IET Image Processing | Pub Date: 2025-04-09 | DOI: 10.1049/ipr2.70069
Kun Liu, Zhengfan Xu, Yang Liu, Guofeng Xu
{"title":"Military Aircraft Recognition Method Based on Attention Mechanism in Remote Sensing Images","authors":"Kun Liu,&nbsp;Zhengfan Xu,&nbsp;Yang Liu,&nbsp;Guofeng Xu","doi":"10.1049/ipr2.70069","DOIUrl":"https://doi.org/10.1049/ipr2.70069","url":null,"abstract":"&lt;p&gt;Remote sensing images play a crucial role in fields such as reconnaissance and early warning, intelligence analysis, etc. Due to factors such as climate, season, lighting, occlusion and even atmospheric scattering during remote sensing image acquisition, targets of the same model exhibit significant intra-class variability. This article applies deep learning technology to the field of military aircraft recognition in remote sensing images and proposes a You Only Look Once Version 8 Small (YOLOv8s) remote sensing image military aircraft recognition algorithm based on an attention mechanism—YOLOv8s-TDP (YOLOv8s+TripletAttention+dysample+PIoU). First, the TripletAttention attention module is used in the neck network, which captures cross-dimensional interactions and utilises a three-branch structure to calculate attention weights. This further enhances the network's ability to preserve details and restore colours in the process of image fusion. Secondly, an efficient dynamic upsampler, dysample, is used to achieve dynamic upsampling through point sampling, which improves the problems of detail loss, jagged edges, and image distortion that may occur with nearest neighbour interpolation. Finally, replacing the original model loss function with PIoU (Pixels Intersection over Union), IoU (Intersection over Union) is calculated at the pixel level to more accurately capture small overlapping areas, reduce missed detection rates, and improve accuracy. On the publicly available dataset The Remote Sensing Image Military Aircraft Target Recognition Dataset(MAR20), our proposed YOLOv8s-TDP model achieved a &lt;span&gt;&lt;/span&gt;&lt;math&gt;\u0000 &lt;semantics&gt;\u0000 &lt;mrow&gt;\u0000 &lt;mi&gt;P&lt;/mi&gt;\u0000 &lt;mi&gt;r&lt;/mi&gt;\u0000 &lt;mi&gt;e&lt;/mi&gt;\u0000 &lt;mi&gt;c&lt;/mi&gt;\u0000 &lt;mi&gt;i&lt;/mi&gt;\u0000 &lt;mi&gt;s&lt;/mi&gt;\u0000 &lt;mi&gt;i&lt;/mi&gt;\u0000 &lt;mi&gt;o&lt;/mi&gt;\u0000 &lt;mi&gt;n&lt;/mi&gt;\u0000 &lt;/mrow&gt;\u0000 &lt;annotation&gt;${mathrm Precision} $&lt;/annotation&gt;\u0000 &lt;/semantics&gt;&lt;/math&gt; of 82.96%, &lt;span&gt;&lt;/span&gt;&lt;math&gt;\u0000 &lt;semantics&gt;\u0000 &lt;mrow&gt;\u0000 &lt;mi&gt;R&lt;/mi&gt;\u0000 &lt;mi&gt;e&lt;/mi&gt;\u0000 &lt;mi&gt;c&lt;/mi&gt;\u0000 &lt;mi&gt;a&lt;/mi&gt;\u0000 &lt;mi&gt;l&lt;/mi&gt;\u0000 &lt;mi&gt;l&lt;/mi&gt;\u0000 &lt;/mrow&gt;\u0000 &lt;annotation&gt;${mathrm Recall} $&lt;/annotation&gt;\u0000 &lt;/semantics&gt;&lt;/math&gt; of 80.71%, &lt;span&gt;&lt;/span&gt;&lt;math&gt;\u0000 &lt;semantics&gt;\u0000 &lt;mrow&gt;\u0000 &lt;mi&gt;m&lt;/mi&gt;\u0000 &lt;mi&gt;A&lt;/mi&gt;\u0000 &lt;msub&gt;\u0000 &lt;mi&gt;P&lt;/mi&gt;\u0000 &lt;mn&gt;0.5&lt;/mn&gt;\u0000 &lt;/msub&gt;\u0000 &lt;/mrow&gt;\u0000 &lt;annotation&gt;$mA{P}_{0.5}$&lt;/annotation&gt;\u0000 &lt;/semantics&gt;&lt;/math&gt; of 87.11% and &lt;span&gt;&lt;/span&gt;&lt;math&gt;\u0000 &lt;semantics&gt;\u0000 &lt;mrow&gt;\u0000 ","PeriodicalId":56303,"journal":{"name":"IET Image Processing","volume":"19 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-09","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70069","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143809709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
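As a rough illustration of "IoU calculated at the pixel level", the sketch below rasterises two axis-aligned boxes onto a canvas and counts overlapping pixels. The actual PIoU loss uses a differentiable pixel-membership kernel and handles oriented boxes, so this is a simplification of the idea, not the paper's method; the canvas size and integer-coordinate assumption are illustrative:

```python
import numpy as np

def pixel_iou(box_a, box_b, canvas=(640, 640)):
    """IoU computed by counting pixels: rasterise each (x1, y1, x2, y2) box
    (integer pixel coordinates) into a boolean mask and compare the masks."""
    mask_a = np.zeros(canvas, dtype=bool)
    mask_b = np.zeros(canvas, dtype=bool)
    mask_a[box_a[1]:box_a[3], box_a[0]:box_a[2]] = True
    mask_b[box_b[1]:box_b[3], box_b[0]:box_b[2]] = True
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0
```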
A Clustering-Based Color Reordering Method for Reversible Data Hiding in Palette Images
IF 2.0 | CAS Q4 | Computer Science
IET Image Processing | Pub Date: 2025-04-09 | DOI: 10.1049/ipr2.70058
Jianxuan Deng, Yi Chen, Hongxia Wang, Chun Guo, Yunhe Cui, Guowei Shen
{"title":"A Clustering-Based Color Reordering Method for Reversible Data Hiding in Palette Images","authors":"Jianxuan Deng,&nbsp;Yi Chen,&nbsp;Hongxia Wang,&nbsp;Chun Guo,&nbsp;Yunhe Cui,&nbsp;Guowei Shen","doi":"10.1049/ipr2.70058","DOIUrl":"https://doi.org/10.1049/ipr2.70058","url":null,"abstract":"<p>A recent research work pointed out that the reversible data hiding algorithms proposed for gray-scale images can be implemented on the reconstructed palette images to improve embedding capacity and visual quality by reordering the color table. However, the reordering effect has a significant impact on performance improvement. Therefore, we propose a clustering-based color reordering method for reversible data hiding in palette images to improve the reordering effect and further enhance the performance. In this method, we first design a centroid initialization method to select the initial centroids and then exploit the K-means algorithm to generate <span></span><math>\u0000 <semantics>\u0000 <mi>K</mi>\u0000 <annotation>$K$</annotation>\u0000 </semantics></math> clusters for the colors in the original color table. In the following, our proposed method, respectively, reorders the colors of these clusters by a greedy strategy and concatenates them into the reordered color table. Based on the relationship between the original and the reordered color tables, a novel index matrix can be reconstructed. Finally, state-of-the-art reversible data hiding algorithms can be implemented on the reconstructed index matrix for performance improvement. Since our proposed method improves the reordering effect, enhances the correlation of the reconstructed index matrix, and reduces the length of the encoded location map, the maximal embedding capacities and the visual quality under the fixed embedding capacities are improved. We conducted experiments on two image datasets and six standard images to verify that the performance improvement of our proposed reordering method is better than that of the state-of-the-art methods.</p>","PeriodicalId":56303,"journal":{"name":"IET Image Processing","volume":"19 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70058","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143809710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
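A minimal sketch of the pipeline described above (K-means clustering of the palette, then a greedy per-cluster reordering), assuming an RGB palette stored as an (N, 3) array with N >= k. The paper's dedicated centroid initialisation and its rule for ordering the clusters themselves are simplified to defaults here:

```python
import numpy as np
from sklearn.cluster import KMeans

def reorder_palette(palette, k=8):
    """Cluster palette colours with K-means, walk each cluster greedily from
    its first colour to the nearest unvisited neighbour, and concatenate the
    chains in cluster-index order."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(palette.astype(float))
    reordered = []
    for c in range(k):
        colors = palette[labels == c].tolist()
        if not colors:                       # guard against an empty cluster
            continue
        chain = [colors.pop(0)]
        while colors:                        # greedy nearest-neighbour walk
            last = np.array(chain[-1])
            j = int(np.argmin([np.linalg.norm(last - np.array(x)) for x in colors]))
            chain.append(colors.pop(j))
        reordered.extend(chain)
    return np.array(reordered)
```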
Adaptive Dynamic Range Compression Method for Panchromatic Images Based on Detail Regions Gradient Preservation
IF 2.0 | CAS Q4 | Computer Science
IET Image Processing | Pub Date: 2025-04-08 | DOI: 10.1049/ipr2.70067
Peng Zhang, Qiang Xu, Yuwei Zhai, Tao Guo, Jiale Wang, Jinlong Xie
{"title":"Adaptive Dynamic Range Compression Method for Panchromatic Images Based on Detail Regions Gradient Preservation","authors":"Peng Zhang,&nbsp;Qiang Xu,&nbsp;Yuwei Zhai,&nbsp;Tao Guo,&nbsp;Jiale Wang,&nbsp;Jinlong Xie","doi":"10.1049/ipr2.70067","DOIUrl":"https://doi.org/10.1049/ipr2.70067","url":null,"abstract":"<p>An effective panchromatic remote sensing image dynamic range compression method is proposed to solve the issue in existing panchromatic remote sensing image grayscale conversion algorithms, which tend to cause overexposure in certain areas or overall excessive darkness. This method employs an empirical approach of detail region extraction and optimal parameter selection based on gradient to achieve dynamic range compression. First, a novel adaptive detail segmentation method based on the expansion of detail points within image blocks is introduced. Second, a detail optimisation module is established based on local detail preservation, which optimises the extraction of detail regions using gradient-based Otsu segmentation results and improved CLAHE-gradient-based Otsu segmentation results. Then, candidate adaptive dynamic range compression coefficients are determined based on the extracted detail layers, and the optimal adaptive dynamic range compression parameters are selected based on the high gradient proportion of the detail regions. Simulation experiments are conducted on multiple panchromatic remote sensing images with different scenes using the proposed method, and the effects of various dynamic range compression methods are evaluated based on multiple metrics. The results indicate that the proposed dynamic range compression method demonstrates excellent performance.</p>","PeriodicalId":56303,"journal":{"name":"IET Image Processing","volume":"19 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70067","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143801585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
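One ingredient of the detail-extraction stage, gradient-based Otsu segmentation, can be sketched with OpenCV as follows. The block-wise detail-point expansion and the improved-CLAHE variant are omitted, so this is a starting point under stated assumptions, not the full method:

```python
import cv2
import numpy as np

def gradient_otsu_detail_mask(img16):
    """Threshold the gradient magnitude of a 16-bit panchromatic image with
    Otsu's method to obtain a coarse detail-region mask."""
    src = img16.astype(np.float32)
    gx = cv2.Sobel(src, cv2.CV_32F, 1, 0)            # horizontal gradient
    gy = cv2.Sobel(src, cv2.CV_32F, 0, 1)            # vertical gradient
    mag = cv2.magnitude(gx, gy)
    # Otsu in OpenCV needs an 8-bit image, so rescale the magnitude first
    mag8 = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(mag8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```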
Identifying the retinal layers from optical coherence tomography images using a 3D segmentation method
IF 2.0 | CAS Q4 | Computer Science
IET Image Processing | Pub Date: 2025-04-07 | DOI: 10.1049/ipr2.13306
Akter Hossain, Andrea Giani, Victor Chong, Sobha Sivaprasad, Tasin R. Bhuiyan, Theodore Smith, Manaswini Pradhan, Alauddin Bhuiyan
{"title":"Identifying the retinal layers from optical coherence tomography images using a 3D segmentation method","authors":"Akter Hossain,&nbsp;Andrea Giani,&nbsp;Victor Chong,&nbsp;Sobha Sivaprasad,&nbsp;Tasin R. Bhuiyan,&nbsp;Theodore Smith,&nbsp;Manaswini Pradhan,&nbsp;Alauddin Bhuiyan","doi":"10.1049/ipr2.13306","DOIUrl":"https://doi.org/10.1049/ipr2.13306","url":null,"abstract":"<p>A novel automated method for segmenting retinal layers in three-dimensional (3D) space from spectral domain optical coherence tomography (SD-OCT) images. Compared to 2D segmentation, 3D segmentation uses more data and produces findings that are more accurate and reliable. The class-specific area of interest (ROI) choice and three important reference class approximations make the suggested technique precise, effective, and reliable. In the first step, contours are detected based on gradient intensity. To choose a smaller region of interest (ROI), the second stage entails acquiring the identified boundary neighbour B scan data for the selected ROI by categorising the problem as a graph problem. The third stage involves locating edge pixels using Canny Edge Detection from nodes. In order to calculate the edge weight of a histogram, slope similarity to the reference line and node characteristics are considered. The fourth phase boundary is precisely found by Dijkstra's shortest path algorithm. The accuracy of the method was tested based on 288 B scans of 12 patients (ten normal macular degeneration (AMD) subjects and 2 age-related subjects from two different institutions). Five recent automated procedures are compared with the results to further validate the findings of the fifth phase. The outcomes demonstrate a mean original mean square error (RMSE) for each of the cut-off values, which are 2.82, 4.88, 2.03, 3.77, and 0.64 pixels, respectively. As can be seen, the suggested strategy outperforms the existing models' significantly with a return on investment of 0.26 pixels.</p>","PeriodicalId":56303,"journal":{"name":"IET Image Processing","volume":"19 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.13306","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143793445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
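A minimal sketch of the per-B-scan graph search: treat each pixel as a node whose cost is low on strong boundary evidence, and run Dijkstra column by column from the left edge to the right. The paper's edge weights combine gradient, slope similarity, and node characteristics; here a non-negative cost image is assumed to be precomputed, and the function name is illustrative:

```python
import heapq
import numpy as np

def trace_layer(cost):
    """Dijkstra shortest path across a 2-D cost image, moving one column per
    step to one of the three nearest rows. Returns one boundary row index per
    column. `cost` must be non-negative."""
    rows, cols = cost.shape
    dist = np.full((rows, cols), np.inf)
    prev = np.full((rows, cols), -1, dtype=int)
    dist[:, 0] = cost[:, 0]                  # any row may start the boundary
    pq = [(cost[r, 0], r, 0) for r in range(rows)]
    heapq.heapify(pq)
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c] or c == cols - 1:  # stale entry or last column
            continue
        for dr in (-1, 0, 1):                # step right, staying near the path
            nr = r + dr
            if 0 <= nr < rows and d + cost[nr, c + 1] < dist[nr, c + 1]:
                dist[nr, c + 1] = d + cost[nr, c + 1]
                prev[nr, c + 1] = r
                heapq.heappush(pq, (dist[nr, c + 1], nr, c + 1))
    # backtrack from the cheapest endpoint in the last column
    path = [int(np.argmin(dist[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(prev[path[-1], c])
    return path[::-1]
```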
Improving Biomedical Image Pattern Identification by Deep B4-GraftingNet: Application to Pneumonia Detection
IF 2.0 | CAS Q4 | Computer Science
IET Image Processing | Pub Date: 2025-04-07 | DOI: 10.1049/ipr2.70064
Syed Adil Hussain Shah, Syed Taimoor Hussain Shah, Abdul Muiz Fayyaz, Syed Baqir Hussain Shah, Mussarat Yasmin, Mudassar Raza, Angelo Di Terlizzi, Marco Agostino Deriu
{"title":"Improving Biomedical Image Pattern Identification by Deep B4-GraftingNet: Application to Pneumonia Detection","authors":"Syed Adil Hussain Shah,&nbsp;Syed Taimoor Hussain Shah,&nbsp;Abdul Muiz Fayyaz,&nbsp;Syed Baqir Hussain Shah,&nbsp;Mussarat Yasmin,&nbsp;Mudassar Raza,&nbsp;Angelo Di Terlizzi,&nbsp;Marco Agostino Deriu","doi":"10.1049/ipr2.70064","DOIUrl":"https://doi.org/10.1049/ipr2.70064","url":null,"abstract":"<p>VGG-16 and Inception are widely used CNN architectures for image classification, but they face challenges in target categorization. This study introduces B4-GraftingNet, a novel deep learning model that integrates VGG-16's hierarchical feature extraction with Inception's diversified receptive field strategy. The model is trained on the OCT-CXR dataset and evaluated on the NIH-CXR dataset to ensure robust generalization. Unlike conventional approaches, B4-GraftingNet incorporates binary particle swarm optimization (BPSO) for feature selection and grad-CAM for interpretability. Additionally, deep feature extraction is performed, and multiple machine learning classifiers (SVM, KNN, random forest, naïve Bayes) are evaluated to determine the optimal feature representation. The model achieves 94.01% accuracy, 94.22% sensitivity, 93.36% specificity, and 95.18% F1-score on OCT-CXR and maintains 87.34% accuracy on NIH-CXR despite not being trained on it. These results highlight the model's superior classification performance, feature adaptability, and potential for real-world deployment in both medical and general image classification tasks.</p>","PeriodicalId":56303,"journal":{"name":"IET Image Processing","volume":"19 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70064","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143793406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
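A small sketch of the classifier-comparison step, assuming the deep features have already been extracted as an (n_samples, n_dims) matrix; the BPSO feature-selection stage is omitted, and the cross-validation setup is an assumption rather than the paper's protocol:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def best_classifier(features, labels):
    """Score the four classic classifiers named in the abstract on top of
    pooled CNN features and return the best one with all mean CV accuracies."""
    candidates = {
        "svm": SVC(kernel="rbf"),
        "knn": KNeighborsClassifier(n_neighbors=5),
        "rf": RandomForestClassifier(n_estimators=200),
        "nb": GaussianNB(),
    }
    scores = {name: cross_val_score(clf, features, labels, cv=5).mean()
              for name, clf in candidates.items()}
    return max(scores, key=scores.get), scores
```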
LC-YOLO: An Improved YOLOv8-Based Lane Detection Model for Enhanced Lane Intrusion Detection
IF 2.0 | CAS Q4 | Computer Science
IET Image Processing | Pub Date: 2025-04-07 | DOI: 10.1049/ipr2.70065
Abdulkareem Abdullah, Guo Ling, Mohammed Al-Soswa, Ali Desbi
{"title":"LC-YOLO: An Improved YOLOv8-Based Lane Detection Model for Enhanced Lane Intrusion Detection","authors":"Abdulkareem Abdullah,&nbsp;Guo Ling,&nbsp;Mohammed Al-Soswa,&nbsp;Ali Desbi","doi":"10.1049/ipr2.70065","DOIUrl":"https://doi.org/10.1049/ipr2.70065","url":null,"abstract":"<p>Lane intrusion detection is an essential component of road safety, as vehicles crossing into lanes without proper signalling can lead to accidents, congestion and traffic violations. In order to overcome these challenges, it has become critical for the future autonomous vehicles and ADAS to possess a precise and reliable lane detection technique which could then further monitor the lane violation in real-time. However, lane detection is still challenging due to variants in lighting conditions, obstructions and weak markers. This research paper proposes a new YOLOv8 architecture for lane detection and traffic monitoring systems. The modifications considered in the paper are the addition of the large separable kernel attention (LSKA) module and the coordinate attention (CA) mechanism, which enhance the model's feature extraction and its performance in various real-world scenarios. Furthermore, a new lane intrusion detection (LID) algorithm was created which effectively distinguishes between actual lane intrusions forbidden ones (e.g., crossing solid lane lines) and permissible ones (e.g., crossing dashed lane lines), a crucial aspect for traffic management. The model was successfully tested by transferring the data which was personally recorded on Chinese highways and that show its function in a real environment. The model was tested using a custom dataset which included videos taken on Chinese highways, demonstrating its ability to work under real-world conditions. In this way, the results show that the proposed YOLOv8 model improves the accuracy and reliability of the lane detection tasks, with the model achieving a mAP of 97.9%, which will be useful and a significant advancement in the application of AI to public safety and highlights the critical role of state-of-the-art deep learning algorithms for enhancing road safety and traffic control.</p>","PeriodicalId":56303,"journal":{"name":"IET Image Processing","volume":"19 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70065","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143793444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
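A toy version of the solid-versus-dashed decision at the heart of the LID algorithm, assuming the detector returns lane segments tagged with their line style as (x1, y1, x2, y2, style) tuples; the paper's actual geometry and tracking logic are more involved, and the names here are hypothetical:

```python
def lane_intrusion(vehicle_box, lane_segments):
    """Classify a vehicle's lane crossing: forbidden over a solid line,
    permitted over a dashed line, none otherwise. vehicle_box is
    (x1, y1, x2, y2); each lane segment is (x1, y1, x2, y2, style)."""
    vx1, vy1, vx2, vy2 = vehicle_box

    def overlaps(seg):
        x1, y1, x2, y2, _ = seg
        # bounding-box overlap between the segment and the vehicle footprint
        return (max(vx1, min(x1, x2)) <= min(vx2, max(x1, x2)) and
                max(vy1, min(y1, y2)) <= min(vy2, max(y1, y2)))

    styles = {seg[4] for seg in lane_segments if overlaps(seg)}
    if "solid" in styles:
        return "forbidden"       # crossing a solid line is a violation
    if "dashed" in styles:
        return "permitted"       # crossing a dashed line is a legal lane change
    return "none"
```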
MFF-YOLOv8: Small Object Detection Based on Multi-Scale Feature Fusion for UAV Remote Sensing Images
IF 2.0 | CAS Q4 | Computer Science
IET Image Processing | Pub Date: 2025-04-06 | DOI: 10.1049/ipr2.70066
Kun Hu, Jinzheng Lu, Chaoquan Zheng, Qiang Xiang, Ling Miao
{"title":"MFF-YOLOv8: Small Object Detection Based on Multi-Scale Feature Fusion for UAV Remote Sensing Images","authors":"Kun Hu,&nbsp;Jinzheng Lu,&nbsp;Chaoquan Zheng,&nbsp;Qiang Xiang,&nbsp;Ling Miao","doi":"10.1049/ipr2.70066","DOIUrl":"https://doi.org/10.1049/ipr2.70066","url":null,"abstract":"<p>As a popular task in drone-captured scenes, object detection involves images with a large number of small objects, but current networks often suffer missed and false detections. To address this problem, we propose a YOLO algorithm MFF-YOLOv8 based on multi-scale feature fusion for small target detection in UAV aerial images. First, a high-fesolution feature fusion pyramid (HFFP) is designed, which utilizes high-resolution feature maps containing much information about small objects to guide the feature fusion module, weighting and fusing feature maps to enhance the network's ability to represent small targets. Meanwhile, a reconstruction feature selection (RFS) module is employed to remove the large amounts of noise produced by high-resolution feature maps. Second, a hybrid efficient multi-scale attention (HEMA) mechanism is designed in the backbone network to maximize the retention and extraction of feature information related to small objects while simultaneously suppressing background noise interference. Finally, an Inner-Wise IoU loss function (Inner-WIoU) is designed for joint auxiliary bounding box and dynamic focal bounding box regression, which enhances the accuracy of network regression results, thus improving the detection precision of the model for small objects. MFF-YOLOv8 was experimented on the VisDrone2019 dataset, achieving a 47.9% mAP50, 9.3% up compared with that of the baseline network YOLOv8s. Also, in order to verify the generalization of the overall network, it was evaluated on the DOTA and UAVDT datasets, and the mAP50 was improved by 3.7% and 1.8%, respectively. The results demonstrate that MFF-YOLOv8 significantly enhances detection precision for small objects in UAV aerial scenes.</p>","PeriodicalId":56303,"journal":{"name":"IET Image Processing","volume":"19 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70066","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143787149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
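The HFFP module is not specified in detail in this abstract, but weighting and fusing a high-resolution map with an upsampled lower-resolution one follows a common pattern, sketched below in PyTorch with learnable softmax weights; the real HFFP and RFS designs presumably have more structure, so treat this as a generic stand-in:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    """Fuse a high-resolution feature map with an upsampled low-resolution
    map using two learnable weights normalised by softmax."""
    def __init__(self, channels):
        super().__init__()
        self.w = nn.Parameter(torch.ones(2))
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, hi_res, lo_res):
        # bring the coarser map up to the fine map's spatial size
        lo_up = F.interpolate(lo_res, size=hi_res.shape[-2:], mode="nearest")
        w = torch.softmax(self.w, dim=0)
        return self.conv(w[0] * hi_res + w[1] * lo_up)
```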
Research on Camouflaged Object Segmentation Based on Feature Fusion and Attention Mechanism
IF 2.0 | CAS Q4 | Computer Science
IET Image Processing | Pub Date: 2025-04-06 | DOI: 10.1049/ipr2.70062
Yixuan Wang, Jingke Yan
{"title":"Research on Camouflaged Object Segmentation Based on Feature Fusion and Attention Mechanism","authors":"Yixuan Wang,&nbsp;Jingke Yan","doi":"10.1049/ipr2.70062","DOIUrl":"https://doi.org/10.1049/ipr2.70062","url":null,"abstract":"<p>Camouflaged object detection (COD) aims to detect objects that ‘blend in’ with their surroundings and the lack of a clear boundary between the target object and the background in COD tasks makes accurate detection of targets difficult. Although many innovative algorithms and methods have been developed to improve the results of camouflaged object detection, the problem of poor detection accuracy in complex scenes still exists. To improve the accuracy of camouflage target segmentation, a camouflaged object detection algorithm using contextual feature enhancement and an attention mechanism called amplify and predict network (APNet) is proposed. In this paper, context feature enhancement module (CFEM) and reverse attention prediction module (RAPM) are designed.CFEM can accept multi-level features extracted from the backbone network, and convey the features with enhancement processing to achieve the fusion of multi-level features.RAPM focuses on the edge feature information through the reverse attention mechanism to mine deeper camouflaged target information to achieve and further refine the predicted results. The proposed algorithm achieves weighted F-measure and mean absolute error (MAE) of 0.708 and 0.033 on the COD10K dataset, respectively, and the experimental results on other publicly available datasets are also significantly better than the other 14 state-of-the-art models, and achieves the optimal performance on the four objective evaluation metrics, and the proposed algorithm obtains sharper edge details on COD tasks and improves the prediction performance.</p>","PeriodicalId":56303,"journal":{"name":"IET Image Processing","volume":"19 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70062","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143787150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
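RAPM's reverse attention follows a well-known pattern in COD: invert the sigmoid of the coarse prediction so the network attends to not-yet-detected background and edge regions, then predict a residual correction. A minimal sketch of that pattern, not APNet's exact module:

```python
import torch
import torch.nn as nn

class ReverseAttention(nn.Module):
    """Refine a coarse saliency map by attending to its complement:
    att = 1 - sigmoid(coarse) highlights the regions the coarse prediction
    missed, from which a residual correction is predicted."""
    def __init__(self, channels):
        super().__init__()
        self.refine = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, feat, coarse_pred):
        # feat: (N, C, H, W); coarse_pred: (N, 1, H, W) logits
        att = 1.0 - torch.sigmoid(coarse_pred)   # emphasise background/edges
        residual = self.refine(feat * att)       # correction from masked features
        return coarse_pred + residual            # refined prediction logits
```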
TLEAR-Net: A Network for Defect Detection in Train Wheelset Treads Based on Transfer Learning and Edge Adaptive Reinforcement Attention
IF 2.0 | CAS Q4 | Computer Science
IET Image Processing | Pub Date: 2025-04-04 | DOI: 10.1049/ipr2.70060
Xinliang Hu, Jing He, Changfan Zhang, Xiang Cheng
{"title":"TLEAR-Net: A Network for Defect Detection in Train Wheelset Treads Based on Transfer Learning and Edge Adaptive Reinforcement Attention","authors":"Xinliang Hu,&nbsp;Jing He,&nbsp;Changfan Zhang,&nbsp;Xiang Cheng","doi":"10.1049/ipr2.70060","DOIUrl":"https://doi.org/10.1049/ipr2.70060","url":null,"abstract":"<p>As a critical load-bearing and running component of railway systems, the wheelset's operational safety fundamentally depends on precise detection and localisation of tread defects. Current deep learning-based detection methods face significant challenges in extracting discriminative edge features under small-sample conditions, leading to suboptimal defect localisation accuracy. To address these limitations, this study proposes TLEAR-Net, a novel defect detection framework integrating transfer learning with an edge-adaptive reinforcement attention mechanism. The methodology employs RetinaNet as the baseline architecture, enhanced through multi-stage domain adaptation using COCO 2017 pretraining and parameter-shared ResNet-50 backbone optimisation to bridge cross-domain feature discrepancies. An innovative edge-adaptive reinforcement (EAR) attention module is developed to selectively amplify defect boundary features through learnable gradient operators and hybrid spatial-channel attention mechanisms. Comprehensive evaluations on a proprietary data set annotated defect samples demonstrate the framework's superior performance, achieving state-of-the-art detection accuracy (89.22% mAP) while maintaining real-time processing capability (42.45 FPS).</p>","PeriodicalId":56303,"journal":{"name":"IET Image Processing","volume":"19 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70060","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143770129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
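The EAR module's "learnable gradient operators" suggest convolution kernels initialised to an edge filter and trained further. The abstract does not specify the actual design, so the sketch below is only a guess at that flavour: a Sobel-initialised depthwise convolution whose response gates the features:

```python
import torch
import torch.nn as nn

class EdgeAttention(nn.Module):
    """Edge-gated attention with a learnable gradient operator: a depthwise
    3x3 convolution initialised to the Sobel-x kernel extracts edge responses,
    which a 1x1 conv + sigmoid turns into a per-pixel gate on the features."""
    def __init__(self, channels):
        super().__init__()
        sobel = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.grad = nn.Conv2d(channels, channels, 3, padding=1,
                              groups=channels, bias=False)
        # one Sobel kernel per channel; weight shape is (channels, 1, 3, 3)
        self.grad.weight.data.copy_(sobel.repeat(channels, 1, 1, 1))
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        edges = self.grad(x)
        return x * self.gate(edges) + x          # residual edge re-weighting
```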