Image and Vision Computing: Latest Articles

IRPE: Instance-level reconstruction-based 6D pose estimator
IF 4.2 · CAS Zone 3 · Computer Science
Image and Vision Computing Pub Date: 2025-02-01, DOI: 10.1016/j.imavis.2024.105340
Le Jin, Guoshun Zhou, Zherong Liu, Yuanchao Yu, Teng Zhang, Minghui Yang, Jun Zhou
{"title":"IRPE: Instance-level reconstruction-based 6D pose estimator","authors":"Le Jin ,&nbsp;Guoshun Zhou ,&nbsp;Zherong Liu ,&nbsp;Yuanchao Yu ,&nbsp;Teng Zhang ,&nbsp;Minghui Yang ,&nbsp;Jun Zhou","doi":"10.1016/j.imavis.2024.105340","DOIUrl":"10.1016/j.imavis.2024.105340","url":null,"abstract":"<div><div>The estimation of an object’s 6D pose is a fundamental task in modern commercial and industrial applications. Vision-based pose estimation has gained popularity due to its cost-effectiveness and ease of setup in the field. However, this type of estimation tends to be less robust compared to other methods due to its sensitivity to the operating environment. For instance, in robot manipulation applications, heavy occlusion and clutter are common, posing significant challenges. For safety and robustness in industrial environments, depth information is often leveraged instead of relying solely on RGB images. Nevertheless, even with depth information, 6D pose estimation in such scenarios still remains challenging. In this paper, we introduce a novel 6D pose estimation method that promotes the network’s learning of high-level object features through self-supervised learning and instance reconstruction. The feature representation of the reconstructed instance is subsequently utilized in direct 6D pose regression via a multi-task learning scheme. As a result, the proposed method can differentiate and retrieve each object instance from a scene that is heavily occluded and cluttered, thereby surpassing conventional pose estimators in such scenarios. Additionally, due to the standardized prediction of reconstructed image, our estimator exhibits robustness performance against variations in lighting conditions and color drift. This is a significant improvement over traditional methods that depend on pixel-level sparse or dense features. We demonstrate that our method achieves state-of-the-art performance (e.g., 85.4% on LM-O) on the most commonly used benchmarks with respect to the ADD(-S) metric. Lastly, we present a CLIP dataset that emulates intense occlusion scenarios of industrial environment and conduct a real-world experiment for manipulation applications to verify the effectiveness and robustness of our proposed method.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105340"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
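To make the multi-task scheme in the IRPE abstract concrete, here is a minimal PyTorch sketch of a shared encoder feeding an instance-reconstruction decoder and a direct pose-regression head, trained with a joint loss. The layer sizes, the rotation-plus-translation output, and the loss weighting are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ReconstructionPoseNet(nn.Module):
    """Toy multi-task network: a shared encoder feeds (a) an instance
    reconstruction decoder and (b) a direct 6D pose regression head."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(                       # shared feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                       # reconstructs a standardized instance image
            nn.ConvTranspose2d(feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.pose_head = nn.Sequential(                     # regresses rotation + translation parameters
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 9),                              # e.g. 6D rotation repr. + 3D translation
        )

    def forward(self, x):
        f = self.encoder(x)
        return self.decoder(f), self.pose_head(f)

# Joint loss: reconstruction + pose regression (the 0.1 weighting is arbitrary here).
model = ReconstructionPoseNet()
img = torch.rand(2, 3, 64, 64)
target_recon, target_pose = torch.rand(2, 3, 64, 64), torch.rand(2, 9)
recon, pose = model(img)
loss = nn.functional.l1_loss(recon, target_recon) + 0.1 * nn.functional.mse_loss(pose, target_pose)
loss.backward()
```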
CFENet: Context-aware Feature Enhancement Network for efficient few-shot object counting
IF 4.2 · CAS Zone 3 · Computer Science
Image and Vision Computing Pub Date: 2025-02-01, DOI: 10.1016/j.imavis.2024.105383
Shihui Zhang, Gangzheng Zhai, Kun Chen, Houlin Wang, Shaojie Han
{"title":"CFENet: Context-aware Feature Enhancement Network for efficient few-shot object counting","authors":"Shihui Zhang ,&nbsp;Gangzheng Zhai ,&nbsp;Kun Chen ,&nbsp;Houlin Wang ,&nbsp;Shaojie Han","doi":"10.1016/j.imavis.2024.105383","DOIUrl":"10.1016/j.imavis.2024.105383","url":null,"abstract":"<div><div>Few-shot object counting (FSOC) is designed to estimate the number of objects in any category given a query image and several bounding boxes. Existing methods usually ignore shape information when extracting the appearance of exemplars from query images, resulting in reduced object localization accuracy and count estimates. Meanwhile, these methods also utilize a fixed inner product or convolution for similarity matching, which may introduce background interference and limit the matching of objects with significant intra-class differences. To address the above challenges, we propose a Context-aware Feature Enhancement Network (CFENet) for FSOC. Specifically, our network comprises three main modules: Hierarchical Perception Joint Enhancement Module (HPJEM), Learnable Similarity Matcher (LSM), and Feature Fusion Module (FFM). Firstly, HPJEM performs feature enhancement on the scale transformations of query images and the shapes of exemplars, improving the network’s ability to recognize dense objects. Secondly, LSM utilizes learnable dilated convolutions and linear layers to expand the similarity metric of a fixed inner product, obtaining similarity maps. Then convolution with a given kernel is performed on the similarity maps to get the weighted features. Finally, FFM further fuses weighted features with multi-scale features obtained by HPJEM. We conduct extensive experiments on the specialized few-shot dataset FSC-147 and the subsets Val-COCO and Test-COCO of the COCO dataset. Experimental results validate the effectiveness of our method and show competitive performance. To further verify the generalization of CFENet, we also conduct experiments on the car dataset CARPK.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105383"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
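For context on what CFENet's Learnable Similarity Matcher generalizes, the sketch below shows only the fixed inner-product baseline common in few-shot counting: each pooled exemplar feature is correlated against the query feature map as a 1x1 kernel. The pooling and shapes are assumptions; the paper's LSM replaces this with learnable dilated convolutions and linear layers.

```python
import torch
import torch.nn.functional as F

def inner_product_similarity(query_feat, exemplar_feats):
    """Fixed inner-product matching: each pooled exemplar feature is used as a
    1x1 correlation kernel over the query feature map.

    query_feat:     (C, H, W) feature map of the query image
    exemplar_feats: (K, C, h, w) features cropped from the K exemplar boxes
    returns:        (K, H, W) similarity maps, one per exemplar
    """
    C, H, W = query_feat.shape
    # Pool each exemplar to a single C-dimensional descriptor.
    kernels = exemplar_feats.mean(dim=(2, 3)).view(-1, C, 1, 1)   # (K, C, 1, 1)
    sim = F.conv2d(query_feat.unsqueeze(0), kernels)              # (1, K, H, W)
    return sim.squeeze(0)

query = torch.randn(256, 64, 64)
exemplars = torch.randn(3, 256, 16, 16)
print(inner_product_similarity(query, exemplars).shape)          # torch.Size([3, 64, 64])
```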
Edge guided and Fourier attention-based Dual Interaction Network for scene text erasing
IF 4.2 · CAS Zone 3 · Computer Science
Image and Vision Computing Pub Date: 2025-02-01, DOI: 10.1016/j.imavis.2024.105406
Ran Gong, Anna Zhu, Kun Liu
{"title":"Edge guided and Fourier attention-based Dual Interaction Network for scene text erasing","authors":"Ran Gong,&nbsp;Anna Zhu,&nbsp;Kun Liu","doi":"10.1016/j.imavis.2024.105406","DOIUrl":"10.1016/j.imavis.2024.105406","url":null,"abstract":"<div><div>Scene text erasing (STE) aims to remove the text regions and inpaint those regions with reasonable content in the image. It involves a potential task, i.e., scene text segmentation, in implicate or explicate ways. Most previous methods used cascaded or parallel pipelines to segment text in one branch and erase text in another branch. However, they have not fully explored the information between the two subtasks, i.e., using an interactive method to enhance each other. In this paper, we introduce a novel one-stage STE model called Dual Interaction Network (DINet), which encourages interaction between scene text segmentation and scene text erasing in an end-to-end manner. DINet adopts a shared encoder and two parallel decoders for text segmentation and erasing respectively. Specifically, the two decoders interact via an Interaction Enhancement Module (IEM) in each layer, aggregating the residual information from each other. To facilitate effective and efficient mutual enhancement between the dual tasks, we propose a novel Fourier Transform-based Attention Module (FTAM). In addition, we incorporate an Edge-Guided Module (EGM) into the text segmentation branch to better erase the text boundary regions and generate natural-looking images. Extensive experiments demonstrate that the DINet achieves state-of-the-art performances on several benchmarks. Furthermore, the ablation studies indicate the effectiveness and efficiency of our proposed modules in DINet.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105406"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
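The abstract does not spell out the form of the Fourier Transform-based Attention Module, so the sketch below shows one common way to build frequency-domain attention: an FFT, a learnable complex-valued spectral filter, and an inverse FFT (a global-filter-style layer). Treat it as an assumption-laden illustration rather than DINet's actual FTAM.

```python
import torch
import torch.nn as nn

class FourierAttention(nn.Module):
    """Frequency-domain feature mixing: FFT -> learnable complex filter -> inverse FFT.
    This mirrors the common 'global filter' formulation; the exact FTAM design in
    DINet may differ."""
    def __init__(self, channels, height, width):
        super().__init__()
        # One learnable complex weight per channel and frequency bin (rfft keeps W//2+1 bins).
        self.weight = nn.Parameter(torch.randn(channels, height, width // 2 + 1, 2) * 0.02)

    def forward(self, x):                                     # x: (B, C, H, W)
        freq = torch.fft.rfft2(x, norm="ortho")               # (B, C, H, W//2+1), complex
        freq = freq * torch.view_as_complex(self.weight)      # element-wise spectral gating
        return torch.fft.irfft2(freq, s=x.shape[-2:], norm="ortho")

attn = FourierAttention(channels=64, height=32, width=32)
out = attn(torch.randn(2, 64, 32, 32))
print(out.shape)                                              # torch.Size([2, 64, 32, 32])
```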
1D kernel distillation network for efficient image super-resolution
IF 4.2 · CAS Zone 3 · Computer Science
Image and Vision Computing Pub Date: 2025-02-01, DOI: 10.1016/j.imavis.2024.105411
Yusong Li, Longwei Xu, Weibin Yang, Dehua Geng, Mingyuan Xu, Zhiqi Dong, Pengwei Wang
{"title":"1D kernel distillation network for efficient image super-resolution","authors":"Yusong Li,&nbsp;Longwei Xu,&nbsp;Weibin Yang,&nbsp;Dehua Geng,&nbsp;Mingyuan Xu,&nbsp;Zhiqi Dong,&nbsp;Pengwei Wang","doi":"10.1016/j.imavis.2024.105411","DOIUrl":"10.1016/j.imavis.2024.105411","url":null,"abstract":"<div><div>Recently, there have been significant strides in single-image super-resolution, especially with the integration of transformers. However, the escalating computational demands of large models pose challenges for deployment on edge devices. Therefore, in pursuit of Efficient Image Super-Resolution (EISR), achieving a better balance between task computational complexity and image fidelity becomes imperative. In this paper, we introduce the 1D kernel distillation network (OKDN). Within this network, we have devised a lightweight 1D Large Kernel (OLK) block, incorporating a more lightweight yet highly effective attention mechanism. This block significantly expands the effective receptive field, enhancing performance while mitigating computational costs. Additionally, we develop a Channel Shift Enhanced Distillation (CSED) block to improve distillation efficiency, allocating more computational resources towards increasing network depth. We utilize methods involving partial channel shifting and global feature supervision (GFS) to further augment the effective receptive field. Furthermore, we introduce learnable Gaussian perturbation convolution (LGPConv) to enhance the model’s feature extraction and performance capabilities while upholding low computational complexity. Experimental results demonstrate that our proposed approach achieves superior results with significantly lower computational complexity compared to state-of-the-art models. The code is available at <span><span>https://github.com/satvio/OKDN</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105411"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
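The 1D large-kernel idea behind the OLK block can be illustrated with depthwise k x 1 and 1 x k convolutions that together cover a k x k receptive field at roughly 2/k of the cost of a full k x k depthwise kernel. The block below is a minimal sketch under that reading; the kernel size, pointwise mixing, and residual form are assumptions, and the real OLK block adds its own attention mechanism.

```python
import torch
import torch.nn as nn

class OneDLargeKernelBlock(nn.Module):
    """Depthwise 1D decomposition of a large kernel: a k x 1 followed by a 1 x k
    depthwise convolution covers a k x k receptive field at a fraction of the
    cost. This sketch shows only the 1D large-kernel idea, not the full OLK block."""
    def __init__(self, channels, k=31):
        super().__init__()
        self.vert = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels)
        self.horz = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels)
        self.pw = nn.Conv2d(channels, channels, 1)        # pointwise mixing across channels

    def forward(self, x):
        return x + self.pw(self.horz(self.vert(x)))       # residual connection

x = torch.randn(1, 48, 64, 64)
print(OneDLargeKernelBlock(48)(x).shape)                  # torch.Size([1, 48, 64, 64])
```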
A fast and lightweight train image fault detection model based on convolutional neural networks
IF 4.2 · CAS Zone 3 · Computer Science
Image and Vision Computing Pub Date: 2025-02-01, DOI: 10.1016/j.imavis.2024.105380
Longxin Zhang, Wenliang Zeng, Peng Zhou, Xiaojun Deng, Jiayu Wu, Hong Wen
{"title":"A fast and lightweight train image fault detection model based on convolutional neural networks","authors":"Longxin Zhang,&nbsp;Wenliang Zeng,&nbsp;Peng Zhou,&nbsp;Xiaojun Deng,&nbsp;Jiayu Wu,&nbsp;Hong Wen","doi":"10.1016/j.imavis.2024.105380","DOIUrl":"10.1016/j.imavis.2024.105380","url":null,"abstract":"<div><div>Trains play a vital role in the life of residents. Fault detection of trains is essential to ensuring their safe operation. Aiming at the problems of many parameters, slow detection speed, and low detection accuracy of the current train image fault detection model, a fast and lightweight train image fault detection model using convolutional neural network (FL-TINet) is proposed in this study. First, the joint depthwise separable convolution and divided-channel convolution strategy are applied to the feature extraction network in FL-TINet to reduce the number of parameters and computation amount in the backbone network, thereby increasing the detection speed. Second, a mixed attention mechanism is designed to make FL-TINet focus on key features. Finally, an improved discrete K-means clustering algorithm is designed to set the anchor boxes so that the anchor box can cover the object better, thereby improving the detection accuracy. Experimental results on PASCAL 2012 and train datasets show that FL-TINet can detect faults at 119 frames per second. Compared with the state-of-the-art CenterNet, RetinaNet, SSD, Faster R-CNN, MobileNet, YOLOv3, YOLOv4, YOLOv7-Tiny, YOLOv8_n and YOLOX-Tiny models, FL-TINet’s detection speed is increased by 96.37% on average, and it has higher detection accuracy and fewer parameters. The robustness test shows that FL-TINet can resist noise and illumination changes well.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105380"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
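For reference, the sketch below implements the standard IoU-based k-means anchor clustering popularized by YOLOv2/v3, which is the baseline that FL-TINet's improved discrete K-means refines; the improved variant itself is not reproduced here.

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Standard IoU-based k-means for anchor boxes (the common YOLO baseline,
    not FL-TINet's improved discrete variant).
    wh: (N, 2) array of ground-truth box widths and heights."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # IoU between every box and every center, assuming top-left-aligned boxes.
        inter = np.minimum(wh[:, None, :], centers[None, :, :]).prod(axis=2)
        union = wh.prod(axis=1)[:, None] + centers.prod(axis=1)[None, :] - inter
        assign = (inter / union).argmax(axis=1)          # nearest center by 1 - IoU
        new_centers = np.array([wh[assign == i].mean(axis=0) if np.any(assign == i)
                                else centers[i] for i in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers[np.argsort(centers.prod(axis=1))]     # sorted by anchor area

boxes_wh = np.abs(np.random.randn(500, 2)) * 50 + 10
print(kmeans_anchors(boxes_wh, k=6).round(1))
```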
EHGFormer: An efficient hypergraph-injected transformer for 3D human pose estimation
IF 4.2 · CAS Zone 3 · Computer Science
Image and Vision Computing Pub Date: 2025-02-01, DOI: 10.1016/j.imavis.2025.105425
Siyuan Zheng, Weiqun Cao
{"title":"EHGFormer: An efficient hypergraph-injected transformer for 3D human pose estimation","authors":"Siyuan Zheng,&nbsp;Weiqun Cao","doi":"10.1016/j.imavis.2025.105425","DOIUrl":"10.1016/j.imavis.2025.105425","url":null,"abstract":"<div><div>Recently, Transformer-based approaches have demonstrated remarkable success in 3D human pose estimation. However, these methods usually overlook crucial structural information inherent in human skeletal connections. In this paper, we propose a novel hypergraph-injected Transformer-based architecture(EHGFormer). The spatial feature extractor in our model decomposes joint relationships into first-order (joint-to-joint) and potential higher-order (joint-to-hyperedge) connections, and the attention mechanism of the spatial Transformer block, which integrates these relationships, forms the hypergraph-injected spatial attention. In addition, to address the trade-off between inference efficiency and estimation accuracy introduced by the hypergraph-injected spatial attention module, we design a multi-start grouped downsampling and restoration strategy. With this strategy, consistency in the sequence’s input and output order is maintained, while the temporal receptive field is expanded without requiring additional parameters. Furthermore, we propose a hierarchical feature distillation scheme, which applies different distillation strategies for tokens from various positions of the teacher network. This allows the narrower student network to selectively learn from the teacher network, yet improving its accuracy compared to existing feature distillation methods. Extensive experiments show that the proposed method achieves state-of-the-art performance on two benchmark datasets: Human3.6M and MPI-INF-3DHP. Code and models will be available at: <span><span>https://github.com/Brian417-cup/EHGFormer</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105425"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
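The joint-to-hyperedge connections can be pictured as aggregation through an incidence matrix that groups joints into body-part hyperedges. The sketch below uses an assumed 17-joint grouping and a simple aggregate-and-project layer; EHGFormer's actual hypergraph-injected attention integrates such relationships into the Transformer's attention computation rather than using this standalone layer.

```python
import torch
import torch.nn as nn

# Illustrative incidence matrix H (joints x hyperedges) for a 17-joint skeleton:
# each column groups the joints of one body part. The grouping is an assumption
# for this sketch, not EHGFormer's definition.
parts = [[0, 1, 2, 8, 9, 10], [11, 12, 13], [14, 15, 16], [4, 5, 6], [1, 2, 3, 7]]
H = torch.zeros(17, len(parts))
for e, joints in enumerate(parts):
    H[joints, e] = 1.0

class HyperedgeAggregation(nn.Module):
    """Aggregates joint features into hyperedge features and projects them back,
    giving every joint access to higher-order (part-level) context."""
    def __init__(self, dim, incidence):
        super().__init__()
        self.register_buffer("H", incidence / incidence.sum(0, keepdim=True))  # degree-normalized
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                     # x: (B, J, C) per-joint features
        edge_feat = torch.einsum("je,bjc->bec", self.H, x)    # joint -> hyperedge mean
        back = torch.einsum("je,bec->bjc", self.H, edge_feat) # hyperedge -> joint broadcast
        return x + self.proj(back)

x = torch.randn(4, 17, 64)
print(HyperedgeAggregation(64, H)(x).shape)                   # torch.Size([4, 17, 64])
```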
Learning to estimate 3D interactive two-hand poses with attention perception
IF 4.2 · CAS Zone 3 · Computer Science
Image and Vision Computing Pub Date: 2025-02-01, DOI: 10.1016/j.imavis.2024.105398
Wai Keung Wong, Hao Liang, Hongkun Sun, Weijun Sun, Haoliang Yuan, Shuping Zhao, Lunke Fei
{"title":"Learning to estimate 3D interactive two-hand poses with attention perception","authors":"Wai Keung Wong ,&nbsp;Hao Liang ,&nbsp;Hongkun Sun ,&nbsp;Weijun Sun ,&nbsp;Haoliang Yuan ,&nbsp;Shuping Zhao ,&nbsp;Lunke Fei","doi":"10.1016/j.imavis.2024.105398","DOIUrl":"10.1016/j.imavis.2024.105398","url":null,"abstract":"<div><div>3D hand pose estimation has attracted increasing research interest due to its broad real-world applications. While encouraging performance has been achieved in single-hand cases, 3D hand-pose estimation of two interactive hands from RGB images still faces two challenging problems: severe intra-hand and inter-hand occlusion and ill-posed projection from 2D hand images to 3D hand joints. To address this, in this paper, we propose a Decoupled Dual-branch Attention Network (DDANet) for 3D interactive two-hand pose estimation. First, we extract multiscale shallow feature maps via a ResNet backbone. Then, we simultaneously learn the context-aware 2D visual and 3D depth features of two interactive hands via two separate attention branches to extensively exploit the two-hand occluded semantic information from RGB images. After that, we define learnable feature vectors to perceive the 3D spatial positions of two-hand joints by interacting them with both 2D visual and 3D depth feature maps. In this way, ill-posed hand-joint positions can be characterized in 3D spaces. Furthermore, we refine the 3D hand-joint spatial positions by capturing the underlying hand-joint connections via GCN learning for 3D two-hand pose estimation. Experimental results on five public datasets show that the proposed DDANet outperforms most state-of-the-art methods with promising generalization.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105398"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
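The GCN refinement step mentioned in the DDANet abstract boils down to propagating per-joint features over a fixed skeletal adjacency. Below is a generic one-layer graph convolution with a symmetrically normalized adjacency; the toy five-joint chain and the layer form are assumptions, not the paper's design (two interacting hands would use the full 42-joint graph).

```python
import torch
import torch.nn as nn

class HandJointGCN(nn.Module):
    """One graph-convolution layer over a fixed hand-joint adjacency, of the kind
    used to refine per-joint features with skeletal connectivity."""
    def __init__(self, dim, adj):
        super().__init__()
        a = adj + torch.eye(adj.shape[0])                   # add self-loops
        d_inv_sqrt = a.sum(1).pow(-0.5)
        self.register_buffer("A", d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :])  # D^-1/2 (A+I) D^-1/2
        self.lin = nn.Linear(dim, dim)

    def forward(self, x):                                   # x: (B, J, C)
        return torch.relu(self.lin(torch.einsum("ij,bjc->bic", self.A, x)))

# Toy 5-joint chain (e.g., one finger).
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
adj = torch.zeros(5, 5)
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0

print(HandJointGCN(32, adj)(torch.randn(2, 5, 32)).shape)   # torch.Size([2, 5, 32])
```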
Domain adaptive object detection via synthetically generated intermediate domain and progressive feature alignment
IF 4.2 · CAS Zone 3 · Computer Science
Image and Vision Computing Pub Date: 2025-02-01, DOI: 10.1016/j.imavis.2024.105404
Ding Gao, Qian Wang, Jian Yang, Junlong Wu
{"title":"Domain adaptive object detection via synthetically generated intermediate domain and progressive feature alignment","authors":"Ding Gao ,&nbsp;Qian Wang ,&nbsp;Jian Yang ,&nbsp;Junlong Wu","doi":"10.1016/j.imavis.2024.105404","DOIUrl":"10.1016/j.imavis.2024.105404","url":null,"abstract":"<div><div>The domain adaptive object detection problem is to accurately identify objects within varying target domains. The complexity arises from the discrepancies in weather conditions or diverse scenarios across different domains, which would significantly hinder the object detection model to generalize the learned knowledge from the source domain to the target domains. Currently, the teacher-student model with feature alignment is widely used to address this problem. However, most researchers only use the data from the source and target domains. To make the best use of the available data, we propose to generate the intermediate domain images by using a generative model and incorporate these images into the teacher-student model. The intermediate domain inherits the labels from the source domain and has a similar distribution to that of the target domain. To balance the influences of data from different domains on the model, we introduce the Progressive Feature Alignment (PFA) module. This strategy refines the feature alignment process. We align the source domain with the target domain by using a larger weight factor. For the intermediate domain, we use a lower weight factor for alignment with the target domain. The proposed method could significantly improve the performance of domain adaptive object detection as indicated in our experimental results: We achieve 47.9% mAP on Foggy Cityscape (from Cityscape), 63.2% AP on Cityscape (from Sim10k), and 50.6% AP on Cityscape (from KITTI).</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105404"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
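The Progressive Feature Alignment idea, a larger alignment weight for source-to-target and a smaller one for intermediate-to-target, can be sketched with a gradient-reversal-based domain discriminator whose per-domain losses are scaled by different weight factors. The discriminator, features, and weight values below are placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used in adversarial feature alignment."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

def alignment_loss(disc, feat, domain_label, weight):
    """Adversarial alignment term for one domain, scaled by a PFA-style weight
    factor (larger for source->target, smaller for intermediate->target)."""
    logits = disc(GradReverse.apply(feat, 1.0))
    target = torch.full_like(logits, float(domain_label))
    return weight * nn.functional.binary_cross_entropy_with_logits(logits, target)

disc = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
f_src, f_mid, f_tgt = (torch.randn(8, 256, requires_grad=True) for _ in range(3))

loss = (alignment_loss(disc, f_src, 0, weight=1.0) +   # source vs. target: larger weight
        alignment_loss(disc, f_mid, 0, weight=0.5) +   # intermediate vs. target: smaller weight
        alignment_loss(disc, f_tgt, 1, weight=1.0))
loss.backward()
```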
Adversarially Enhanced Learning (AEL): Robust lightweight deep learning approach for radiology image classification against adversarial attacks
IF 4.2 · CAS Zone 3 · Computer Science
Image and Vision Computing Pub Date: 2025-02-01, DOI: 10.1016/j.imavis.2024.105405
Anshu Singh, Maheshwari Prasad Singh, Amit Kumar Singh
{"title":"Adversarially Enhanced Learning (AEL): Robust lightweight deep learning approach for radiology image classification against adversarial attacks","authors":"Anshu Singh,&nbsp;Maheshwari Prasad Singh,&nbsp;Amit Kumar Singh","doi":"10.1016/j.imavis.2024.105405","DOIUrl":"10.1016/j.imavis.2024.105405","url":null,"abstract":"<div><div>Deep learning models perform well in medical image classification, particularly in radiology. However, their vulnerability to adversarial attacks raises concerns about robustness and reliability in clinical applications. To address these concerns, a novel approach for radiology image classification, referred to as Adversarially Enhanced Learning (AEL) has been proposed. This approach introduces a novel deep learning model ConvDepth-InceptNet designed to enhance the robustness and accuracy in radiology image classification through three key phases. In Phase 1, adversarial images are generated to deceive the classifier using the proposed model initially trained for classification. Phase 2 entails re-training the model with a mix of clean and adversarial images, improving its robustness by functioning as a discriminator for both types of images. Phase 3 refines adversarial images with Total Variation Minimization (TVM) denoising before classification by re-trained model. Pre-attack analysis with VGG16, ResNet-50, and XceptionNet achieved 98% accuracy with just 10,946 parameters. Post-attack analysis subjected to attacks such as Fast Gradient Sign Method, Basic Iterative Method, and Projected Gradient Descent, yields an average adversarial accuracy of 94.8%, with standard deviation of 1.6%, and an attack success rate of 3.3%. Comparative analysis with ResNet50, VGG16, and InceptionV3 indicates minimal performance drops. Furthermore, post-defense analysis shows that the adversarial images refined with TVM denoising are evaluated with re-trained model, achieving an outstanding ac- curacy of 98.83%. The combination of denoising techniques (Phase 3) and robust re-training (Phase 2) enhances robustness by providing a layered defense mechanism. The analysis validates the robustness of this approach.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105405"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
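Phases 1 and 3 of AEL rest on two standard components, FGSM adversarial example generation and Total Variation Minimization denoising. The hedged sketch below reproduces them with a stand-in classifier; the epsilon, the TV weight, and the model are illustrative, and the paper additionally uses BIM and PGD attacks, which are not shown.

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.restoration import denoise_tv_chambolle

def fgsm(model, x, y, eps=4 / 255):
    """Fast Gradient Sign Method: perturb the input along the sign of the loss
    gradient. Epsilon is a typical value, not the paper's setting."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def tvm_denoise(x, weight=0.1):
    """Total Variation Minimization denoising (Phase 3), applied before feeding
    images to the re-trained classifier; uses scikit-image's Chambolle solver."""
    arr = x.permute(0, 2, 3, 1).cpu().numpy()                # NCHW -> NHWC
    den = [denoise_tv_chambolle(a, weight=weight, channel_axis=-1) for a in arr]
    return torch.tensor(np.stack(den), dtype=x.dtype).permute(0, 3, 1, 2)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))   # stand-in classifier
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 2, (4,))
x_adv = fgsm(model, x, y)
x_clean = tvm_denoise(x_adv)
print(x_adv.shape, x_clean.shape)
```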
Rotating-YOLO: A novel YOLO model for remote sensing rotating object detection
IF 4.2 · CAS Zone 3 · Computer Science
Image and Vision Computing Pub Date: 2025-02-01, DOI: 10.1016/j.imavis.2024.105397
Zhiguo Liu, Yuqi Chen, Yuan Gao
{"title":"Rotating-YOLO: A novel YOLO model for remote sensing rotating object detection","authors":"Zhiguo Liu,&nbsp;Yuqi Chen,&nbsp;Yuan Gao","doi":"10.1016/j.imavis.2024.105397","DOIUrl":"10.1016/j.imavis.2024.105397","url":null,"abstract":"<div><div>Satellite remote sensing images are characterized by large rotation angles and dense targets, which result in less than satisfactory detection accuracy for existing remote sensing target detectors. To tackle these challenges, this paper introduces an object detection algorithm called Rotating-YOLO, which ensures the detection accuracy of remote sensing targets while also reducing the number of model parameters. Initially, an efficient multi-branch feature fusion (EMFF) is designed to filter out redundant feature information, thereby enhancing the model’s efficiency in feature extraction and fusion. Subsequently, to address the issue of sample imbalance in remote sensing images, this paper introduces angular parameters and adopts rotated bounding boxes to decrease the interference of background noise on the detection task. Additionally, the rotated bounding boxes are transformed into Gaussian distributions, and a new loss function named GaussianLoss is designed to calculate the loss between Gaussian distributions, assisting the model in better learning the size and orientation features of targets, thus improving detection accuracy. Finally, the efficient multi-scale attention (EMA) mechanism is embedded in the model’s neck in a residual form, and low-level feature extraction layers and corresponding detection heads are added to the backbone network to enhance the detection accuracy of small targets. Experimental results demonstrate that compared to the baseline model YOLOv8, the Rotating-YOLO model has reduced the number of parameters by 33.25% and increased the mean average precision (mAP) by 1.4%.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105397"},"PeriodicalIF":4.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
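The conversion of rotated boxes into Gaussian distributions that GaussianLoss builds on is well defined: the mean is the box center and the covariance is R diag((w/2)^2, (h/2)^2) R^T, as in GWD/KLD-style losses. The helper below performs that conversion; the distance GaussianLoss actually computes between the resulting Gaussians is not reproduced here.

```python
import torch

def rbox_to_gaussian(rbox):
    """Converts rotated boxes (cx, cy, w, h, theta) into 2D Gaussians: the mean
    is the box center and the covariance is R * diag((w/2)^2, (h/2)^2) * R^T."""
    cx, cy, w, h, theta = rbox.unbind(-1)
    cos, sin = torch.cos(theta), torch.sin(theta)
    R = torch.stack([cos, -sin, sin, cos], dim=-1).view(*rbox.shape[:-1], 2, 2)
    S = torch.diag_embed(torch.stack([w / 2, h / 2], dim=-1) ** 2)
    mean = torch.stack([cx, cy], dim=-1)
    cov = R @ S @ R.transpose(-1, -2)
    return mean, cov

boxes = torch.tensor([[50.0, 80.0, 40.0, 10.0, 0.3]])
mu, sigma = rbox_to_gaussian(boxes)
print(mu.shape, sigma.shape)    # torch.Size([1, 2]) torch.Size([1, 2, 2])
```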