Image and Vision Computing: Latest Articles

CF-SOLT: Real-time and accurate traffic accident detection using correlation filter-based tracking
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing Pub Date : 2024-11-14 DOI: 10.1016/j.imavis.2024.105336
Yingjie Xia , Nan Qian , Lin Guo , Zheming Cai
{"title":"CF-SOLT: Real-time and accurate traffic accident detection using correlation filter-based tracking","authors":"Yingjie Xia ,&nbsp;Nan Qian ,&nbsp;Lin Guo ,&nbsp;Zheming Cai","doi":"10.1016/j.imavis.2024.105336","DOIUrl":"10.1016/j.imavis.2024.105336","url":null,"abstract":"<div><div>Traffic accident detection using video surveillance is valuable research work in intelligent transportation systems. It is useful for responding to traffic accidents promptly that can avoid traffic jam or prevent secondary accident. In traffic accident detection, tracking occluded vehicles in real-time and accurately is one of the major sticking points for practical applications. In order to improve the tracking of occluded vehicles for traffic accident detection, this paper proposes a simple online tracking scheme with correlation filters (CF-SOLT). The CF-SOLT method utilizes a correlation filter-based auxiliary tracker to assist the main tracker. This auxiliary tracker helps prevent target ID switching caused by occlusion, enabling accurate vehicle tracking in occluded scenes. Based on the tracking results, a precise traffic accident detection algorithm is developed by integrating behavior analysis of both vehicles and pedestrians. The improved accident detection algorithm with the correlation filter-based auxiliary tracker can provide shorter response time, enabling quick identification and detection of traffic accidents. The experiments are conducted on the VisDrone2019, MOT-Traffic and Dataset of accident to evaluate the performances metrics of MOTA, IDF1, FPS, precision, response time and others. The results show that CF-SOLT improves MOTA and IDF1 by 5.3% and 6.7%, accident detection precision by 25%, and reduces response time by 56 s.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"152 ","pages":"Article 105336"},"PeriodicalIF":4.2,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TransWild: Enhancing 3D interacting hands recovery in the wild with IoU-guided Transformer
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing Pub Date : 2024-11-12 DOI: 10.1016/j.imavis.2024.105316
Wanru Zhu , Yichen Zhang , Ke Chen , Lihua Guo
{"title":"TransWild: Enhancing 3D interacting hands recovery in the wild with IoU-guided Transformer","authors":"Wanru Zhu ,&nbsp;Yichen Zhang ,&nbsp;Ke Chen ,&nbsp;Lihua Guo","doi":"10.1016/j.imavis.2024.105316","DOIUrl":"10.1016/j.imavis.2024.105316","url":null,"abstract":"<div><div>The recovery of 3D interacting hands meshes in the wild (ITW) is crucial for 3D full-body mesh reconstruction, especially when limited 3D annotations are available. The recent ITW interacting hands recovery method brings two hands to a shared 2D scale space and achieves effective learning of ITW datasets. However, they lack the deep exploitation of the intrinsic interaction dynamics of hands. In this work, we propose TransWild, a novel framework for 3D interactive hand mesh recovery that leverages a weight-shared Intersection-of-Union (IoU) guided Transformer for feature interaction. Based on harmonizing ITW and MoCap datasets within a unified 2D scale space, our hand feature interaction mechanism powered by an IoU-guided Transformer enables a more accurate estimation of interacting hands. This innovation stems from the observation that hand detection yields valuable IoU of two hands bounding box, therefore, an IOU-guided Transformer can significantly enrich the Transformer’s ability to decode and integrate these insights into the interactive hand recovery process. To ensure consistent training outcomes, we have developed a strategy for training with augmented ground truth bounding boxes to address inherent variability. Quantitative evaluations across two prominent benchmarks for 3D interacting hands underscore our method’s superior performance. The code will be released after acceptance.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"152 ","pages":"Article 105316"},"PeriodicalIF":4.2,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Machine learning applications in breast cancer prediction using mammography
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing Pub Date : 2024-11-10 DOI: 10.1016/j.imavis.2024.105338
G.M. Harshvardhan , Kei Mori , Sarika Verma , Lambros Athanasiou
{"title":"Machine learning applications in breast cancer prediction using mammography","authors":"G.M. Harshvardhan ,&nbsp;Kei Mori ,&nbsp;Sarika Verma ,&nbsp;Lambros Athanasiou","doi":"10.1016/j.imavis.2024.105338","DOIUrl":"10.1016/j.imavis.2024.105338","url":null,"abstract":"<div><div>Breast cancer is the second leading cause of cancer-related deaths among women. Early detection of lumps and subsequent risk assessment significantly improves prognosis. In screening mammography, radiologist interpretation of mammograms is prone to high error rates and requires extensive manual effort. To this end, several computer-aided diagnosis methods using machine learning have been proposed for automatic detection of breast cancer in mammography. In this paper, we provide a comprehensive review and analysis of these methods and discuss practical issues associated with their reproducibility. We aim to aid the readers in choosing the appropriate method to implement and we guide them towards this purpose. Moreover, an effort is made to re-implement a sample of the presented methods in order to highlight the importance of providing technical details associated with those methods. Advancing the domain of breast cancer pathology classification using machine learning involves the availability of public databases and development of innovative methods. Although there is significant progress in both areas, more transparency in the latter would boost the domain progress.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"152 ","pages":"Article 105338"},"PeriodicalIF":4.2,"publicationDate":"2024-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Channel and Spatial Enhancement Network for human parsing
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing Pub Date : 2024-11-08 DOI: 10.1016/j.imavis.2024.105332
Kunliang Liu , Rize Jin , Yuelong Li , Jianming Wang , Wonjun Hwang
{"title":"Channel and Spatial Enhancement Network for human parsing","authors":"Kunliang Liu ,&nbsp;Rize Jin ,&nbsp;Yuelong Li ,&nbsp;Jianming Wang ,&nbsp;Wonjun Hwang","doi":"10.1016/j.imavis.2024.105332","DOIUrl":"10.1016/j.imavis.2024.105332","url":null,"abstract":"<div><div>The dominant backbones of neural networks for scene parsing consist of multiple stages, where feature maps in different stages often contain varying levels of spatial and semantic information. High-level features convey more semantics and fewer spatial details, while low-level features possess fewer semantics and more spatial details. Consequently, there are semantic-spatial gaps among features at different levels, particularly in human parsing tasks. Many existing approaches directly upsample multi-stage features and aggregate them through addition or concatenation, without addressing the semantic-spatial gaps present among these features. This inevitably leads to spatial misalignment, semantic mismatch, and ultimately misclassification in parsing, especially for human parsing that demands more semantic information and more fine details of feature maps for the reason of intricate textures, diverse clothing styles, and heavy scale variability across different human parts. In this paper, we effectively alleviate the long-standing challenge of addressing semantic-spatial gaps between features from different stages by innovatively utilizing the subtraction and addition operations to recognize the semantic and spatial differences and compensate for them. Based on these principles, we propose the Channel and Spatial Enhancement Network (CSENet) for parsing, offering a straightforward and intuitive solution for addressing semantic-spatial gaps via injecting high-semantic information to lower-stage features and vice versa, introducing fine details to higher-stage features. Extensive experiments on three dense prediction tasks have demonstrated the efficacy of our method. Specifically, our method achieves the best performance on the LIP and CIHP datasets and we also verify the generality of our method on the ADE20K dataset.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"152 ","pages":"Article 105332"},"PeriodicalIF":4.2,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Non-negative subspace feature representation for few-shot learning in medical imaging
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing Pub Date : 2024-11-07 DOI: 10.1016/j.imavis.2024.105334
Keqiang Fan, Xiaohao Cai, Mahesan Niranjan
{"title":"Non-negative subspace feature representation for few-shot learning in medical imaging","authors":"Keqiang Fan,&nbsp;Xiaohao Cai,&nbsp;Mahesan Niranjan","doi":"10.1016/j.imavis.2024.105334","DOIUrl":"10.1016/j.imavis.2024.105334","url":null,"abstract":"<div><div>Unlike typical visual scene recognition tasks, where massive datasets are available to train deep neural networks (DNNs), medical image diagnosis using DNNs often faces challenges due to data scarcity. In this paper, we investigate the effectiveness of data-based few-shot learning in medical imaging by exploring different data attribute representations in a low-dimensional space. We introduce different types of non-negative matrix factorization (NMF) in few-shot learning to investigate the information preserved in the subspace resulting from dimensionality reduction, which is crucial to mitigate the data scarcity problem in medical image classification. Extensive empirical studies are conducted in terms of validating the effectiveness of NMF, especially its supervised variants (e.g., discriminative NMF, and supervised and constrained NMF with sparseness), and the comparison with principal component analysis (PCA), i.e., the collaborative representation-based dimensionality reduction technique derived from eigenvectors. With 14 different datasets covering 11 distinct illness categories, thorough experimental results and comparison with related techniques demonstrate that NMF is a competitive alternative to PCA for few-shot learning in medical imaging, and the supervised NMF algorithms are more discriminative in the subspace with greater effectiveness. Furthermore, we show that the part-based representation of NMF, especially its supervised variants, is dramatically impactful in detecting lesion areas in medical imaging with limited samples.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"152 ","pages":"Article 105334"},"PeriodicalIF":4.2,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RGB-T tracking with frequency hybrid awareness
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing Pub Date : 2024-11-06 DOI: 10.1016/j.imavis.2024.105330
Lei Lei, Xianxian Li
{"title":"RGB-T tracking with frequency hybrid awareness","authors":"Lei Lei,&nbsp;Xianxian Li","doi":"10.1016/j.imavis.2024.105330","DOIUrl":"10.1016/j.imavis.2024.105330","url":null,"abstract":"<div><div>Recently, impressive progress has been made with transformer-based RGB-T trackers due to the transformer’s effectiveness in capturing low-frequency information (i.e., high-level semantic information). However, some studies have revealed that the transformer exhibits limitations in capturing high-frequency information (i.e., low-level texture and edge details), thereby restricting the tracker’s capacity to precisely match target details within the search area. To address this issue, we propose a frequency hybrid awareness modeling RGB-T tracker, abbreviated as FHAT. Specifically, FHAT combines the advantages of convolution and maximum pooling in capturing high-frequency information on the architecture of transformer. In this way, it strengthens the high-frequency features and enhances the model’s perception of detailed information. Additionally, to enhance the complementary effect between the two modalities, the tracker utilizes low-frequency information from both modalities for modality interaction, which can avoid interaction errors caused by inconsistent local details of the multimodality. Furthermore, these high-frequency information and interaction low-frequency information will then be fused, allowing the model to adaptively enhance the frequency features of the modal expression. Through extensive experiments on two mainstream RGB-T tracking benchmarks, our method demonstrates competitive performance.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"152 ","pages":"Article 105330"},"PeriodicalIF":4.2,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Text-augmented Multi-Modality contrastive learning for unsupervised visible-infrared person re-identification
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing Pub Date : 2024-11-05 DOI: 10.1016/j.imavis.2024.105310
Rui Sun , Guoxi Huang , Xuebin Wang , Yun Du , Xudong Zhang
{"title":"Text-augmented Multi-Modality contrastive learning for unsupervised visible-infrared person re-identification","authors":"Rui Sun ,&nbsp;Guoxi Huang ,&nbsp;Xuebin Wang ,&nbsp;Yun Du ,&nbsp;Xudong Zhang","doi":"10.1016/j.imavis.2024.105310","DOIUrl":"10.1016/j.imavis.2024.105310","url":null,"abstract":"<div><div>Visible-infrared person re-identification holds significant implications for intelligent security. Unsupervised methods can reduce the gap of different modalities without labels. Most previous unsupervised methods only train their models with image information, so that the model cannot obtain powerful deep semantic information. In this paper, we leverage CLIP to extract deep text information. We propose a Text–Image Alignment (TIA) module to align the image and text information and effectively bridge the gap between visible and infrared modality. We produce a Local–Global Image Match (LGIM) module to find homogeneous information. Specifically, we employ the Hungarian algorithm and Simulated Annealing (SA) algorithm to attain original information from image features while mitigating the interference of heterogeneous information. Additionally, we design a Changeable Cross-modality Alignment Loss (CCAL) to enable the model to learn modality-specific features during different training stages. Our method performs well and attains powerful robustness by targeted learning. Extensive experiments demonstrate the effectiveness of our approach, our method achieves a rank-1 accuracy that exceeds state-of-the-art approaches by approximately 10% on the RegDB.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"152 ","pages":"Article 105310"},"PeriodicalIF":4.2,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fine-grained semantic oriented embedding set alignment for text-based person search
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing Pub Date : 2024-11-05 DOI: 10.1016/j.imavis.2024.105309
Jiaqi Zhao , Ao Fu , Yong Zhou , Wen-liang Du , Rui Yao
{"title":"Fine-grained semantic oriented embedding set alignment for text-based person search","authors":"Jiaqi Zhao ,&nbsp;Ao Fu ,&nbsp;Yong Zhou ,&nbsp;Wen-liang Du ,&nbsp;Rui Yao","doi":"10.1016/j.imavis.2024.105309","DOIUrl":"10.1016/j.imavis.2024.105309","url":null,"abstract":"<div><div>Text-based person search aims to retrieve images of a person that are highly semantically relevant to a given textual description. The difficulty of this retrieval task is modality heterogeneity and fine-grained matching. Most existing methods only consider the alignment using global features, ignoring the fine-grained matching problem. The cross-modal attention interactions are popularly used for image patches and text markers for direct alignment. However, cross-modal attention may cause a huge overhead in the reasoning stage and cannot be applied in actual scenarios. In addition, it is unreasonable to perform patch-token alignment, since image patches and text tokens do not have complete semantic information. This paper proposes an Embedding Set Alignment (ESA) module for fine-grained alignment. The module can preserve fine-grained semantic information by merging token-level features into embedding sets. The ESA module benefits from pre-trained cross-modal large models, and it can be combined with the backbone non-intrusively and trained in an end-to-end manner. In addition, an Adaptive Semantic Margin (ASM) loss is designed to describe the alignment of embedding sets, instead of adapting a loss function with a fixed margin. Extensive experiments demonstrate that our proposed fine-grained semantic embedding set alignment method achieves state-of-the-art performance on three popular benchmark datasets, surpassing the previous best methods.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"152 ","pages":"Article 105309"},"PeriodicalIF":4.2,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SAFENet: Semantic-Aware Feature Enhancement Network for unsupervised cross-domain road scene segmentation
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing Pub Date : 2024-11-04 DOI: 10.1016/j.imavis.2024.105318
Dexin Ren , Minxian Li , Shidong Wang , Mingwu Ren , Haofeng Zhang
{"title":"SAFENet: Semantic-Aware Feature Enhancement Network for unsupervised cross-domain road scene segmentation","authors":"Dexin Ren ,&nbsp;Minxian Li ,&nbsp;Shidong Wang ,&nbsp;Mingwu Ren ,&nbsp;Haofeng Zhang","doi":"10.1016/j.imavis.2024.105318","DOIUrl":"10.1016/j.imavis.2024.105318","url":null,"abstract":"<div><div>Unsupervised cross-domain road scene segmentation has attracted substantial interest because of its capability to perform segmentation on new and unlabeled domains, thereby reducing the dependence on expensive manual annotations. This is achieved by leveraging networks trained on labeled source domains to classify images on unlabeled target domains. Conventional techniques usually use adversarial networks to align inputs from the source and the target in either of their domains. However, these approaches often fall short in effectively integrating information from both domains due to Alignment in each space usually leads to bias problems during feature learning. To overcome these limitations and enhance cross-domain interaction while mitigating overfitting to the source domain, we introduce a novel framework called Semantic-Aware Feature Enhancement Network (SAFENet) for Unsupervised Cross-domain Road Scene Segmentation. SAFENet incorporates the Semantic-Aware Enhancement (SAE) module to amplify the importance of class information in segmentation tasks and uses the semantic space as a new domain to guide the alignment of the source and target domains. Additionally, we integrate Adaptive Instance Normalization with Momentum (AdaIN-M) techniques, which convert the source domain image style to the target domain image style, thereby reducing the adverse effects of source domain overfitting on target domain segmentation performance. Moreover, SAFENet employs a Knowledge Transfer (KT) module to optimize network architecture, enhancing computational efficiency during testing while maintaining the robust inference capabilities developed during training. To further improve the segmentation performance, we further employ Curriculum Learning, a self-training mechanism that uses pseudo-labels derived from the target domain to iteratively refine the network. Comprehensive experiments on three well-known datasets, “Synthia<span><math><mo>→</mo></math></span>Cityscapes” and “GTA5<span><math><mo>→</mo></math></span>Cityscapes”, demonstrate the superior performance of our method. In-depth examinations and ablation studies verify the efficacy of each module within the proposed method.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"152 ","pages":"Article 105318"},"PeriodicalIF":4.2,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142594063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Attention enhanced machine instinctive vision with human-inspired saliency detection
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing Pub Date : 2024-11-04 DOI: 10.1016/j.imavis.2024.105308
Habib Khan , Muhammad Talha Usman , Imad Rida , JaKeoung Koo
{"title":"Attention enhanced machine instinctive vision with human-inspired saliency detection","authors":"Habib Khan ,&nbsp;Muhammad Talha Usman ,&nbsp;Imad Rida ,&nbsp;JaKeoung Koo","doi":"10.1016/j.imavis.2024.105308","DOIUrl":"10.1016/j.imavis.2024.105308","url":null,"abstract":"<div><div>Salient object detection (SOD) enables machines to recognize and accurately segment visually prominent regions in images. Despite recent advancements, existing approaches often lack progressive fusion of low and high-level features, effective multi-scale feature handling, and precise boundary detection. Moreover, the robustness of these models under varied lighting conditions remains a concern. To overcome these challenges, we present Attention Enhanced Machine Instinctive Vision framework for SOD. The proposed framework leverages the strategy of Multi-stage Feature Refinement with Optimal Attentions-Driven Framework (MFRNet). The multi-level features are extracted from six stages of the EfficientNet-B7 backbone. This provides effective feature fusions of low and high-level details across various scales at the later stage of the framework. We introduce the Spatial-optimized Feature Attention (SOFA) module, which refines spatial features from three initial-stage feature maps. The extracted multi-scale features from the backbone are passed from the convolution feature transformation and spatial attention mechanisms to refine the low-level information. The SOFA module concatenates and upsamples these refined features, producing a comprehensive spatial representation of various levels. Moreover, the proposed Context-Aware Channel Refinement (CACR) module integrates dilated convolutions with optimized dilation rates followed by channel attention to capture multi-scale contextual information from the mature three layers. Furthermore, our progressive feature fusion strategy combines high-level semantic information and low-level spatial details through multiple residual connections, ensuring robust feature representation and effective gradient backpropagation. To enhance robustness, we train our network with augmented data featuring low and high brightness adjustments, improving its ability to handle diverse lighting conditions. Extensive experiments on four benchmark datasets — ECSSD, HKU-IS, DUTS, and PASCAL-S — validate the proposed framework’s effectiveness, demonstrating superior performance compared to existing SOTA methods in the domain. Code, qualitative results, and trained weights will be available at the link: <span><span>https://github.com/habib1402/MFRNet-SOD</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"152 ","pages":"Article 105308"},"PeriodicalIF":4.2,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142594062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0