Neural Networks — Latest Articles

L3Net: Localized and Layered Reparameterization for incremental learning
IF 6.0 · Q1 · Computer Science
Neural Networks Pub Date : 2025-03-24 DOI: 10.1016/j.neunet.2025.107420
Xuandi Luo , Huaidong Zhang , Yi Xie , Hongrui Zhang , Xuemiao Xu , Shengfeng He
Model-based class incremental learning (CIL) methods aim to address the challenge of catastrophic forgetting by retaining certain parameters and expanding the model architecture. However, retaining too many parameters can lead to an overly complex model, increasing inference overhead. Additionally, compressing these parameters to reduce the model size can result in performance degradation. To tackle these challenges, we propose a novel three-stage CIL framework called Localized and Layered Reparameterization for Incremental Learning (L3Net). The rationale behind our approach is to balance model complexity and performance by selectively expanding and optimizing critical components. Specifically, the framework introduces a Localized Dual-path Expansion structure, which allows the model to learn simultaneously from both old and new features by integrating a fusion selector after each convolutional layer. To further minimize potential conflicts between old and new features, we implement the Feature Selectors Gradient Resetting method, which sparsifies the fusion selectors and reduces the influence of redundant old features. Additionally, to address classification bias resulting from class imbalance, we design the Decoupled Balanced Distillation technique and apply Logit Adjustment to more effectively retain knowledge from the rehearsal set. Extensive experiments demonstrate that our L3Net framework outperforms state-of-the-art methods on widely used benchmarks, including CIFAR-100 and ImageNet-100/1000.
(Neural Networks, Volume 188, Article 107420)
Citations: 0
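The Logit Adjustment step mentioned in the abstract is a standard remedy for class-imbalanced classifiers; the paper's exact formulation is not given here, but a common form shifts each logit by the log of the class prior so that rare (e.g. rehearsal-set) classes are not drowned out. A minimal numpy sketch under that assumption (counts and temperature are illustrative):

```python
import numpy as np

def logit_adjust(logits, class_counts, tau=1.0):
    """Shift logits by -tau * log(prior): at inference this favors
    rare classes that the imbalanced training set under-represents."""
    priors = class_counts / class_counts.sum()
    return logits - tau * np.log(priors)

# Toy example: 3 classes, class 2 is rare.
counts = np.array([900.0, 90.0, 10.0])
logits = np.array([2.0, 2.0, 2.0])      # model is indifferent
adjusted = logit_adjust(logits, counts)
print(adjusted.argmax())                 # the rare class wins after adjustment
```

With equal class counts the adjustment is a constant shift and leaves the prediction unchanged, which is the sanity check one usually runs first.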
An information-theoretic approach for heterogeneous differentiable causal discovery
IF 6.0 · Q1 · Computer Science
Neural Networks Pub Date : 2025-03-24 DOI: 10.1016/j.neunet.2025.107417
Wanqi Zhou , Shuanghao Bai , Yuqing Xie , Yicong He , Qibin Zhao , Badong Chen
With the advancement of deep learning, a variety of differentiable causal discovery methods have emerged, attracting growing attention for their excellent scalability and interpretability. However, these methods often struggle with complex heterogeneous datasets that exhibit environmental diversity and are characterized by shifts in noise distribution. To this end, we introduce a novel information-theoretic approach designed to enhance the robustness of differentiable causal discovery methods. Specifically, we integrate Minimum Error Entropy (MEE) as an adaptive error regulator into the structure learning framework. MEE effectively reduces error variability across diverse samples, enabling our model to adapt dynamically to varying levels of complexity and noise. This adjustment significantly improves the precision and stability of the model. Extensive experiments on both synthetic and real-world datasets demonstrate significant performance enhancements over existing methods, affirming the effectiveness of our approach. The code is available at https://github.com/ElleZWQ/MHCD.
(Neural Networks, Volume 188, Article 107417)
Citations: 0
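Minimum Error Entropy comes from the information-theoretic learning literature: minimizing the quadratic Rényi entropy of the residuals is equivalent to maximizing the "information potential", a Gaussian-kernel sum over pairwise error differences, which concentrates the error distribution regardless of its mean. A small numpy sketch of that quantity (the kernel bandwidth sigma is an assumed hyperparameter, not taken from the paper):

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    """V(e) = (1/N^2) * sum_ij G_sigma(e_i - e_j); maximizing V
    minimizes the quadratic Renyi entropy of the error distribution."""
    diffs = errors[:, None] - errors[None, :]            # pairwise differences
    kernel = np.exp(-diffs**2 / (2.0 * sigma**2))
    return kernel.mean()

def mee_loss(errors, sigma=1.0):
    # Minimizing -log V(e) pulls the errors together, regardless of their mean.
    return -np.log(information_potential(errors, sigma))

tight = np.array([0.1, 0.12, 0.09, 0.11])   # concentrated errors
spread = np.array([-2.0, 1.5, 0.3, 3.0])    # heavy-tailed errors
print(mee_loss(tight) < mee_loss(spread))   # True: tighter errors -> lower MEE loss
```

Because only error *differences* enter the kernel, the criterion is insensitive to a constant bias, which is why MEE-based regressors typically re-center predictions afterwards.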
Physics Informed Neural Networks for Electrical Impedance Tomography
IF 6.0 · Q1 · Computer Science
Neural Networks Pub Date : 2025-03-23 DOI: 10.1016/j.neunet.2025.107410
Danny Smyl , Tyler N. Tallman , Laura Homa , Chenoa Flournoy , Sarah J. Hamilton , John Wertz
Electrical Impedance Tomography (EIT) is an imaging modality used to reconstruct the internal conductivity distribution of a domain via boundary voltage measurements. In this paper, we present a novel EIT approach for integrated sensing of composite structures utilizing Physics Informed Neural Networks (PINNs). Unlike traditional data-driven only models, PINNs incorporate underlying physical principles governing EIT directly into the learning process, enabling precise and rapid reconstructions. We demonstrate the effectiveness of PINNs with a variety of physical constraints for integrated sensing. The proposed approach has potential to enhance material characterization and condition monitoring, offering a robust alternative to classical EIT approaches.
(Neural Networks, Volume 188, Article 107410)
Citations: 0
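EIT is governed by the conductivity equation ∇·(σ∇u) = 0 with boundary voltage data. The paper's specific physical constraints are not reproduced in the abstract; as a schematic illustration only, a PINN-style loss adds a PDE-residual penalty to the boundary data misfit. A 1D finite-difference sketch (grid, σ profile, boundary values, and the weight `lam` are all illustrative):

```python
import numpy as np

def pinn_loss(u, sigma, h, u_bc, lam=1.0):
    """Boundary data misfit plus the interior residual of
    d/dx(sigma * du/dx) = 0, discretized with central differences."""
    flux = sigma[:-1] * (u[1:] - u[:-1]) / h      # sigma * u' on the half-grid
    residual = (flux[1:] - flux[:-1]) / h         # divergence of the flux
    data = (u[0] - u_bc[0])**2 + (u[-1] - u_bc[1])**2
    return data + lam * np.mean(residual**2)

# For constant sigma the exact PDE solution is linear in x, so both
# terms of the loss vanish (up to floating-point error).
x = np.linspace(0.0, 1.0, 21)
sigma = np.ones_like(x)
u_lin = 1.0 + 2.0 * x
print(pinn_loss(u_lin, sigma, x[1] - x[0], (1.0, 3.0)))
```

In an actual PINN, `u` would be a network's output on collocation points and the residual would be formed with automatic differentiation rather than finite differences; the loss structure is the same.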
BSA-Seg: A Bi-level sparse attention network combining narrow band loss for multi-target medical image segmentation
IF 6.0 · Q1 · Computer Science
Neural Networks Pub Date : 2025-03-23 DOI: 10.1016/j.neunet.2025.107431
Zhiyong Zhou , Zhechen Zhou , Xusheng Qian , Jisu Hu , Bo Peng , Chen Geng , Bin Dai , He Huang , Wenbin Zhang , Yakang Dai
Segmentation of multiple targets of varying sizes within medical images is of significant importance for disease diagnosis and pathological research. Transformer-based methods are emerging in medical image segmentation, leveraging the powerful yet computationally intensive self-attention mechanism. A variety of attention mechanisms have been proposed to reduce computation at the cost of accuracy loss, utilizing handcrafted patterns within local or artificially defined receptive fields. Furthermore, the common region-based loss functions are insufficient for guiding the transformer to focus on tissue regions, making them unsuitable for segmenting tissues with intricate boundaries. This paper presents a bi-level sparse attention network and a narrow band (NB) loss function for the accurate and efficient multi-target segmentation of medical images. In particular, we introduce a bi-level sparse attention module (BSAM) and formulate a segmentation network based on this module. The BSAM consists of coarse-grained patch-level attention and fine-grained pixel-level attention, which captures fine-grained contextual features in adaptive receptive fields learned by patch-level attention. This results in enhanced segmentation accuracy while simultaneously reducing computational complexity. The proposed narrow-band (NB) loss function constructs a target region in close proximity to the tissue boundary. The network is thus guided to perform boundary-aware segmentation, thereby simultaneously alleviating the issues of over-segmentation and under-segmentation. A series of comprehensive experiments on whole brains, brain tumors and abdominal organs demonstrates that our method outperforms other state-of-the-art segmentation methods. Furthermore, the BSAM and NB loss can be applied flexibly to a variety of network frameworks.
(Neural Networks, Volume 188, Article 107431)
Citations: 0
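The narrow-band idea restricts the loss to a thin region around the ground-truth boundary. The abstract does not spell out the construction; one common way to build such a band is the set difference between a dilation and an erosion of the binary mask. A numpy sketch under that assumption (1-pixel band width and 4-connectivity are both illustrative choices):

```python
import numpy as np

def shift(m, dy, dx):
    """Shift a binary mask by (dy, dx), padding with zeros."""
    out = np.zeros_like(m)
    h, w = m.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        m[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def narrow_band(mask):
    """1-pixel band around the boundary: dilation XOR erosion (4-neighbourhood)."""
    neighbours = [shift(mask, dy, dx) for dy, dx in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
    dilated = mask | np.bitwise_or.reduce(neighbours)
    eroded = mask & np.bitwise_and.reduce(neighbours)
    return dilated ^ eroded

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True                    # a 3x3 square object
band = narrow_band(mask)
print(band.sum())                        # only pixels adjacent to the square's boundary
```

A boundary-aware loss would then weight (or restrict) the per-pixel loss by this band, so gradients concentrate where over- and under-segmentation actually occur.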
A lightweight All-MLP time–frequency anomaly detection for IIoT time series
IF 6.0 · Q1 · Computer Science
Neural Networks Pub Date : 2025-03-22 DOI: 10.1016/j.neunet.2025.107400
Lei Chen , Xinzhe Cao , Tingqin He , Yepeng Xu , Xuxin Liu , Bowen hu
Anomaly detection in the Industrial Internet of Things (IIoT) aims at identifying abnormal sensor signals to ensure industrial production safety. However, most existing models only focus on high accuracy by building a bulky neural network with deep structures and huge parameters. In this case, these models usually exhibit poor timeliness and high resource consumption, which makes them unsuitable for resource-limited edge industrial scenarios. To solve this problem, a lightweight All-MLP time–frequency anomaly detection model is proposed for IIoT time series, namely LTFAD. Firstly, unlike traditional deep and bulky solutions, a shallow and lightweight All-MLP architecture is designed to achieve high timeliness and low resource consumption. Secondly, based on the lightweight architecture, a dual-branch network is constructed to improve model accuracy by simultaneously learning "global to local" and "local to global" reconstruction. Finally, time–frequency joint learning is employed in each reconstruction branch to further enhance accuracy. To the best of our knowledge, this is the first work to develop a time–frequency anomaly detection model based only on the shallow All-MLP architecture. Extensive experiments demonstrate that LTFAD can quickly and accurately identify anomalies on resource-limited edge devices, such as the Raspberry Pi 4b and Jetson Xavier NX. The source code for LTFAD is available at https://github.com/infogroup502/LTFAD.
(Neural Networks, Volume 187, Article 107400)
Citations: 0
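Time–frequency joint learning scores a reconstruction in both domains. The abstract does not give LTFAD's exact objective; a plausible minimal form sums the reconstruction error in the time domain and in the magnitude spectrum. A numpy sketch (the equal weighting `alpha` is an assumption):

```python
import numpy as np

def tf_anomaly_score(x, x_hat, alpha=0.5):
    """Joint time/frequency reconstruction error for one window."""
    t_err = np.mean((x - x_hat)**2)                      # time-domain MSE
    f_err = np.mean((np.abs(np.fft.rfft(x)) -
                     np.abs(np.fft.rfft(x_hat)))**2)     # spectrum-magnitude MSE
    return alpha * t_err + (1.0 - alpha) * f_err

t = np.linspace(0.0, 1.0, 128, endpoint=False)
normal = np.sin(2 * np.pi * 5 * t)                       # clean 5 Hz window
recon = np.sin(2 * np.pi * 5 * t)                        # perfect reconstruction
spike = normal.copy()
spike[64] += 3.0                                         # injected point anomaly
print(tf_anomaly_score(normal, recon) < tf_anomaly_score(spike, recon))  # True
```

The frequency term is what lets such detectors flag periodic-pattern anomalies (a drifting frequency) that a pure time-domain MSE can miss.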
SympGNNs: Symplectic Graph Neural Networks for identifying high-dimensional Hamiltonian systems and node classification
IF 6.0 · Q1 · Computer Science
Neural Networks Pub Date : 2025-03-22 DOI: 10.1016/j.neunet.2025.107397
Alan John Varghese , Zhen Zhang , George Em Karniadakis
Existing neural network models to learn Hamiltonian systems, such as SympNets, although accurate in low dimensions, struggle to learn the correct dynamics for high-dimensional many-body systems. Herein, we introduce Symplectic Graph Neural Networks (SympGNNs) that can effectively handle system identification in high-dimensional Hamiltonian systems, as well as node classification. SympGNNs combine symplectic maps with permutation equivariance, a property of graph neural networks. Specifically, we propose two variants of SympGNNs: (i) G-SympGNN and (ii) LA-SympGNN, arising from different parameterizations of the kinetic and potential energy. We demonstrate the capabilities of SympGNN on two physical examples: a 40-particle coupled harmonic oscillator, and a 2000-particle molecular dynamics simulation in a two-dimensional Lennard-Jones potential. Furthermore, we demonstrate the performance of SympGNN in the node classification task, achieving accuracy comparable to the state-of-the-art. We also empirically show that SympGNN can overcome the oversmoothing and heterophily problems, two key challenges in the field of graph neural networks.
(Neural Networks, Volume 187, Article 107397)
Citations: 0
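SympNets and SympGNNs compose layers that are themselves symplectic maps, so the learned flow preserves phase-space structure. As background (not the paper's architecture), the simplest symplectic integrator, semi-implicit Euler, illustrates the property being preserved: energy oscillates but does not drift over long rollouts. A numpy sketch on a 1D harmonic oscillator with H = p²/2 + q²/2:

```python
import numpy as np

def symplectic_euler(q, p, dt, steps):
    """Semi-implicit Euler for H(q, p) = p^2/2 + q^2/2:
    update p with the old q, then q with the new p."""
    for _ in range(steps):
        p = p - dt * q        # p_{n+1} = p_n - dt * dH/dq(q_n)
        q = q + dt * p        # q_{n+1} = q_n + dt * dH/dp(p_{n+1})
    return q, p

def energy(q, p):
    return 0.5 * (p**2 + q**2)

q, p = 1.0, 0.0
e0 = energy(q, p)
q, p = symplectic_euler(q, p, dt=0.01, steps=100_000)
print(abs(energy(q, p) - e0) < 0.01)   # True: 100k steps, energy still bounded
```

An explicit (non-symplectic) Euler step on the same system gains energy exponentially, which is exactly the failure mode symplectic parameterizations rule out by construction.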
Revisiting low-homophily for graph-based fraud detection
IF 6.0 · Q1 · Computer Science
Neural Networks Pub Date : 2025-03-22 DOI: 10.1016/j.neunet.2025.107407
Tairan Huang , Qiutong Li , Cong Xu , Jianliang Gao , Zhao Li , Shichao Zhang
The openness of the Internet stimulates a large number of fraudulent behaviors, which have become a serious threat. Graph-based fraud detectors have attracted extensive interest, since the abundant structural information of graph data has proved effective. Conventional Graph Neural Network (GNN) approaches reveal fraudsters based on the homophily assumption, but fraudsters typically generate heterophilous connections and label-imbalanced neighborhoods. Such behaviors deteriorate the performance of GNNs in fraud detection tasks due to the low homophily in graphs. Though some recent works have noticed these challenges, they either treat the heterophilous connections as homophilous ones or tend to reduce heterophily, largely ignoring the benefits of heterophily. In this work, an integrated two-strategy framework, HeteGAD, is proposed to balance both homophily and heterophily information from neighbors. The key lies in explicitly shrinking intra-class distance and increasing inter-class segregation. Specifically, the Heterophily-aware Aggregation Strategy teases out the feature disparity on heterophilous neighbors and augments the disparity between representations with different labels, while the Homophily-aware Aggregation Strategy is devised to capture homophilous information in a global context and augment the representation similarity within the same label. Finally, two corresponding inter-relational attention mechanisms are incorporated to refine the modeling of interactions among multiple relations. Experiments on two real-world datasets demonstrate that HeteGAD outperforms 11 state-of-the-art baselines for fraud detection.
(Neural Networks, Volume 188, Article 107407)
Citations: 0
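The "low homophily" these methods reason about is usually quantified by the edge homophily ratio: the fraction of edges whose two endpoints share a label. A tiny numpy sketch (fraud graphs are low-homophily precisely when this value is small, because fraudsters prefer to connect to benign users):

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges whose two endpoints share a label."""
    u, v = edges[:, 0], edges[:, 1]
    return np.mean(labels[u] == labels[v])

labels = np.array([0, 0, 1, 1])                     # 0 = benign, 1 = fraud
edges = np.array([[0, 1], [0, 2], [1, 3], [2, 3]])  # two same-label, two cross-label edges
print(edge_homophily(edges, labels))                # 0.5
```

GNNs built on mean-style neighborhood aggregation implicitly assume this ratio is high; when it drops toward chance level, aggregation mixes classes together, which is the degradation the abstract describes.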
Restarted multiple kernel algorithms with self-guiding for large-scale multi-view clustering
IF 6.0 · Q1 · Computer Science
Neural Networks Pub Date : 2025-03-22 DOI: 10.1016/j.neunet.2025.107409
Yongyan Guo, Gang Wu
Multi-view clustering is a powerful approach for discovering underlying structures hidden behind diverse views of datasets. Most existing multi-view spectral clustering methods use fixed similarity matrices or alternately updated ones. However, the former often fall short in adaptively capturing relationships among different views, while the latter are often time-consuming and even impractical for large-scale datasets. To the best of our knowledge, no existing multi-view spectral clustering method can both construct multi-view similarity matrices inexpensively and preserve the valuable clustering insights from previous cycles. To fill this gap, we present a Sum-Ratio Multi-view Ncut model that shares a common representation embedding for multi-view data. Based on this model, we propose a restarted multi-view multiple-kernel clustering framework with self-guiding. To reduce the overhead, we use similarity matrices with strict block-diagonal representation and present an efficient multiple-kernel selection technique. Comprehensive experiments on benchmark multi-view datasets demonstrate that, even using randomly generated initial guesses, the restarted algorithms can improve the clustering performance of some popular multi-view clustering methods by 5–10 times. Specifically, our framework offers a potential boosting effect for most state-of-the-art multi-view clustering algorithms at very little cost, especially for those with poor performance.
(Neural Networks, Volume 187, Article 107409)
Citations: 0
Motif and supernode-enhanced gated graph neural networks for session-based recommendation
IF 6.0 · Q1 · Computer Science
Neural Networks Pub Date : 2025-03-21 DOI: 10.1016/j.neunet.2025.107406
Ronghua Lin , Chang Liu , Hao Zhong , Chengzhe Yuan , Guohua Chen , Yuncheng Jiang , Yong Tang
Session-based recommendation systems aim to predict users' next interactions based on short-lived, anonymous sessions, a challenging yet vital task due to the sparsity and dynamic nature of user behavior. Existing Graph Neural Network (GNN)-based methods primarily focus on the session graphs while overlooking the influence of micro-structures and user behavior patterns. To address these limitations, we propose a Motif and Supernode-Enhanced Session-based Recommender System (MSERS), which constructs a global session graph, identifies and encodes motifs as supernodes, and reintegrates them into the global graph to enrich its topology and better represent item dependencies. By employing supernode-enhanced Gated Graph Neural Networks (GGNN), MSERS captures both long-term and latent item dependencies, significantly improving session representations. Extensive experiments on two real-world datasets demonstrate the superiority of MSERS over baseline methods, providing robust insights into the role of micro-structures in session-based recommendations.
(Neural Networks, Volume 187, Article 107406)
Citations: 0
AAPMatcher: Adaptive attention pruning matcher for accurate local feature matching
IF 6.0 · Q1 · Computer Science
Neural Networks Pub Date : 2025-03-21 DOI: 10.1016/j.neunet.2025.107403
Xuan Fan , Sijia Liu , Shuaiyan Liu , Lijun Zhao , Ruifeng Li
Local feature matching, which seeks to establish correspondences between two images, serves as a fundamental component in numerous computer vision applications, such as camera tracking and 3D mapping. Recently, Transformers have demonstrated a remarkable capability for modeling accurate correspondences between two input sequences owing to their long-range context integration. However, indiscriminate modeling in traditional Transformers inevitably introduces noise and irrelevant information, which can degrade the quality of feature representations. To this end, we introduce an adaptive attention pruning matcher for accurate local feature matching (AAPMatcher), designed for robust and accurate local feature matching. We overhaul the traditional uniform feature extraction for sequences by introducing the adaptive pruned transformer (APFormer), which adaptively retains the most profitable attention values for feature consolidation, enabling the network to obtain more useful feature information while filtering out useless information. Moreover, considering that a fixed combination of self- and cross-APFormers greatly limits the flexibility of the network, we propose a two-stage adaptive hybrid attention strategy (AHAS), which finds the optimal combination of APFormers in a coarse-to-fine manner. Benefiting from the clean feature representations and the optimal combination of APFormers, AAPMatcher exceeds state-of-the-art approaches on multiple benchmarks, including pose estimation, homography estimation, and visual localization.
(Neural Networks, Volume 188, Article 107403)
Citations: 0
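Attention pruning keeps only the most profitable attention weights per query. AAPMatcher's retention rule is learned and adaptive; as a fixed-budget stand-in for illustration, the sketch below keeps the top-k entries of each softmax row and renormalizes (k is an assumed hyperparameter, not part of the paper's method):

```python
import numpy as np

def pruned_attention(scores, k):
    """Row-wise softmax, then keep only the top-k weights per row and renormalize."""
    scores = scores - scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    # zero out everything below each row's k-th largest weight
    thresh = np.sort(attn, axis=-1)[:, -k][:, None]
    attn = np.where(attn >= thresh, attn, 0.0)
    return attn / attn.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
a = pruned_attention(rng.normal(size=(4, 8)), k=2)
print((a > 0).sum(axis=-1))   # each row keeps (at most) k nonzero weights
```

Renormalizing after pruning keeps each row a valid distribution, so downstream aggregation sees a sparser but still convex combination of the value features.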