Neural Networks: Latest Publications

Stabilizing sequence learning in stochastic spiking networks with GABA-Modulated STDP.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks, Vol. 183, Art. 106985 · Pub Date: 2024-12-07 · DOI: 10.1016/j.neunet.2024.106985
Marius Vieth, Jochen Triesch
{"title":"Stabilizing sequence learning in stochastic spiking networks with GABA-Modulated STDP.","authors":"Marius Vieth, Jochen Triesch","doi":"10.1016/j.neunet.2024.106985","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.106985","url":null,"abstract":"<p><p>Cortical networks are capable of unsupervised learning and spontaneous replay of complex temporal sequences. Endowing artificial spiking neural networks with similar learning abilities remains a challenge. In particular, it is unresolved how different plasticity rules can contribute to both learning and the maintenance of network stability during learning. Here we introduce a biologically inspired form of GABA-Modulated Spike Timing-Dependent Plasticity (GMS) and demonstrate its ability to permit stable learning of complex temporal sequences including natural language in recurrent spiking neural networks. Motivated by biological findings, GMS utilizes the momentary level of inhibition onto excitatory cells to adjust both the magnitude and sign of Spike Timing-Dependent Plasticity (STDP) of connections between excitatory cells. In particular, high levels of inhibition in the network cause depression of excitatory-to-excitatory connections. We demonstrate the effectiveness of this mechanism during several sequence learning experiments with character- and token-based text inputs as well as visual input sequences. We show that GMS maintains stability during learning and spontaneous replay and permits the network to form a clustered hierarchical representation of its input sequences. Overall, we provide a biologically inspired model of unsupervised learning of complex sequences in recurrent spiking neural networks.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"183 ","pages":"106985"},"PeriodicalIF":6.0,"publicationDate":"2024-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
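To make the modulation rule concrete, here is a minimal numpy sketch of an inhibition-gated STDP update. The threshold form of the gate and all names and constants below are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def gms_update(w, pre_trace, post_spike, inhibition, theta=0.5, lr=1e-3):
    """One GABA-modulated STDP step for excitatory-to-excitatory weights.

    The momentary inhibition onto the postsynaptic population scales both
    the magnitude and the sign of the update: below the (assumed) threshold
    `theta` the pairing potentiates; above it, the same pairing depresses.
    """
    modulation = theta - inhibition                 # >0 potentiate, <0 depress
    dw = lr * modulation * np.outer(pre_trace, post_spike)
    return np.clip(w + dw, 0.0, 1.0)                # keep weights bounded

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 0.5, size=(20, 10))            # 20 pre cells, 10 post cells
pre_trace = rng.uniform(0.0, 1.0, size=20)          # low-pass filtered pre spikes
post_spike = rng.integers(0, 2, size=10).astype(float)
w = gms_update(w, pre_trace, post_spike, inhibition=0.8)  # high GABA: depression
```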
Self-distillation improves self-supervised learning for DNA sequence inference.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks, Vol. 183, Art. 106978 · Pub Date: 2024-12-07 · DOI: 10.1016/j.neunet.2024.106978
Tong Yu, Lei Cheng, Ruslan Khalitov, Erland B Olsson, Zhirong Yang
{"title":"Self-distillation improves self-supervised learning for DNA sequence inference.","authors":"Tong Yu, Lei Cheng, Ruslan Khalitov, Erland B Olsson, Zhirong Yang","doi":"10.1016/j.neunet.2024.106978","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.106978","url":null,"abstract":"<p><p>Self-supervised Learning (SSL) has been recognized as a method to enhance prediction accuracy in various downstream tasks. However, its efficacy for DNA sequences remains somewhat constrained. This limitation stems primarily from the fact that most existing SSL approaches in genomics focus on masked language modeling of individual sequences, neglecting the crucial aspect of encoding statistics across multiple sequences. To overcome this challenge, we introduce an innovative deep neural network model, which incorporates collaborative learning between a 'student' and a 'teacher' subnetwork. In this model, the student subnetwork employs masked learning on nucleotides and progressively adapts its parameters to the teacher subnetwork through an exponential moving average approach. Concurrently, both subnetworks engage in contrastive learning, deriving insights from two augmented representations of the input sequences. This self-distillation process enables our model to effectively assimilate both contextual information from individual sequences and distributional data across the sequence population. We validated our approach with preliminary pretraining using the human reference genome, followed by applying it to 20 downstream inference tasks. The empirical results from these experiments demonstrate that our novel method significantly boosts inference performance across the majority of these tasks. Our code is available at https://github.com/wiedersehne/FinDNA.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"183 ","pages":"106978"},"PeriodicalIF":6.0,"publicationDate":"2024-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
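The student-to-teacher exponential moving average at the core of this self-distillation scheme has a near-canonical form; below is a minimal PyTorch sketch. The toy encoder and the momentum value are assumptions; the actual model lives in the linked FinDNA repository:

```python
import copy
import torch
import torch.nn as nn

student = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
teacher = copy.deepcopy(student)                 # teacher starts as a copy
for p in teacher.parameters():
    p.requires_grad_(False)                      # teacher is never backpropped

@torch.no_grad()
def ema_update(teacher, student, momentum=0.996):
    """Teacher weights track the student as an exponential moving average."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

# called once after every optimizer step on the student:
ema_update(teacher, student)
```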
Advancements in exponential synchronization and encryption techniques: Quaternion-Valued Artificial Neural Networks with two-sided coefficients.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks, Vol. 183, Art. 106982 · Pub Date: 2024-12-06 · DOI: 10.1016/j.neunet.2024.106982
Chenyang Li, Kit Ian Kou, Yanlin Zhang, Yang Liu
{"title":"Advancements in exponential synchronization and encryption techniques: Quaternion-Valued Artificial Neural Networks with two-sided coefficients.","authors":"Chenyang Li, Kit Ian Kou, Yanlin Zhang, Yang Liu","doi":"10.1016/j.neunet.2024.106982","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.106982","url":null,"abstract":"<p><p>This paper presents cutting-edge advancements in exponential synchronization and encryption techniques, focusing on Quaternion-Valued Artificial Neural Networks (QVANNs) that incorporate two-sided coefficients. The study introduces a novel approach that harnesses the Cayley-Dickson representation method to simplify the complex equations inherent in QVANNs, thereby enhancing computational efficiency by exploiting complex number properties. The study employs the Lyapunov theorem to craft a resilient control system, showcasing its exponential synchronization by skillfully regulating the Lyapunov function and its derivatives. This management ensures the stability and synchronization of the network, which is crucial for reliable performance in various applications. Extensive numerical simulations are conducted to substantiate the theoretical framework, providing empirical evidence supporting the presented design and proofs. Furthermore, the paper explores the practical application of QVANNs in the encryption and decryption of color images, showcasing the network's capability to handle complex data processing tasks efficiently. The findings of this research not only contribute significantly to the development of complex artificial neural networks but pave the way for further exploration into systems with diverse delay types.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"183 ","pages":"106982"},"PeriodicalIF":6.0,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
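The Cayley-Dickson representation mentioned in the abstract stores a quaternion a + bj as a pair of complex numbers, reducing quaternion algebra to complex arithmetic. A small sketch of the standard product rule (not the paper's code):

```python
import numpy as np

def quat_mul(q1, q2):
    """Quaternion product via the Cayley-Dickson construction.

    A quaternion w + xi + yj + zk is stored as the complex pair
    (a, b) = (w + xi, y + zi), and the product rule is
    (a, b)(c, d) = (a*c - b*conj(d), a*d + b*conj(c)).
    """
    a, b = q1
    c, d = q2
    return (a * c - b * np.conj(d), a * d + b * np.conj(c))

# sanity check: i * j = k, i.e. (1j, 0) * (0, 1) -> (0, 1j)
print(quat_mul((1j, 0.0), (0.0, 1.0)))
```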
Delayed knowledge transfer: Cross-modal knowledge transfer from delayed stimulus to EEG for continuous attention detection based on spike-represented EEG signals.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks, Vol. 183, Art. 107003 · Pub Date: 2024-12-06 · DOI: 10.1016/j.neunet.2024.107003
Pengfei Sun, Jorg De Winne, Malu Zhang, Paul Devos, Dick Botteldooren
{"title":"Delayed knowledge transfer: Cross-modal knowledge transfer from delayed stimulus to EEG for continuous attention detection based on spike-represented EEG signals.","authors":"Pengfei Sun, Jorg De Winne, Malu Zhang, Paul Devos, Dick Botteldooren","doi":"10.1016/j.neunet.2024.107003","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.107003","url":null,"abstract":"<p><p>Decoding visual and auditory stimuli from brain activities, such as electroencephalography (EEG), offers promising advancements for enhancing machine-to-human interaction. However, effectively representing EEG signals remains a significant challenge. In this paper, we introduce a novel Delayed Knowledge Transfer (DKT) framework that employs spiking neurons for attention detection, using our experimental EEG dataset. This framework extracts patterns from audiovisual stimuli to model brain responses in EEG signals, while accounting for inherent response delays. By aligning audiovisual features with EEG signals through a shared embedding space, our approach improves the performance of brain-computer interface (BCI) systems. We also present WithMeAttention, a multimodal dataset designed to facilitate research in continuously distinguishing between target and distractor responses. Our methodology demonstrates a 3% improvement in accuracy on the WithMeAttention dataset compared to a baseline model that decodes EEG signals from scratch. This significant performance increase highlights the effectiveness of our approach Comprehensive analysis across four distinct conditions shows that rhythmic enhancement of visual information can optimize multi-sensory information processing. Notably, the two conditions featuring rhythmic target presentation - with and without accompanying beeps - achieved significantly superior performance compared to other scenarios. Furthermore, the delay distribution observed under different conditions indicates that our delay layer effectively emulates the neural processing delays in response to stimuli.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"183 ","pages":"107003"},"PeriodicalIF":6.0,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
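To illustrate the role of a delay layer, the PyTorch sketch below shifts each stimulus feature channel back in time by a fixed number of samples before alignment with EEG. The per-channel integer delays and zero padding are assumptions; the paper's delays are learned within a spiking network:

```python
import torch
import torch.nn as nn

class DelayLayer(nn.Module):
    """Shift each feature channel back in time by a per-channel delay,
    emulating the lag between a stimulus and its neural response."""
    def __init__(self, delays):
        super().__init__()
        self.delays = delays                 # integer delay (samples) per channel

    def forward(self, x):                    # x: (batch, channels, time)
        out = torch.zeros_like(x)
        for c, d in enumerate(self.delays):
            if d == 0:
                out[:, c] = x[:, c]
            else:
                out[:, c, d:] = x[:, c, :-d]  # first d samples stay zero
        return out

stimulus = torch.randn(8, 4, 256)             # 4 audiovisual feature channels
aligned = DelayLayer([3, 5, 0, 12])(stimulus)  # hypothetical delays
```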
Self-supervised pre-trained neural network for quantum natural language processing.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks, Vol. 184, Art. 107004 · Pub Date: 2024-12-06 · DOI: 10.1016/j.neunet.2024.107004
Ben Yao, Prayag Tiwari, Qiuchi Li
{"title":"Self-supervised pre-trained neural network for quantum natural language processing.","authors":"Ben Yao, Prayag Tiwari, Qiuchi Li","doi":"10.1016/j.neunet.2024.107004","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.107004","url":null,"abstract":"<p><p>Quantum computing models have propelled advances in many application domains. However, in the field of natural language processing (NLP), quantum computing models are limited in representation capacity due to the high linearity of the underlying quantum computing architecture. This work attempts to address this limitation by leveraging the concept of self-supervised pre-training, a paradigm that has been propelling the rocketing development of NLP, to increase the power of quantum NLP models on the representation level. Specifically, we present a self-supervised pre-training approach to train quantum encodings of sentences, and fine-tune quantum circuits for downstream tasks on its basis. Experiments show that pre-trained mechanism brings remarkable improvement over end-to-end pure quantum models, yielding meaningful prediction results on a variety of downstream text classification datasets.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"184 ","pages":"107004"},"PeriodicalIF":6.0,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142822891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
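As a toy illustration of the encode-then-fine-tune pipeline, this numpy statevector sketch encodes two token features as data RY rotations and applies trainable RY rotations after an entangling CNOT. The circuit shape, gates, and readout are assumptions and far smaller than anything used in the paper:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def encode(features, params):
    """Two-qubit circuit: data-encoding RYs, entangling CNOT, trainable RYs."""
    state = np.kron(ry(features[0]) @ [1.0, 0.0], ry(features[1]) @ [1.0, 0.0])
    state = CNOT @ state
    return np.kron(ry(params[0]), ry(params[1])) @ state

def class_score(features, params):
    """Probability of measuring |0> on the first qubit, used as a class score."""
    amp = encode(features, params)
    return amp[0] ** 2 + amp[1] ** 2

print(class_score([0.3, 1.1], [0.5, -0.2]))   # params would be fine-tuned downstream
```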
DropNaE: Alleviating irregularity for large-scale graph representation learning.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks, Vol. 183, Art. 106930 · Pub Date: 2024-12-06 · DOI: 10.1016/j.neunet.2024.106930
Xin Liu, Xunbin Xiong, Mingyu Yan, Runzhen Xue, Shirui Pan, Songwen Pei, Lei Deng, Xiaochun Ye, Dongrui Fan
{"title":"DropNaE: Alleviating irregularity for large-scale graph representation learning.","authors":"Xin Liu, Xunbin Xiong, Mingyu Yan, Runzhen Xue, Shirui Pan, Songwen Pei, Lei Deng, Xiaochun Ye, Dongrui Fan","doi":"10.1016/j.neunet.2024.106930","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.106930","url":null,"abstract":"<p><p>Large-scale graphs are prevalent in various real-world scenarios and can be effectively processed using Graph Neural Networks (GNNs) on GPUs to derive meaningful representations. However, the inherent irregularity found in real-world graphs poses challenges for leveraging the single-instruction multiple-data execution mode of GPUs, leading to inefficiencies in GNN training. In this paper, we try to alleviate this irregularity at its origin-the irregular graph data itself. To this end, we propose DropNaE to alleviate the irregularity in large-scale graphs by conditionally dropping nodes and edges before GNN training. Specifically, we first present a metric to quantify the neighbor heterophily of all nodes in a graph. Then, we propose DropNaE containing two variants to transform the irregular degree distribution of the large-scale graph to a uniform one, based on the proposed metric. Experiments show that DropNaE is highly compatible and can be integrated into popular GNNs to promote both training efficiency and accuracy of used GNNs. DropNaE is offline performed and requires no online computing resources, benefiting the state-of-the-art GNNs in the present and future to a significant extent.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"183 ","pages":"106930"},"PeriodicalIF":6.0,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
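A rough sketch of what an edge-dropping variant could look like, using the fraction of differently labeled neighbors as a stand-in for the paper's heterophily metric and a hard degree cap as the uniformity target; both choices and all names are assumptions:

```python
import numpy as np

def neighbor_heterophily(edges, labels, n):
    """Per-node fraction of neighbors carrying a different label."""
    diff, deg = np.zeros(n), np.zeros(n)
    for u, v in edges:
        for a, b in ((u, v), (v, u)):
            deg[a] += 1
            diff[a] += labels[a] != labels[b]
    return np.divide(diff, deg, out=np.zeros(n), where=deg > 0), deg

def drop_edges(edges, labels, n, cap=8):
    """Offline preprocessing sketch: drop edges between high-degree nodes,
    most-heterophilic first, until every node's degree is at most `cap`."""
    het, deg = neighbor_heterophily(edges, labels, n)
    kept, left = [], deg.copy()
    for u, v in sorted(edges, key=lambda e: -(het[e[0]] + het[e[1]])):
        if left[u] > cap and left[v] > cap:
            left[u] -= 1; left[v] -= 1       # drop this edge
        else:
            kept.append((u, v))
    return kept

rng = np.random.default_rng(0)
edges = [tuple(rng.integers(0, 50, 2)) for _ in range(400)]
labels = rng.integers(0, 4, 50)
print(len(drop_edges(edges, labels, n=50)))  # fewer, more regular edges
```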
M4Net: Multi-level multi-patch multi-receptive multi-dimensional attention network for infrared small target detection.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks, Vol. 183, Art. 107026 · Pub Date: 2024-12-05 · DOI: 10.1016/j.neunet.2024.107026
Fan Zhang, Huilin Hu, Biyu Zou, Meizu Luo
{"title":"M4Net: Multi-level multi-patch multi-receptive multi-dimensional attention network for infrared small target detection.","authors":"Fan Zhang, Huilin Hu, Biyu Zou, Meizu Luo","doi":"10.1016/j.neunet.2024.107026","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.107026","url":null,"abstract":"<p><p>The detection of infrared small targets is getting more and more attention, and has a wider application in both military and civilian fields. The traditional infrared small target detection methods heavily rely on the setting of manual features, and the deep learning-based method easily lose the targets in deep layers due to several downsampling operations. To handle this problem, we design multi-level multi-patch multi-receptive multi-dimensional attention network (M4Net) to achieve information interaction among high-level and low-level features for maintaining target contour and location detail. Multi-level feature extraction module (MFEM) with multilayer vision transformer (ViT) is introduced under the encoder-decoder framework to fuse multi-scale features. Multi-patch attention module (MPAM) and multi-receptive field module (MRFM) are proposed to capture and enhance the feature information. Multi-dimension interactive module (MDIM) is designed to connect the attention mechanism on multiscale features to enhance the network's leaning ability. Finally, the extensive experiments carried out on infrared small target detection dataset demonstrate that our method achieves better performance compared to other methods.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"183 ","pages":"107026"},"PeriodicalIF":6.0,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
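As an illustration of patch-level attention on feature maps, the PyTorch sketch below computes one attention weight per patch at several patch sizes and averages the re-weighted branches. Module names and the fusion rule are assumptions, not M4Net's exact block:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPatchAttention(nn.Module):
    """Summarize features over patches of several sizes, turn each patch
    summary into a sigmoid attention weight, and average the branches."""
    def __init__(self, channels, patch_sizes=(2, 4)):
        super().__init__()
        self.patch_sizes = patch_sizes
        self.proj = nn.ModuleList(nn.Conv2d(channels, channels, 1)
                                  for _ in patch_sizes)

    def forward(self, x):                        # x: (B, C, H, W)
        outs = []
        for p, proj in zip(self.patch_sizes, self.proj):
            w = torch.sigmoid(proj(F.avg_pool2d(x, p)))   # one weight per patch
            w = F.interpolate(w, scale_factor=p, mode="nearest")
            outs.append(x * w)                   # re-weight local regions
        return sum(outs) / len(outs)

y = MultiPatchAttention(16)(torch.randn(1, 16, 32, 32))
```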
Dual-tower model with semantic perception and timespan-coupled hypergraph for next-basket recommendation.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks, Vol. 184, Art. 107001 · Pub Date: 2024-12-05 · DOI: 10.1016/j.neunet.2024.107001
Yangtao Zhou, Hua Chu, Qingshan Li, Jianan Li, Shuai Zhang, Feifei Zhu, Jingzhao Hu, Luqiao Wang, Wanqiang Yang
{"title":"Dual-tower model with semantic perception and timespan-coupled hypergraph for next-basket recommendation.","authors":"Yangtao Zhou, Hua Chu, Qingshan Li, Jianan Li, Shuai Zhang, Feifei Zhu, Jingzhao Hu, Luqiao Wang, Wanqiang Yang","doi":"10.1016/j.neunet.2024.107001","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.107001","url":null,"abstract":"<p><p>Next basket recommendation (NBR) is an essential task within the realm of recommendation systems and is dedicated to the anticipation of user preferences in the next moment based on the analysis of users' historical sequences of engaged baskets. Current NBR models utilise unique identity (ID) information to represent distinct users and items and focus on capturing the dynamic preferences of users through sequential encoding techniques such as recurrent neural networks and hierarchical time decay modelling, which have dominated the NBR field more than a decade. However, these models exhibit two significant limitations, resulting in suboptimal representations for both users and items. First, the dependence on unique ID information for the derivation of user and item representations ignores the rich semantic relations that interweave the items. Second, the majority of NBR models remain bound to model an individual user's historical basket sequence, thereby neglecting the broader vista of global collaborative relations among users and items. To address these limitations, we introduce a dual-tower model with semantic perception and timespan-coupled hypergraph for the NBR. It is carefully designed to integrate semantic and collaborative relations into both user and item representations. Specifically, to capture rich semantic relations effectively, we propose a hierarchical semantic attention mechanism with a large language model to integrate multi-aspect textual semantic features of items for basket representation learning. Simultaneously, to capture global collaborative relations explicitly, we design a timespan-coupled hypergraph convolutional network to efficiently model high-order structural connectivity on a hypergraph among users and items. Finally, a multi-objective joint optimisation loss is used to optimise the learning and integration of semantic and collaborative relations for recommendation. Comprehensive experiments on two public datasets demonstrate that our proposed model significantly outperforms the mainstream NBR models on two classical evaluation metrics, Recall and Normalised Discounted Cumulative Gain (NDCG).</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"184 ","pages":"107001"},"PeriodicalIF":6.0,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142822890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
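A minimal sketch of the dual-tower scoring idea, with mean pooling and a linear fusion standing in for the paper's hypergraph convolution and LLM-based semantic attention; all names and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class DualTowerScorer(nn.Module):
    """User tower pools the fused embeddings of items in past baskets; item
    tower fuses an ID embedding with a precomputed textual semantic vector;
    the next-basket score is their dot product."""
    def __init__(self, n_items, dim=32, sem_dim=16):
        super().__init__()
        self.item_id = nn.Embedding(n_items, dim)
        self.fuse = nn.Linear(dim + sem_dim, dim)

    def item_tower(self, items, sem):            # sem: (n, sem_dim) text vectors
        return self.fuse(torch.cat([self.item_id(items), sem], dim=-1))

    def user_tower(self, basket_items, sem):     # mean over history items
        return self.item_tower(basket_items, sem).mean(dim=0)

    def score(self, basket_items, basket_sem, cand_items, cand_sem):
        u = self.user_tower(basket_items, basket_sem)
        return self.item_tower(cand_items, cand_sem) @ u

m = DualTowerScorer(100)
hist, cands = torch.tensor([3, 17, 42]), torch.arange(100)
scores = m.score(hist, torch.randn(3, 16), cands, torch.randn(100, 16))
```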
An extrapolation-driven network architecture for physics-informed deep learning.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks, Vol. 183, Art. 106998 · Pub Date: 2024-12-05 · DOI: 10.1016/j.neunet.2024.106998
Yong Wang, Yanzhong Yao, Zhiming Gao
{"title":"An extrapolation-driven network architecture for physics-informed deep learning.","authors":"Yong Wang, Yanzhong Yao, Zhiming Gao","doi":"10.1016/j.neunet.2024.106998","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.106998","url":null,"abstract":"<p><p>Current physics-informed neural network (PINN) implementations with sequential learning strategies often experience some weaknesses, such as the failure to reproduce the previous training results when using a single network, the difficulty to strictly ensure continuity and smoothness at the time interval nodes when using multiple networks, and the increase in complexity and computational overhead. To overcome these shortcomings, we first investigate the extrapolation capability of the PINN method for time-dependent PDEs. Taking advantage of this extrapolation property, we generalize the training result obtained in a specific time subinterval to larger intervals by adding a correction term to the network parameters of the subinterval. The correction term is determined by further training with the sample points in the added subinterval. Secondly, by designing an extrapolation control function with special characteristics and combining it with a correction term, we construct a new neural network architecture whose network parameters are coupled with the time variable, which we call the extrapolation-driven network architecture. Based on this architecture, using a single neural network, we can obtain the overall PINN solution of the whole domain with the following two characteristics: (1) it completely inherits the local solution of the interval obtained from the previous training, (2) at the interval node, it strictly maintains the continuity and smoothness that the true solution has. The extrapolation-driven network architecture allows us to divide a large time domain into multiple subintervals and solve the time-dependent PDEs one by one in a chronological order. This training scheme respects the causality principle and effectively overcomes the difficulties of the conventional PINN method in solving the evolution equation on a large time domain. Numerical experiments verify the performance of our method. The data and code accompanying this paper are available at https://github.com/wangyong1301108/E-DNN.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"183 ","pages":"106998"},"PeriodicalIF":6.0,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
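A simplified sketch of the extrapolation-driven idea: a control function that vanishes smoothly on the already-trained subinterval gates a correction term, so continuity and smoothness at the interval node hold by construction. For brevity this sketch gates the network output rather than coupling the correction to the network parameters as the paper does, and the form of the control function is an assumption:

```python
import torch
import torch.nn as nn

class ExtrapolationDrivenNet(nn.Module):
    """Whole-domain solution = frozen subinterval solution u0 plus a
    correction gated by phi(t), where phi and phi' are zero on [0, t0]."""
    def __init__(self, u0, t0, width=32):
        super().__init__()
        self.u0, self.t0 = u0, t0                # u0: already-trained network
        for p in self.u0.parameters():
            p.requires_grad_(False)              # inherit the local solution
        self.correction = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(), nn.Linear(width, 1))

    def phi(self, t):                            # C^1 gate, zero on [0, t0]
        return torch.relu(t - self.t0) ** 2

    def forward(self, t):
        return self.u0(t) + self.phi(t) * self.correction(t)

u0 = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
model = ExtrapolationDrivenNet(u0, t0=1.0)       # extend training beyond t0
u = model(torch.linspace(0, 2, 5).unsqueeze(-1))
```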
Explainable exercise recommendation with knowledge graph.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks, Vol. 183, Art. 106954 · Pub Date: 2024-12-05 · DOI: 10.1016/j.neunet.2024.106954
Quanlong Guan, Xinghe Cheng, Fang Xiao, Zhuzhou Li, Chaobo He, Liangda Fang, Guanliang Chen, Zhiguo Gong, Weiqi Luo
{"title":"Explainable exercise recommendation with knowledge graph.","authors":"Quanlong Guan, Xinghe Cheng, Fang Xiao, Zhuzhou Li, Chaobo He, Liangda Fang, Guanliang Chen, Zhiguo Gong, Weiqi Luo","doi":"10.1016/j.neunet.2024.106954","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.106954","url":null,"abstract":"<p><p>Recommending suitable exercises and providing the reasons for these recommendations is a highly valuable task, as it can significantly improve students' learning efficiency. Nevertheless, the extensive range of exercise resources and the diverse learning capacities of students present a notable difficulty in recommending exercises. Collaborative filtering approaches frequently have difficulties in recommending suitable exercises, whereas deep learning methods lack explanation, which restricts their practical use. To address these issue, this paper proposes KG4EER, an explainable exercise recommendation with a knowledge graph. KG4EER facilitates the matching of various students with suitable exercises and offers explanations for its recommendations. More precisely, a feature extraction module is introduced to represent students' learning features, and a knowledge graph is constructed to recommend exercises. This knowledge graph, which includes three primary entities - knowledge concepts, students, and exercises - and their interrelationships, serves to recommend suitable exercises. Extensive experiments conducted on three real-world datasets, coupled with expert interviews, establish the superiority of KG4EER over existing baseline methods and underscore its robust explainability.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"183 ","pages":"106954"},"PeriodicalIF":6.0,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
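A toy sketch of knowledge-graph-based exercise scoring, using TransE-style embeddings as a stand-in for the paper's model; the entities, relations, and threshold are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16
students = {"alice": rng.normal(size=dim)}
concepts = {"fractions": rng.normal(size=dim)}
exercises = {f"ex{i}": rng.normal(size=dim) for i in range(5)}
r_masters = rng.normal(size=dim)     # student --masters--> concept
r_covers = rng.normal(size=dim)      # exercise --covers--> concept

def score(h, r, t):
    """TransE plausibility: higher when h + r is close to t."""
    return -np.linalg.norm(h + r - t)

def recommend(student, concept, k=3, threshold=-5.0):
    """Recommend exercises for concepts the student has not yet mastered.
    The KG path student -masters-> concept <-covers- exercise doubles as
    the textual explanation of the recommendation."""
    mastery = score(students[student], r_masters, concepts[concept])
    if mastery > threshold:          # concept already mastered: nothing to do
        return []
    ranked = sorted(exercises,
                    key=lambda e: score(exercises[e], r_covers,
                                        concepts[concept]),
                    reverse=True)
    return ranked[:k]

print(recommend("alice", "fractions"))
```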