Neural Processing Letters: Latest Articles

Finding Efficient Graph Embeddings and Processing them by a CNN-based Tool
IF 3.1 | CAS Zone 4 | Computer Science
Neural Processing Letters | Pub Date: 2024-09-02 | DOI: 10.1007/s11063-024-11683-0
Attila Tiba, Andras Hajdu, Tamas Giraszi

Abstract: We introduce new tools to support finding efficient graph embedding techniques for graph databases and to process their outputs with deep learning in classification scenarios. Accordingly, we investigate the possibility of creating an ensemble of different graph embedding methods to raise accuracy and present an interconnected neural network-based ensemble to increase the efficiency of the member classification algorithms. We also introduce a new convolutional neural network-based architecture that can generally be applied to vectorized graph data produced by various graph embedding methods, and compare it with other architectures in the literature to show the competitiveness of our approach. We further present a statistics-based inhomogeneity level estimation procedure for efficiently selecting the optimal embedding for a given graph database. The framework is exhaustively tested on several publicly available graph datasets with numerous state-of-the-art graph embedding techniques. Our experimental results for classification tasks demonstrate the competitiveness of our approach, which outperforms state-of-the-art frameworks.

Citations: 0
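The ensemble idea in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not the paper's CNN-based architecture: each embedding method gets its own simple classifier (here a nearest-centroid stand-in), and the per-embedding predictions are combined by majority vote.

```python
# Hypothetical sketch: an ensemble that combines classifiers trained on
# different graph embeddings by majority vote. The embedding names and the
# nearest-centroid "classifiers" are illustrative stand-ins.

def nearest_centroid_predict(centroids, vector):
    """Predict the label whose centroid is closest to the embedded vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], vector))

def ensemble_predict(per_embedding_centroids, per_embedding_vectors):
    """Majority vote over one prediction per embedding method."""
    votes = [
        nearest_centroid_predict(centroids, per_embedding_vectors[name])
        for name, centroids in per_embedding_centroids.items()
    ]
    return max(set(votes), key=votes.count)

# Toy example: the same graph embedded by two methods, two candidate classes.
centroids = {
    "embedding_a": {"cls0": [0.0, 0.0], "cls1": [1.0, 1.0]},
    "embedding_b": {"cls0": [0.0, 1.0], "cls1": [1.0, 0.0]},
}
vectors = {"embedding_a": [0.9, 0.8], "embedding_b": [0.9, 0.1]}
print(ensemble_predict(centroids, vectors))  # both voters agree on cls1
```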
Training Artificial Neural Network with a Cultural Algorithm
IF 3.1 | CAS Zone 4 | Computer Science
Neural Processing Letters | Pub Date: 2024-08-27 | DOI: 10.1007/s11063-024-11636-7
Kübra Tümay Ateş, İbrahim Erdem Kalkan, Cenk Şahin

Abstract: Artificial neural networks are among the artificial intelligence techniques that provide machines with functionalities such as decision making, comparison, and forecasting, and they are known for their forecasting capability in real-world problems. Their acquired knowledge is stored in the interconnection strengths, or weights, of neurons through an optimization process known as learning. Several limitations have been identified in commonly used gradient-based optimization algorithms, including the risk of premature convergence, sensitivity to initial parameters and positions, and the potential for getting trapped in local optima. Various meta-heuristics have been proposed in the literature as alternative training algorithms to mitigate these limitations. The primary aim of this study is therefore to combine a feed-forward artificial neural network (ANN) with a cultural algorithm (CA) as a meta-heuristic, establishing a training system that is efficient and dependable in comparison to existing methods. The proposed system (ANN-CA) was evaluated on classification tasks over nine benchmark datasets: Iris, Pima Indians Diabetes, Thyroid Disease, Breast Cancer Wisconsin, Credit Approval, Glass Identification, SPECT Heart, Wine, and Balloon. The overall experimental results indicate that the proposed method outperforms the other methods in the comparative analysis by approximately 12% in terms of classification error and approximately 7% in terms of accuracy.

Citations: 0
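The general mechanism of training network weights with a cultural algorithm can be sketched as below. This is a minimal illustration under assumptions of my own (a tiny four-weight network, a toy regression target), not the paper's ANN-CA system: a population of candidate weight vectors evolves, and a "belief space" of per-weight ranges learned from the best individuals guides where new candidates are sampled.

```python
import math
import random

# Hedged sketch of cultural-algorithm training: population search over weight
# vectors, with a normative belief space (per-weight ranges) updated from the
# elites and used to generate offspring. Network and data are illustrative.

random.seed(0)

def forward(w, x):
    # One hidden neuron, one output: tanh(w0*x + w1) * w2 + w3
    return math.tanh(w[0] * x + w[1]) * w[2] + w[3]

def mse(w, data):
    return sum((forward(w, x) - y) ** 2 for x, y in data) / len(data)

def cultural_train(data, pop_size=30, elite=5, generations=60, dim=4):
    pop = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: mse(w, data))
        elites = pop[:elite]
        # Acceptance + update: belief ranges shrink toward the elites.
        belief = [(min(w[i] for w in elites), max(w[i] for w in elites))
                  for i in range(dim)]
        # Influence: offspring are sampled inside the (slightly widened) ranges.
        children = [[random.uniform(lo - 0.1, hi + 0.1) for lo, hi in belief]
                    for _ in range(pop_size - elite)]
        pop = elites + children
    return min(pop, key=lambda w: mse(w, data))

data = [(x / 10, 0.5 * (x / 10)) for x in range(-10, 11)]  # target: y = 0.5x
best = cultural_train(data)
print(mse(best, data))  # training error shrinks well below the trivial baseline
```

The belief space is what distinguishes a cultural algorithm from a plain genetic algorithm: knowledge extracted from good individuals (here, coordinate ranges) explicitly biases the search.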
Lagrange Stability of Competitive Neural Networks with Multiple Time-Varying Delays
IF 3.1 | CAS Zone 4 | Computer Science
Neural Processing Letters | Pub Date: 2024-08-26 | DOI: 10.1007/s11063-024-11667-0
Dandan Tang, Baoxian Wang, Jigui Jian, Caiqing Hao

Abstract: In this paper, the Lagrange stability of competitive neural networks (CNNs) with leakage delays and mixed time-varying delays is investigated. By constructing a delay-dependent Lyapunov functional and combining it with inequality analysis techniques, delay-dependent Lagrange stability criteria are obtained in the form of linear matrix inequalities, and the corresponding globally exponentially attractive set (GEAS) is derived. On this basis, by exploring the relationship between the leakage delays and the discrete delay, a tighter GEAS of the system is obtained from the six possible size orderings of the two types of delays. Finally, three numerical simulation examples are given to illustrate the effectiveness of the obtained results.

Citations: 0
Leveraging Hybrid Deep Learning Models for Enhanced Multivariate Time Series Forecasting
IF 3.1 | CAS Zone 4 | Computer Science
Neural Processing Letters | Pub Date: 2024-08-23 | DOI: 10.1007/s11063-024-11656-3
Amal Mahmoud, Ammar Mohammed

Abstract: Time series forecasting is crucial in various domains, ranging from finance and economics to weather prediction and supply chain management. Traditional statistical methods and machine learning models have been widely used for this task, but they often face limitations in capturing complex temporal dependencies and handling multivariate time series data. In recent years, deep learning models have emerged as a promising way to overcome these limitations. This paper investigates how deep learning, specifically hybrid models, can enhance time series forecasting and address the shortcomings of traditional approaches. By combining convolutional and recurrent modules, these architectures extract spatial features and temporal dynamics in multivariate time series, handling intricate variable interdependencies and non-stationarities. Our results show that the hybrid models achieved lower error rates and higher R² values, signifying superior predictive performance and generalization. On two real-world datasets, Traffic Volume and Air Quality, the TCN-BiLSTM model achieved the best overall performance, reaching an R² score of 0.976 on Traffic Volume and 0.94 on Air Quality. These results highlight the model's effectiveness in leveraging the strengths of Temporal Convolutional Networks (TCNs) for capturing multi-scale temporal patterns and Bidirectional Long Short-Term Memory (BiLSTM) networks for retaining contextual information, thereby enhancing forecasting accuracy.

Citations: 0
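The TCN half of such a hybrid rests on dilated causal convolutions, whose receptive field grows exponentially with depth. A minimal dependency-free sketch (fixed averaging kernels purely for illustration; a real TCN-BiLSTM would use learned weights in a deep learning framework):

```python
# Sketch of stacked dilated causal convolutions, the mechanism TCNs use to
# capture multi-scale temporal patterns. Kernels are fixed averages here.

def causal_conv1d(seq, kernel, dilation):
    """Causal convolution: output at t uses only inputs at t, t-d, t-2d, ..."""
    out = []
    for t in range(len(seq)):
        acc = 0.0
        for i, w in enumerate(kernel):
            idx = t - i * dilation
            acc += w * (seq[idx] if idx >= 0 else 0.0)  # zero left-padding
        out.append(acc)
    return out

def tcn_block(seq, num_layers=3, kernel_size=2):
    # Dilations 1, 2, 4, ... give a receptive field of 2**num_layers steps.
    kernel = [1.0 / kernel_size] * kernel_size
    for layer in range(num_layers):
        seq = causal_conv1d(seq, kernel, dilation=2 ** layer)
    return seq

signal = [0.0] * 8 + [1.0]   # impulse at the last time step
out = tcn_block(signal)
print(out[-1])               # 0.125: the impulse is spread over 8 past steps
```

Note the causality check: the impulse at the last step never leaks into earlier outputs, which is what makes the convolution usable for forecasting.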
Siamese Tracking Network with Multi-attention Mechanism
IF 3.1 | CAS Zone 4 | Computer Science
Neural Processing Letters | Pub Date: 2024-08-23 | DOI: 10.1007/s11063-024-11670-5
Yuzhuo Xu, Ting Li, Bing Zhu, Fasheng Wang, Fuming Sun

Abstract: Object trackers based on Siamese networks view tracking as a similarity-matching process. However, the correlation operation is a local linear matching process, which limits the tracker's ability to capture the intricate nonlinear relationship between the template and search-region branches. Moreover, most trackers do not update the template and often use the first frame of a sequence as the initial template, which easily leads to poor tracking performance when the target undergoes deformation, scale variation, or occlusion. To this end, we propose a Siamese tracking network with a multi-attention mechanism, comprising a template branch and a search branch. To adapt to changes in target appearance, we integrate dynamic templates and multi-attention mechanisms in the template branch, obtaining a more effective feature representation by fusing the features of the initial and dynamic templates. To enhance the robustness of the tracking model, the search branch uses a multi-attention mechanism that shares weights with the template branch, obtaining a multi-scale feature representation by fusing search-region features at different scales. In addition, we design a lightweight and simple feature fusion mechanism in which a Transformer encoder fuses the information of the template and search regions, and the dynamic template is updated online based on confidence. Experimental results on public tracking datasets show that the proposed method achieves competitive results compared to several state-of-the-art trackers.

Citations: 0
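The similarity-matching core that the abstract critiques as "local linear matching" can be shown concretely: the template feature map slides over the search-region feature map as a correlation kernel, and the peak of the response map locates the target. Real trackers correlate deep features rather than raw intensities; this toy uses 2D lists.

```python
# Sketch of Siamese-style cross-correlation matching on tiny 2D "feature maps".

def cross_correlate(search, template):
    """Dense correlation of a small template over a larger search region."""
    th, tw = len(template), len(template[0])
    sh, sw = len(search), len(search[0])
    response = []
    for y in range(sh - th + 1):
        row = []
        for x in range(sw - tw + 1):
            score = sum(search[y + i][x + j] * template[i][j]
                        for i in range(th) for j in range(tw))
            row.append(score)
        response.append(row)
    return response

def locate_peak(response):
    best = max((v, y, x) for y, row in enumerate(response)
               for x, v in enumerate(row))
    return best[1], best[2]  # (y, x) of the highest response

search = [[0, 0, 0, 0],
          [0, 1, 2, 0],
          [0, 3, 4, 0],
          [0, 0, 0, 0]]
template = [[1, 2],
            [3, 4]]
print(locate_peak(cross_correlate(search, template)))  # peak at (1, 1)
```

Because each response value is a fixed linear combination of a local window, this operation cannot model nonlinear template/search interactions — the gap the paper's attention and Transformer-based fusion aims to fill.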
A Transfer-Learning-Like Neural Dynamics Algorithm for Arctic Sea Ice Extraction
IF 3.1 | CAS Zone 4 | Computer Science
Neural Processing Letters | Pub Date: 2024-08-14 | DOI: 10.1007/s11063-024-11664-3
Bo Peng, Kefan Zhang, Long Jin, Mingsheng Shang

Abstract: Sea ice plays a pivotal role in ocean-related research, necessitating highly accurate and robust techniques for its extraction from diverse satellite remote sensing imagery. However, conventional learning methods face limitations due to the soaring cost and time of manually collecting sufficient sea ice data for model training. This paper introduces an approach in which neural dynamics (ND) algorithms are integrated with a recurrent neural network, resulting in a Transfer-Learning-Like Neural Dynamics (TLLND) algorithm tailored for sea ice extraction. Firstly, given the susceptibility of image extraction to noise in practical scenarios, an ND algorithm with noise tolerance and high extraction accuracy is proposed. Secondly, the internal coefficients of the ND algorithm are determined using a parametric method. The ND algorithm is then formulated as a decoupled dynamical system, which enables coefficients trained on a linear-equation problem dataset to be generalized directly to sea ice extraction. Theoretical analysis ensures that the effectiveness of the proposed TLLND algorithm is unaffected by the specific characteristics of different datasets. To validate its efficacy, robustness, and generalization performance, several comparative experiments are conducted on diverse Arctic sea ice satellite imagery with varying levels of noise. The outcomes affirm the competence of the proposed TLLND algorithm in addressing the complexities of sea ice extraction.

Citations: 0
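For readers unfamiliar with neural dynamics solvers, the flavor of the approach can be shown on the linear-equation problem the abstract mentions: the system A x = b is solved by integrating a dynamical system whose equilibrium is the solution, here dx/dt = -γ Aᵀ(A x - b) with a plain Euler step. This is a generic textbook-style sketch; the paper's TLLND algorithm additionally learns its coefficients and builds in noise tolerance, which is not reproduced here.

```python
# Minimal neural-dynamics-style solver for A x = b via Euler-discretized
# gradient dynamics. Gamma and the step size are illustrative choices.

def neural_dynamics_solve(A, b, gamma=0.5, step=0.1, iters=500):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        residual = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
                    for i in range(len(A))]
        grad = [sum(A[i][j] * residual[i] for i in range(len(A)))
                for j in range(n)]  # A^T (A x - b)
        x = [x[j] - step * gamma * grad[j] for j in range(n)]
    return x

A = [[2.0, 1.0],
     [1.0, 3.0]]
b = [3.0, 4.0]
x = neural_dynamics_solve(A, b)
print(x)  # converges toward the solution [1.0, 1.0]
```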
Image Classification Based on Low-Level Feature Enhancement and Attention Mechanism
IF 3.1 | CAS Zone 4 | Computer Science
Neural Processing Letters | Pub Date: 2024-08-13 | DOI: 10.1007/s11063-024-11680-3
Yong Zhang, Xueqin Li, Wenyun Chen, Ying Zang

Abstract: Deep learning-based image classification networks rely heavily on extracted features. However, as a model becomes deeper, important features may be lost, reducing accuracy. To tackle this issue, this paper proposes an image classification method that enhances low-level features and incorporates an attention mechanism, employing EfficientNet as the backbone network for feature extraction. Firstly, the Feature Enhancement Module quantifies and statistically processes low-level features from shallow layers, enhancing the feature information. Secondly, the Convolutional Block Attention Module refines the high-level features to improve the extraction of global features. Finally, the enhanced low-level features and global features are fused, supplementing low-resolution global features with high-resolution details and further improving the model's classification ability. Experimental results show that the proposed method achieves Top-1/Top-5 classification accuracies of 86.49%/96.90% on the ETH-Food101 dataset, 86.99%/97.24% on the VireoFood-172 dataset, and 70.99%/92.73% on the UEC-256 dataset, outperforming existing methods in classification performance.

Citations: 0
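The channel-attention half of a CBAM-style module can be illustrated in a few lines. Note the simplification: CBAM proper passes both average- and max-pooled channel descriptors through a shared MLP, whereas this sketch gates each channel on a sigmoid of its mean activation.

```python
import math

# Illustrative channel attention: squeeze each channel to one statistic,
# map it through a gate, and rescale the channel by the resulting weight.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def channel_attention(feature_maps):
    """feature_maps: list of channels, each a 2D list of activations."""
    weighted = []
    for channel in feature_maps:
        flat = [v for row in channel for v in row]
        weight = sigmoid(sum(flat) / len(flat))  # squeeze -> gate
        weighted.append([[v * weight for v in row] for row in channel])
    return weighted

maps = [
    [[4.0, 4.0], [4.0, 4.0]],     # strongly activated channel
    [[-4.0, -4.0], [-4.0, -4.0]], # weakly activated channel
]
out = channel_attention(maps)
print(out[0][0][0], out[1][0][0])  # strong channel kept, weak one suppressed
```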
Kernel Extreme Learning Machine with Discriminative Transfer Feature and Instance Selection for Unsupervised Domain Adaptation
IF 3.1 | CAS Zone 4 | Computer Science
Neural Processing Letters | Pub Date: 2024-08-13 | DOI: 10.1007/s11063-024-11677-y
Shaofei Zang, Huimin Li, Nannan Lu, Chao Ma, Jiwei Gao, Jianwei Ma, Jinfeng Lv

Abstract: The goal of domain adaptation (DA) is to develop a robust decision model on the source domain that generalizes effectively to the target domain. State-of-the-art domain adaptation methods typically focus on finding an optimal inter-domain invariant feature representation or helpful instances from the source domain. In this paper, we propose a kernel extreme learning machine with discriminative transfer features and instance selection (KELM-DTF-IS) for unsupervised domain adaptation, which consists of two steps: discriminative transfer feature extraction and classification with instance selection. At the feature extraction stage, we extend cross-domain mean approximation (CDMA) with a penalty term, yielding discriminative cross-domain mean approximation (d-CDMA), which optimizes the category separability between instances; d-CDMA is then integrated into a kernel ELM autoencoder (KELM-AE) to extract inter-domain invariant features. During classification, our approach uses the CDMA metric to compute a weight for each source instance based on its impact in reducing distribution differences between domains: instances with a greater effect receive higher weights, and vice versa. These weights are used to distinguish and select source-domain instances before incorporating them into a weighted KELM to form an adaptive classifier. Finally, classification experiments on publicly available domain adaptation datasets demonstrate the superiority of our approach over KELM and numerous other domain adaptation methods.

Citations: 0
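The instance-weighting step can be sketched with a crude stand-in for the paper's CDMA metric: weight each source instance by how much its removal widens the gap between the source mean and the target mean, so that instances which pull the source distribution toward the target get larger weights. This is a hypothetical simplification for intuition only, not the KELM-DTF-IS formulation.

```python
import math

# Illustrative domain-adaptation instance weighting based on mean distances.
# An outlier far from the target distribution should receive ~zero weight.

def mean(vectors):
    n, dim = len(vectors), len(vectors[0])
    return [sum(v[d] for v in vectors) / n for d in range(dim)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def instance_weights(source, target):
    base = dist(mean(source), mean(target))
    weights = []
    for i in range(len(source)):
        rest = source[:i] + source[i + 1:]
        gap_without = dist(mean(rest), mean(target))
        weights.append(max(gap_without - base, 0.0))  # helpfulness of instance i
    total = sum(weights) or 1.0
    return [w / total for w in weights]

source = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]  # last instance is an outlier
target = [[0.0, 0.1], [0.1, 0.1]]
w = instance_weights(source, target)
print(w)  # the two target-like instances dominate; the outlier gets 0
```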
Improved Stability Analysis of Neural Networks with Time Delay Based on Variable Augmented Free Weight Matrix
IF 2.6 | CAS Zone 4 | Computer Science
Neural Processing Letters | Pub Date: 2024-08-09 | DOI: 10.1007/s11063-024-11628-7
FuDong Li, Wei Xie, WeiYi Zhu, ZongHao Shi

Citations: 0
Improving Neural Radiance Fields Using Near-Surface Sampling with Point Cloud Generation
IF 3.1 | CAS Zone 4 | Computer Science
Neural Processing Letters | Pub Date: 2024-07-22 | DOI: 10.1007/s11063-024-11654-5
Hye Bin Yoo, Hyun Min Han, Sung Soo Hwang, Il Yong Chun

Abstract: Neural radiance fields (NeRF) are an emerging view synthesis method that samples points in a three-dimensional (3D) space and estimates their existence and color probabilities. A disadvantage of NeRF is its long training time, since it samples many 3D points. In addition, if points are sampled from occluded regions or in spaces where an object is unlikely to exist, rendering quality degrades. These issues can be addressed by estimating the geometry of the 3D scene. This paper proposes a near-surface sampling framework to improve the rendering quality of NeRF: the method estimates the surface of a 3D object using the depth images of the training set and performs sampling only near the estimated surface. To obtain depth information for a novel view, the paper proposes a 3D point cloud generation method and a simple refinement method for depth projected from the point cloud. Experimental results show that the proposed near-surface sampling framework can significantly improve rendering quality compared to the original NeRF and three state-of-the-art NeRF methods, and can also significantly accelerate the training of a NeRF model.

Citations: 0
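The sampling change itself is simple to illustrate: instead of stratified samples spread along the whole ray, samples are concentrated in a narrow band around an estimated surface depth. In the paper that depth comes from the point-cloud and projected-depth stage; in this sketch it is just a given number, so the band width and counts are illustrative assumptions.

```python
import random

# Near-surface sampling sketch: stratified samples restricted to a band
# around an estimated surface depth, rather than the full [near, far] ray.

random.seed(0)

def uniform_samples(near, far, n):
    """Baseline NeRF-style stratified sampling: one jittered sample per bin."""
    bin_size = (far - near) / n
    return [near + (i + random.random()) * bin_size for i in range(n)]

def near_surface_samples(surface_depth, band, n):
    """Samples restricted to [surface_depth - band, surface_depth + band]."""
    return uniform_samples(surface_depth - band, surface_depth + band, n)

samples = near_surface_samples(surface_depth=3.0, band=0.2, n=16)
print(min(samples), max(samples))  # all samples lie within 2.8..3.2
```

With the same per-ray sample budget, every sample now lands where the object plausibly is, which is why both quality and training time improve.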