Neural Networks: Latest Publications

Complete synchronization of discrete-time fractional-order BAM neural networks with leakage and discrete delays
IF 6.0 | CAS Tier 1 | Computer Science
Neural Networks | Pub Date: 2024-09-05 | DOI: 10.1016/j.neunet.2024.106705
Abstract: This paper concerns the complete synchronization (CS) problem of discrete-time fractional-order BAM neural networks (BAMNNs) with leakage and discrete delays. Firstly, on the basis of Caputo fractional difference theory and the nabla l-Laplace transform, two equations about the nabla sum are strictly proved. Secondly, two extended Halanay inequalities suitable for discrete-time fractional difference inequalities with arbitrary initial time and multiple types of delays are introduced. In addition, by applying Caputo fractional difference theory together with the inequalities obtained in this paper, some sufficient CS criteria for discrete-time fractional-order BAMNNs with leakage and discrete delays are established under an adaptive controller. Finally, a numerical simulation is used to verify the effectiveness of the obtained theoretical results.
Citations: 0
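
Illustrative sketch (not from the paper): the snippet below simulates a scalar discrete fractional-order system with Grünwald-Letnikov coefficients, a common discretization that shows how the entire state history enters each fractional-order update; the order, update rule, and tanh activation are placeholder assumptions, and the paper's Caputo nabla formulation, delays, and adaptive synchronization controller are not reproduced here.

```python
# Sketch: explicit Grunwald-Letnikov scheme for a scalar discrete
# fractional-order system (placeholder dynamics, not the paper's BAMNN).
import numpy as np

def gl_coefficients(alpha, n):
    """Recursive Grunwald-Letnikov coefficients c_j = (-1)^j * C(alpha, j)."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def simulate(alpha, f, x0, steps):
    """Step the update sum_j c_j x_{k+1-j} = f(x_k); solved for x_{k+1} since c_0 = 1."""
    c = gl_coefficients(alpha, steps)
    x = [x0]
    for k in range(steps):
        history = sum(c[j] * x[k + 1 - j] for j in range(1, k + 2))
        x.append(f(x[k]) - history)
    return np.array(x)

# Example: a contractive activation drives the state toward zero.
traj = simulate(alpha=0.8, f=lambda v: 0.3 * np.tanh(v), x0=1.0, steps=50)
print(traj[-5:])
```
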
GFANC-RL: Reinforcement Learning-based Generative Fixed-filter Active Noise Control
IF 6.0 | CAS Tier 1 | Computer Science
Neural Networks | Pub Date: 2024-09-05 | DOI: 10.1016/j.neunet.2024.106687
Abstract: The recent Generative Fixed-filter Active Noise Control (GFANC) method achieves a good trade-off between noise reduction performance and system stability. However, labelling noise data for training the Convolutional Neural Network (CNN) in GFANC is typically resource-consuming. Even worse, labelling errors will degrade the CNN's filter-generation accuracy. Therefore, this paper proposes a novel Reinforcement Learning-based GFANC (GFANC-RL) approach that omits the labelling process by leveraging the exploratory property of Reinforcement Learning (RL). The CNN's parameters are automatically updated through the interaction between the RL agent and the environment. Moreover, the RL algorithm solves the non-differentiability issue caused by using binary combination weights in GFANC. Simulation results demonstrate the effectiveness and transferability of the GFANC-RL method in handling real-recorded noises across different acoustic paths.
Citations: 0
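
Illustrative sketch (not from the paper): the fixed-filter combination idea behind GFANC, merging pre-trained sub-filters with binary weights into one control filter, can be pictured as below; the sub-filter bank and the weights are random placeholders, whereas in GFANC-RL the weights would come from the CNN trained through RL.

```python
# Sketch: combine fixed sub-filters with binary weights and apply the result
# to a reference noise signal (placeholder data, not the paper's filters).
import numpy as np

rng = np.random.default_rng(0)
num_subfilters, taps = 8, 64
sub_filters = rng.standard_normal((num_subfilters, taps)) * 0.05  # placeholder bank
binary_weights = rng.integers(0, 2, size=num_subfilters)          # stands in for CNN/RL output

# Combined control filter: sum of the selected sub-filters.
control_filter = (binary_weights[:, None] * sub_filters).sum(axis=0)

# Filter a reference noise signal to produce the anti-noise.
reference = rng.standard_normal(1000)
anti_noise = np.convolve(reference, control_filter, mode="full")[: len(reference)]
print(binary_weights, anti_noise[:5])
```
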
Kolmogorov n-widths for multitask physics-informed machine learning (PIML) methods: Towards robust metrics
IF 6.0 | CAS Tier 1 | Computer Science
Neural Networks | Pub Date: 2024-09-04 | DOI: 10.1016/j.neunet.2024.106703
Abstract: Physics-informed machine learning (PIML) as a means of solving partial differential equations (PDEs) has garnered much attention in the Computational Science and Engineering (CS&E) world. This topic encompasses a broad array of methods and models aimed at solving a single PDE problem or a collection of PDE problems, the latter called multitask learning. PIML is characterized by the incorporation of physical laws into the training process of machine learning models in lieu of large data when solving PDE problems. Despite the overall success of this collection of methods, it remains incredibly difficult to analyze, benchmark, and generally compare one approach to another. Using Kolmogorov n-widths as a measure of the effectiveness of approximating functions, we judiciously apply this metric in the comparison of various multitask PIML architectures. We compute lower accuracy bounds and analyze the models' learned basis functions on various PDE problems. This is the first objective metric for comparing multitask PIML architectures and helps remove uncertainty in model validation caused by selective sampling and overfitting. We also identify avenues of improvement for model architectures, such as the choice of activation function, which can drastically affect model generalization to "worst-case" scenarios, an effect that is not observed when reporting task-specific errors. We also incorporate this metric into the optimization process through regularization, which improves the models' generalizability over the multitask PDE problem.
Citations: 0
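
Illustrative sketch (not from the paper): a common numerical proxy for the Kolmogorov n-width of a family of PDE solutions is the singular-value decay of a snapshot matrix, shown below on a toy parametric family; the solution family and the relative proxy are assumptions for illustration, not the paper's benchmark procedure.

```python
# Sketch: estimate n-width decay from the SVD of a snapshot matrix whose
# columns are discretized solutions of a toy parametric family.
import numpy as np

x = np.linspace(0, 1, 200)
mus = np.linspace(0.5, 5.0, 50)                                       # parameter/task values
snapshots = np.stack([np.sin(np.pi * m * x) for m in mus], axis=1)    # 200 x 50

# sigma_{n+1} bounds how well any n-dimensional linear subspace
# can approximate the whole discretized solution set.
sigma = np.linalg.svd(snapshots, compute_uv=False)
for n in (1, 5, 10, 20):
    print(f"n = {n:2d}: relative n-width proxy = {sigma[n] / sigma[0]:.3e}")
```
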
State transition learning with limited data for safe control of switched nonlinear systems
IF 6.0 | CAS Tier 1 | Computer Science
Neural Networks | Pub Date: 2024-09-03 | DOI: 10.1016/j.neunet.2024.106695
Abstract: Switching dynamics are prevalent in real-world systems, arising from either intrinsic changes or responses to external influences, and can be appropriately modeled by switched systems. Control synthesis for switched systems, especially when integrating safety constraints, is recognized as a significant and challenging topic. This study focuses on devising a learning-based control strategy for switched nonlinear systems operating under an arbitrary switching law. It aims to maintain stability and uphold safety constraints despite limited system data. To achieve these goals, we employ the control barrier function method and Lyapunov theory to synthesize a controller that delivers both safety and stability performance. To overcome the difficulties associated with constructing specific control barrier and Lyapunov functions, and to exploit the switching characteristics, we create a neural control barrier function and a neural Lyapunov function separately for control policies through a state transition learning approach. These neural barrier and Lyapunov functions facilitate the design of the safe controller. The corresponding control policy is learned from two components: a policy loss and forward state estimation. The effectiveness of the developed scheme is verified through simulation examples.
Open access PDF: https://www.sciencedirect.com/science/article/pii/S0893608024006191/pdfft?md5=d2ae98134957c6fcb8db6c8185b3a468&pid=1-s2.0-S0893608024006191-main.pdf
Citations: 0
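
Illustrative sketch (not from the paper): the two certificate conditions such a learned safe controller must satisfy along sampled transitions, a discrete-time control barrier condition and a Lyapunov decrease condition, can be written as hinge-style losses; the quadratic h and V, the decay rate gamma, and the sampled transitions below are placeholders, whereas the paper learns neural versions of both certificates jointly with the policy.

```python
# Sketch: certificate-violation losses over sampled transitions (x_k, x_{k+1}).
import numpy as np

def h(x):        # barrier function: h(x) >= 0 on the safe set (placeholder)
    return 1.0 - np.sum(x**2, axis=-1)

def V(x):        # Lyapunov function candidate (placeholder)
    return np.sum(x**2, axis=-1)

def certificate_loss(x_k, x_next, gamma=0.1):
    # Barrier condition: h(x_{k+1}) >= (1 - gamma) * h(x_k)
    barrier_violation = np.maximum(0.0, (1 - gamma) * h(x_k) - h(x_next))
    # Lyapunov condition: V(x_{k+1}) <= V(x_k)
    lyapunov_violation = np.maximum(0.0, V(x_next) - V(x_k))
    return barrier_violation.mean() + lyapunov_violation.mean()

rng = np.random.default_rng(1)
x_k = rng.uniform(-0.8, 0.8, size=(256, 2))
x_next = 0.9 * x_k                       # placeholder closed-loop transitions
print(certificate_loss(x_k, x_next))
```
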
Towards a configurable and non-hierarchical search space for NAS
IF 6.0 | CAS Tier 1 | Computer Science
Neural Networks | Pub Date: 2024-09-03 | DOI: 10.1016/j.neunet.2024.106700
Abstract: Neural Architecture Search (NAS) outperforms handcrafted Neural Network (NN) design. However, current NAS methods generally use hard-coded search spaces and predefined hierarchical architectures. As a consequence, adapting them to a new problem can be cumbersome, and it is hard to know whether the NAS algorithm or the predefined hierarchical structure impacts performance more. To improve flexibility and rely less on expert knowledge, this paper proposes a NAS methodology in which the search space is easily customizable and allows for full network search. NAS is performed with Gaussian Process (GP)-based Bayesian Optimization (BO) in a continuous architecture embedding space. This embedding is built upon a Wasserstein Autoencoder, regularized by both a Maximum Mean Discrepancy (MMD) penalization and a Fully Input Convex Neural Network (FICNN) latent predictor trained to infer the parameter count of architectures. The paper first assesses the embedding's suitability for optimization by solving two computationally inexpensive problems: minimizing the number of parameters and maximizing a zero-shot accuracy proxy. Then, two variants of complexity-aware NAS are performed on CIFAR-10 and STL-10, based on two different search spaces, providing competitive NN architectures with limited model sizes.
Citations: 0
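
Illustrative sketch (not from the paper): the MMD penalization used to regularize a Wasserstein autoencoder latent space pushes encoded architectures toward a prior distribution; the Gaussian kernel, bandwidth, and random "codes" below are placeholder assumptions, not the paper's settings.

```python
# Sketch: MMD penalty between encoder outputs and samples from a Gaussian prior.
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2))

def mmd(z, z_prior, bandwidth=1.0):
    k_zz = gaussian_kernel(z, z, bandwidth)
    k_pp = gaussian_kernel(z_prior, z_prior, bandwidth)
    k_zp = gaussian_kernel(z, z_prior, bandwidth)
    n, m = len(z), len(z_prior)
    # Drop the diagonals of the within-sample terms (standard unbiased form).
    return ((k_zz.sum() - np.trace(k_zz)) / (n * (n - 1))
            + (k_pp.sum() - np.trace(k_pp)) / (m * (m - 1))
            - 2 * k_zp.mean())

rng = np.random.default_rng(0)
codes = rng.normal(0.5, 1.0, size=(128, 16))   # stand-in for encoder outputs
prior = rng.normal(0.0, 1.0, size=(128, 16))   # samples from the target prior
print(mmd(codes, prior))
```
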
Sample selection of adversarial attacks against traffic signs
IF 6.0 | CAS Tier 1 | Computer Science
Neural Networks | Pub Date: 2024-09-03 | DOI: 10.1016/j.neunet.2024.106698
Abstract: In the real world, the correct recognition of traffic signs plays a crucial role in autonomous driving and traffic monitoring. Research on adversarial attacks against traffic-sign recognition can test the security of autonomous driving systems and provide insight for improving recognition algorithms. However, as transportation infrastructure develops, new traffic signs may be introduced, so an adversarial attack model for traffic signs needs to adapt to the addition of new classes. On this basis, class-incremental learning for traffic-sign adversarial attacks has become an interesting research field. We propose a class-incremental learning method for adversarial attacks on traffic signs. First, the method uses a Pinpoint Region Probability Estimation Network (PRPEN) to predict the probability of each pixel being attacked in old samples, which helps to identify the high-attack-probability regions of the samples. Subsequently, the replay sample set is constructed based on the size of the region where high-probability pixels concentrate: old samples with smaller concentration areas receive higher priority and are prioritized for incremental learning. Experimental results show that, compared with other sample selection methods, our method selects more representative samples and trains PRPEN more effectively to generate probability maps, thereby better generating adversarial attacks on traffic signs.
Citations: 0
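
Illustrative sketch (not from the paper): the replay-set rule described above, keeping the old samples whose high-attack-probability region is smallest, reduces to a threshold-and-sort step; the probability maps below are random placeholders standing in for PRPEN outputs, and the threshold and replay size are assumptions.

```python
# Sketch: build the replay set from per-pixel attack-probability maps.
import numpy as np

rng = np.random.default_rng(0)
prob_maps = rng.random((100, 32, 32))        # placeholder per-sample probability maps

threshold, replay_size = 0.9, 10
region_area = (prob_maps > threshold).sum(axis=(1, 2))   # size of high-probability region

# Smaller concentrated region means higher replay priority.
replay_indices = np.argsort(region_area)[:replay_size]
print(replay_indices, region_area[replay_indices])
```
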
Local artifacts amplification for deepfakes augmentation
IF 6.0 | CAS Tier 1 | Computer Science
Neural Networks | Pub Date: 2024-09-03 | DOI: 10.1016/j.neunet.2024.106692
Abstract: With the rapid and continuous development of AIGC, it is becoming increasingly difficult to distinguish between real and forged facial images, which calls for efficient forgery detection systems. Although many detection methods have noted the importance of local artifacts, there has been little in-depth discussion of how to select these locations and use them effectively. Moreover, the widely used traditional image augmentation methods offer limited improvements for forgery detection tasks, which calls for augmentation methods specifically designed for forgery detection. This paper proposes Local Artifacts Amplification for Deepfakes Augmentation, which amplifies the local artifacts on forged faces. Furthermore, the study incorporates prior knowledge about similar facial features into the model: within the facial regions defined in this work, forged features exhibit similar patterns. By aggregating the results from all facial regions, the overall performance of the model is enhanced. The evaluation experiments, achieving an AUC of 93.40% and an accuracy of 87.03% on the challenging WildDeepfake dataset, demonstrate a promising improvement in accuracy compared to traditional image augmentation methods and achieve superior performance in intra-dataset evaluation. The cross-dataset evaluation also shows that the proposed method has strong generalization ability.
Citations: 0
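
Illustrative sketch (not from the paper): one simple way to amplify local artifacts inside a chosen facial region is to scale up the region's high-frequency residual; the region box, the residual definition (image minus a local box-blur mean), and the gain below are assumptions rather than the paper's construction.

```python
# Sketch: amplify the high-frequency residual of a rectangular face region.
import numpy as np

def box_blur(img, k=5):
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def amplify_region(img, y0, y1, x0, x1, gain=2.0):
    out = img.astype(float)
    region = out[y0:y1, x0:x1]
    residual = region - box_blur(region)          # local high-frequency content
    out[y0:y1, x0:x1] = np.clip(region + gain * residual, 0, 255)
    return out.astype(np.uint8)

face = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)
augmented = amplify_region(face, 40, 80, 40, 80, gain=2.5)
print(augmented.shape, augmented.dtype)
```
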
Improved region proposal network for enhanced few-shot object detection
IF 6.0 | CAS Tier 1 | Computer Science
Neural Networks | Pub Date: 2024-09-03 | DOI: 10.1016/j.neunet.2024.106699
Abstract: Despite the significant success of deep learning in object detection tasks, standard training of deep neural networks requires access to a substantial quantity of annotated images across all classes. Data annotation is an arduous and time-consuming endeavor, particularly when dealing with infrequent objects. Few-shot object detection (FSOD) methods have emerged as a solution to the limitations of classic deep-learning-based object detection approaches. FSOD methods demonstrate remarkable performance by achieving robust object detection using a significantly smaller amount of training data. A challenge for FSOD is that instances from novel classes that do not belong to the fixed set of training classes appear in the background, and the base model may pick them up as potential objects. These objects behave similarly to label noise because they are classified as one of the training dataset classes, leading to FSOD performance degradation. We develop a semi-supervised algorithm to detect these unlabeled novel objects and then utilize them as positive samples during the FSOD training stage to improve FSOD performance. Specifically, we develop a hierarchical ternary classification region proposal network (HTRPN) to localize the potential unlabeled novel objects and assign them new objectness labels that distinguish them from the base training dataset classes. Our improved hierarchical sampling strategy for the region proposal network (RPN) also boosts the perception ability of the object detection model for large objects. We test our approach on the COCO and PASCAL VOC baselines commonly used in the FSOD literature. Our experimental results indicate that our method is effective and outperforms existing state-of-the-art (SOTA) FSOD methods. Our implementation is provided as a supplement to support reproducibility of the results: https://github.com/zshanggu/HTRPN
Citations: 0
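
Illustrative sketch (not from the paper): a ternary objectness labelling rule in the spirit of the description above assigns proposals to base positives, potential novel objects, or background; the IoU and objectness thresholds and the toy boxes below are assumptions, not the HTRPN architecture itself.

```python
# Sketch: ternary labelling of region proposals against base-class ground truth.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def ternary_labels(proposals, objectness, gt_boxes, iou_hi=0.5, obj_hi=0.8):
    labels = []
    for box, score in zip(proposals, objectness):
        best = max((iou(box, g) for g in gt_boxes), default=0.0)
        if best >= iou_hi:
            labels.append("base_positive")
        elif score >= obj_hi:
            labels.append("potential_novel")   # unmatched but object-like
        else:
            labels.append("background")
    return labels

proposals = [(10, 10, 60, 60), (100, 100, 150, 160), (5, 5, 15, 15)]
objectness = [0.9, 0.85, 0.2]
gt_boxes = [(12, 12, 58, 62)]                  # annotated base-class instance
print(ternary_labels(proposals, objectness, gt_boxes))
```
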
Enhancing SNN-based spatio-temporal learning: A benchmark dataset and Cross-Modality Attention model
IF 6.0 | CAS Tier 1 | Computer Science
Neural Networks | Pub Date: 2024-09-03 | DOI: 10.1016/j.neunet.2024.106677
Abstract: Spiking Neural Networks (SNNs), renowned for their low power consumption, brain-inspired architecture, and spatio-temporal representation capabilities, have garnered considerable attention in recent years. As with Artificial Neural Networks (ANNs), high-quality benchmark datasets are of great importance to the advancement of SNNs. However, our analysis indicates that many prevalent neuromorphic datasets lack strong temporal correlation, preventing SNNs from fully exploiting their spatio-temporal representation capabilities. Meanwhile, the integration of event and frame modalities offers more comprehensive visual spatio-temporal information, yet SNN-based cross-modality fusion remains underexplored.
In this work, we present a neuromorphic dataset called DVS-SLR that can better exploit the inherent spatio-temporal properties of SNNs. Compared to existing datasets, it offers advantages in terms of higher temporal correlation, larger scale, and more varied scenarios. In addition, our neuromorphic dataset contains corresponding frame data, which can be used for developing SNN-based fusion methods. By virtue of the dual-modal nature of the dataset, we propose a Cross-Modality Attention (CMA) based fusion method. The CMA model efficiently utilizes the unique advantages of each modality, allowing SNNs to learn both temporal and spatial attention scores from the spatio-temporal features of the event and frame modalities, and subsequently allocating these scores across modalities to enhance their synergy. Experimental results demonstrate that our method not only improves recognition accuracy but also ensures robustness across diverse scenarios.
Open access PDF: https://www.sciencedirect.com/science/article/pii/S0893608024006014/pdfft?md5=98c81b95c17e5a1fd818cad27f177fa6&pid=1-s2.0-S0893608024006014-main.pdf
Citations: 0
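
Illustrative sketch (not from the paper): cross-modality attention can be pictured as each modality's features attending to the other's, with the two attended results then fused; the dense (non-spiking) dot-product attention, the shapes, and the symmetric fusion below are placeholder assumptions, and the paper's separate temporal and spatial attention scores inside an SNN are not reproduced.

```python
# Sketch: symmetric cross-attention between event-stream and frame features.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, key_value_feats):
    scale = 1.0 / np.sqrt(query_feats.shape[-1])
    scores = softmax(query_feats @ key_value_feats.T * scale, axis=-1)
    return scores @ key_value_feats

rng = np.random.default_rng(0)
event_feats = rng.standard_normal((10, 64))   # 10 time bins of event features
frame_feats = rng.standard_normal((10, 64))   # 10 aligned frame features

event_attended = cross_attention(event_feats, frame_feats)
frame_attended = cross_attention(frame_feats, event_feats)
fused = 0.5 * (event_attended + frame_attended)   # simple symmetric fusion
print(fused.shape)
```
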
Time-optimal open-loop set stabilization of Boolean control networks
IF 6.0 | CAS Tier 1 | Computer Science
Neural Networks | Pub Date: 2024-09-03 | DOI: 10.1016/j.neunet.2024.106694
Abstract: We show that for the stabilization of Boolean control networks (BCNs) with unobservable initial states, open-loop control and closed-loop control are not equivalent, and we give an example to illustrate the non-equivalence. Motivated by this non-equivalence, we explore open-loop set stabilization of BCNs with unobservable initial states. More specifically, the problem is to determine, for a given BCN, whether there exists a unified free control sequence that is effective for all initial states of the system and stabilizes the system states to a given set. Criteria for open-loop set stabilization are derived, and for every open-loop set stabilizable BCN, every time-optimal open-loop set stabilizer is characterized. In addition, we obtain the least upper bounds of two integers related, respectively, to the global stabilization and partial stabilization of BCNs studied in two earlier articles; the methods in those articles cannot obtain these least upper bounds.
Citations: 0
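
Illustrative sketch (not from the paper): on a toy 2-node Boolean control network, the object the paper characterizes, a time-optimal open-loop set stabilizer, can be found by brute force as the shortest unified control sequence driving every initial state into the target set; the network update, target set, and search horizon below are placeholders, and the paper's algebraic (semi-tensor-product) criteria are not reproduced.

```python
# Sketch: brute-force search for the shortest unified open-loop control sequence
# that steers every initial state of a toy BCN into a target set.
from itertools import product

def step(state, u):
    """Toy BCN update: x1' = x2 AND u, x2' = x1 OR u."""
    x1, x2 = state
    return (x2 and u, x1 or u)

def run(state, control_seq):
    for u in control_seq:
        state = step(state, u)
    return state

states = list(product([False, True], repeat=2))   # all (unobservable) initial states
target = {(True, True)}                           # target set M

def shortest_unified_sequence(max_len=5):
    for length in range(1, max_len + 1):          # shortest first, hence time-optimal
        for seq in product([False, True], repeat=length):
            if all(run(s, seq) in target for s in states):
                return seq
    return None

print("time-optimal open-loop set stabilizer:", shortest_unified_sequence())
```
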