IEEE Transactions on Neural Networks and Learning Systems: Latest Articles

EASpace: Enhanced Action Space for Policy Transfer
IF 10.4 | CAS Tier 1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-11-07 | DOI: 10.1109/TNNLS.2023.3322591
Zheng Zhang, Qingrui Zhang, Bo Zhu, Xiaohan Wang, Tianjiang Hu
Abstract: Formulating expert policies as macro actions promises to alleviate the long-horizon issue via structured exploration and efficient credit assignment. However, traditional option-based multipolicy transfer methods suffer from inefficient exploration of macro-action length and insufficient exploitation of useful long-duration macro actions. In this article, a novel algorithm named enhanced action space (EASpace) is proposed, which formulates macro actions in an alternative form to accelerate the learning process using multiple available suboptimal expert policies. Specifically, EASpace formulates each expert policy into multiple macro actions with different execution times. All the macro actions are then integrated into the primitive action space directly. An intrinsic reward, which is proportional to the execution time of macro actions, is introduced to encourage the exploitation of useful macro actions. A corresponding learning rule similar to intraoption Q-learning is employed to improve data efficiency. Theoretical analysis is presented to show the convergence of the proposed learning rule. The efficiency of EASpace is illustrated by a grid-based game and a multiagent pursuit problem. The proposed algorithm is also implemented in physical systems to validate its effectiveness.
Citations: 0
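For orientation on the EASpace entry above: the abstract describes folding each expert policy into several fixed-duration macro actions that sit directly alongside the primitive actions, adding an intrinsic reward proportional to execution time, and updating values with an intraoption-Q-learning-like rule. The sketch below is a minimal, generic illustration of that idea only; it is not the authors' implementation, and every name and constant (env, experts, N_STATES, eta, the SMDP-style update) is an assumption made for the example.

```python
import numpy as np

# Sketch: primitive actions plus (expert policy, duration) macro actions,
# a duration-proportional intrinsic reward, and a tabular SMDP-style Q update.
N_PRIMITIVE = 4                       # assumed number of primitive actions
DURATIONS = [2, 4, 8]                 # assumed macro-action execution times
N_EXPERTS = 2                         # assumed number of suboptimal expert policies
N_STATES = 100                        # assumed size of a tabular state space

# Indices 0..N_PRIMITIVE-1 are primitive actions; the rest encode
# (expert id, execution time) pairs folded directly into the action space.
MACROS = [(e, d) for e in range(N_EXPERTS) for d in DURATIONS]
N_ACTIONS = N_PRIMITIVE + len(MACROS)
Q = np.zeros((N_STATES, N_ACTIONS))

def execute(env, experts, state, action, gamma=0.99, eta=0.01):
    """Run one enhanced action; return (discounted return, next state, steps, done)."""
    if action < N_PRIMITIVE:                          # ordinary primitive action
        next_state, reward, done = env.step(action)
        return reward, next_state, 1, done
    expert_id, duration = MACROS[action - N_PRIMITIVE]
    g, discount, s, done = 0.0, 1.0, state, False
    for t in range(duration):                         # roll the chosen expert out
        a = experts[expert_id](s)                     # expert proposes a primitive action
        s, r, done = env.step(a)
        g += discount * r
        discount *= gamma
        if done:
            break
    g += eta * (t + 1)      # intrinsic reward proportional to the executed duration
    return g, s, t + 1, done

def q_update(s, a, g, s_next, steps, alpha=0.1, gamma=0.99):
    """Bootstrap across the macro action's whole duration (SMDP-style target)."""
    target = g + (gamma ** steps) * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
```

Calling execute assumes an environment exposing step(action) -> (state, reward, done) and a list of expert policies mapping states to primitive actions; both are placeholders here.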
Celebrating Diversity With Subtask Specialization in Shared Multiagent Reinforcement Learning
IF 10.4 | CAS Tier 1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-11-07 | DOI: 10.1109/TNNLS.2023.3326744
Chenghao Li, Tonghan Wang, Chengjie Wu, Qianchuan Zhao, Jun Yang, Chongjie Zhang
Abstract: Subtask decomposition offers a promising approach for achieving and comprehending complex cooperative behaviors in multiagent systems. Nonetheless, existing methods often depend on intricate high-level strategies, which can hinder interpretability and learning efficiency. To tackle these challenges, we propose a novel approach that specializes subtasks for subgroups by employing diverse observation representation encoders within information bottlenecks. Moreover, to enhance the efficiency of subtask specialization while promoting sophisticated cooperation, we introduce diversity in both optimization and neural network architectures. These advancements enable our method to achieve state-of-the-art performance and offer interpretable subtask factorization across various scenarios in Google Research Football (GRF).
Citations: 0
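The abstract above builds on observation-representation encoders placed inside information bottlenecks, one per subgroup. As generic background only (not the paper's architecture), the sketch below shows the usual information-bottleneck building block: a stochastic Gaussian encoder per subgroup plus the KL penalty toward a standard normal prior that compresses the representation. All shapes, the linear encoders, and the per-subgroup count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ib_encode(obs, W_mu, W_logvar):
    """Stochastic encoder: obs -> latent sample z plus its KL cost to N(0, I).

    Generic information-bottleneck block; W_mu / W_logvar are assumed linear maps.
    """
    mu = obs @ W_mu
    logvar = obs @ W_logvar
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions.
    kl = 0.5 * np.sum(mu**2 + np.exp(logvar) - logvar - 1.0, axis=-1)
    return z, kl

# One encoder per subgroup, so different subgroups compress observations
# differently -- the "diversity" the abstract refers to.
obs_dim, latent_dim, n_subgroups = 16, 4, 3
encoders = [
    (rng.normal(size=(obs_dim, latent_dim)), rng.normal(size=(obs_dim, latent_dim)))
    for _ in range(n_subgroups)
]

obs = rng.normal(size=(8, obs_dim))            # a batch of observations
z0, kl0 = ib_encode(obs, *encoders[0])         # subgroup-0 representation
print(z0.shape, kl0.mean())
```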
Combining Optimal Path Search With Task-Dependent Learning in a Neural Network
IF 10.4 | CAS Tier 1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-11-07 | DOI: 10.1109/TNNLS.2023.3327103
Tomas Kulvicius, Minija Tamosiunaite, Florentin Worgotter
Abstract: Finding optimal paths in connected graphs requires determining the smallest total cost for traveling along the graph's edges. This problem can be solved by several classical algorithms, where, usually, costs are predefined for all edges. Conventional planning methods thus normally cannot be used when one wants to change costs adaptively according to the requirements of some task. Here, we show that one can define a neural network representation of path-finding problems by transforming cost values into synaptic weights, which allows for online weight adaptation using network learning mechanisms. When starting with an initial activity value of one, activity propagation in this network leads to solutions identical to those found by the Bellman-Ford (BF) algorithm. The neural network has the same algorithmic complexity as BF, and, in addition, we can show that network learning mechanisms (such as Hebbian learning) can adapt the weights in the network, augmenting the resulting paths according to some task at hand. We demonstrate this by learning to navigate in an environment with obstacles as well as by learning to follow certain sequences of path nodes. Hence, the algorithm presented here may open up a different regime of applications where path augmentation (by learning) is directly coupled with path finding in a natural way.
Citations: 0
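The abstract above states that edge costs become synaptic weights and that activity propagation starting from an activity of one reproduces Bellman-Ford solutions. One standard way to realize such an equivalence (a sketch under our own assumed transform, not necessarily the authors' exact mapping) is w = exp(-cost), so that max-product activity propagation corresponds to min-sum path cost:

```python
import numpy as np

def activity_propagation(cost, source):
    """Shortest-path costs via max-product 'activity' propagation.

    cost: (n, n) matrix of edge costs, np.inf where no edge exists.
    With weights w = exp(-cost), repeatedly keeping the best incoming
    activity is equivalent to Bellman-Ford's min-sum relaxation.
    The exp(-cost) transform is an assumption made for illustration.
    """
    n = cost.shape[0]
    w = np.exp(-cost)                      # cost -> synaptic weight (inf -> 0)
    a = np.zeros(n)
    a[source] = 1.0                        # initial activity of one at the source
    for _ in range(n - 1):                 # same iteration count as Bellman-Ford
        a = np.maximum(a, (w * a[:, None]).max(axis=0))
    return -np.log(a)                      # activities back to path costs

# Tiny example: 0 -> 1 -> 2 (cost 2) is cheaper than the direct edge 0 -> 2 (cost 5).
inf = np.inf
cost = np.array([[0.0, 1.0, 5.0],
                 [inf, 0.0, 1.0],
                 [inf, inf, 0.0]])
print(activity_propagation(cost, source=0))   # approximately [0., 1., 2.]
```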
Hundreds Guide Millions: Adaptive Offline Reinforcement Learning With Expert Guidance
IF 10.2 | CAS Tier 1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Vol. 35, No. 11, pp. 16288-16300 | Pub Date: 2023-11-07 | DOI: 10.1109/TNNLS.2023.3293508
Qisen Yang, Shenzhi Wang, Qihang Zhang, Gao Huang, Shiji Song
Abstract: Offline reinforcement learning (RL) optimizes the policy on a previously collected dataset without any interactions with the environment, yet usually suffers from the distributional shift problem. To mitigate this issue, a typical solution is to impose a policy constraint on a policy improvement objective. However, existing methods generally adopt a "one-size-fits-all" practice, i.e., keeping only a single improvement-constraint balance for all the samples in a mini-batch or even the entire offline dataset. In this work, we argue that different samples should be treated with different policy constraint intensities. Based on this idea, a novel plug-in approach named guided offline RL (GORL) is proposed. GORL employs a guiding network, along with only a few expert demonstrations, to adaptively determine the relative importance of the policy improvement and policy constraint for every sample. We theoretically prove that the guidance provided by our method is rational and near-optimal. Extensive experiments on various environments suggest that GORL can be easily installed on most offline RL algorithms with statistically significant performance improvements.
Citations: 0
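GORL's central idea, per the abstract above, is a guiding network that gives each sample its own balance between the policy-improvement term and the policy-constraint term. The sketch below shows only that per-sample weighted combination in generic form; the concrete improvement and constraint losses, and how the guiding network is trained from expert demonstrations, are left abstract, and all names are assumptions.

```python
import numpy as np

def per_sample_weighted_loss(improvement_loss, constraint_loss, guidance_weight):
    """Per-sample balance between policy improvement and policy constraint.

    improvement_loss : (B,) per-sample policy-improvement objective
    constraint_loss  : (B,) per-sample policy-constraint penalty (e.g., a
                       behavior-cloning or divergence term)
    guidance_weight  : (B,) values in [0, 1] produced by a guiding network
                       from a few expert demonstrations (assumed given here)

    Instead of a single "one-size-fits-all" coefficient, each sample i gets
    its own trade-off w_i.
    """
    w = np.clip(guidance_weight, 0.0, 1.0)
    return np.mean(w * improvement_loss + (1.0 - w) * constraint_loss)

# Toy batch: samples the guiding network trusts (w near 1) lean on improvement,
# the rest lean on staying close to the dataset policy.
rng = np.random.default_rng(0)
imp = rng.normal(size=5)
con = rng.normal(size=5)
w = np.array([0.9, 0.2, 0.5, 0.8, 0.1])
print(per_sample_weighted_loss(imp, con, w))
```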
Inherently Interpretable Physics-Informed Neural Network for Battery Modeling and Prognosis
IF 10.4 | CAS Tier 1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-11-07 | DOI: 10.1109/TNNLS.2023.3329368
Fujin Wang, Quanquan Zhi, Zhibin Zhao, Zhi Zhai, Yingkai Liu, Huan Xi, Shibin Wang, Xuefeng Chen
Abstract: Lithium-ion batteries are widely used in modern society. Accurate modeling and prognosis are fundamental to achieving reliable operation of lithium-ion batteries. Accurately predicting the end-of-discharge (EOD) is critical for operations and decision-making when they are deployed to critical missions. Existing data-driven methods have many model parameters, require a large amount of labeled data, and are not interpretable. Model-based methods need to know many parameters related to battery design, and the models are difficult to solve. To bridge these gaps, this study proposes a physics-informed neural network (PINN), called battery neural network (BattNN), for battery modeling and prognosis. Specifically, we propose to design the structure of BattNN based on the equivalent circuit model (ECM). Therefore, the entire BattNN is completely constrained by physics. Its forward propagation process follows the physical laws, and the model is inherently interpretable. To validate the proposed method, we conduct discharge experiments under random loading profiles and develop our dataset. Analysis and experiments show that the proposed BattNN only needs approximately 30 samples for training, and the average required training time is 21.5 s. Experimental results on three datasets show that our method can achieve high prediction accuracy with only a few learnable parameters. Compared with other neural networks, the prediction MAEs of our BattNN are reduced by 77.1%, 67.4%, and 75.0% on three datasets, respectively. Our data and code will be available at: https://github.com/wang-fujin/BattNN.
Citations: 0
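BattNN's structure is stated to follow an equivalent circuit model (ECM). For orientation, the sketch below simulates a first-order Thevenin ECM (series resistance R0 plus one RC pair) under constant-current discharge. This is standard battery-modeling background rather than the BattNN architecture itself, and every parameter value and the linear OCV curve are assumptions.

```python
import numpy as np

def thevenin_discharge(current, dt, capacity_ah, r0, r1, c1, ocv_fn, soc0=1.0):
    """Simulate terminal voltage of a first-order Thevenin ECM.

    V_terminal = OCV(SOC) - I * R0 - V1,  with  dV1/dt = -V1/(R1*C1) + I/C1.
    Parameter values below are illustrative, not fitted to any real cell.
    """
    n = len(current)
    soc, v1 = soc0, 0.0
    v_term = np.empty(n)
    for k in range(n):
        i = current[k]
        v1 += dt * (-v1 / (r1 * c1) + i / c1)        # RC-branch dynamics (Euler step)
        soc -= dt * i / (3600.0 * capacity_ah)       # coulomb counting
        v_term[k] = ocv_fn(soc) - i * r0 - v1
    return v_term

# Assumed open-circuit-voltage curve: a simple linear function of SOC.
ocv = lambda soc: 3.0 + 1.2 * np.clip(soc, 0.0, 1.0)

t_step = 1.0                                  # seconds
profile = np.full(1800, 2.0)                  # 2 A constant-current discharge, 30 min
v = thevenin_discharge(profile, t_step, capacity_ah=2.5,
                       r0=0.05, r1=0.02, c1=2000.0, ocv_fn=ocv)
print(v[0], v[-1])                            # terminal voltage at start and end
```

A physics-informed network in this spirit would replace the fixed r0, r1, c1 (and the OCV curve) with learnable quantities while keeping the same forward equations, which is what makes the forward pass interpretable.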
SAMCL: Subgraph-Aligned Multiview Contrastive Learning for Graph Anomaly Detection
IF 10.4 | CAS Tier 1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-11-07 | DOI: 10.1109/TNNLS.2023.3323274
Jingtao Hu, Bin Xiao, Hu Jin, Jingcan Duan, Siwei Wang, Zhao Lv, Siqi Wang, Xinwang Liu, En Zhu
Abstract: Graph anomaly detection (GAD) has gained increasing attention in various attribute graph applications, i.e., social communication and financial fraud transaction networks. Recently, graph contrastive learning (GCL)-based methods have been widely adopted as the mainstream for GAD with remarkable success. However, existing GCL strategies in GAD mainly focus on node-node and node-subgraph contrast and fail to explore subgraph-subgraph level comparison. Furthermore, the different sizes or component node indices of the sampled subgraph pairs may cause the "nonaligned" issue, making it difficult to accurately measure the similarity of subgraph pairs. In this article, we propose a novel subgraph-aligned multiview contrastive approach for graph anomaly detection, named SAMCL, which fills the subgraph-subgraph contrastive-level gap for GAD tasks. Specifically, we first generate the multiview augmented subgraphs by capturing different neighbors of target nodes, forming contrasting subgraph pairs. Then, to fulfill the nonaligned subgraph pair contrast, we propose a subgraph-aligned strategy that estimates similarities with the Earth mover's distance (EMD), considering both the node embedding distributions and topology awareness. With the newly established similarity measure for subgraphs, we conduct the inter-view subgraph-aligned contrastive learning module to better detect changes for nodes with different local subgraphs. Moreover, we conduct intra-view node-subgraph contrastive learning to supplement richer information on abnormalities. In addition, we employ the node reconstruction task for the masked subgraph to measure the local change of the target node. Finally, the anomaly score for each node is jointly calculated by these three modules. Extensive experiments conducted on benchmark datasets verify the effectiveness of our approach compared to existing state-of-the-art (SOTA) methods, with significant performance gains (up to 6.36% improvement on ACM). Our code is available at https://github.com/hujingtao/SAMCL.
Citations: 0
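SAMCL measures the similarity of nonaligned subgraph pairs with the Earth mover's distance over node-embedding distributions. As background for that step, the sketch below computes an entropy-regularized approximation of EMD (Sinkhorn iterations) between two embedding sets of different sizes. It is a generic optimal-transport routine under assumed uniform node weights and a Euclidean cost, not the paper's exact similarity measure.

```python
import numpy as np

def sinkhorn_emd(x, y, reg=0.1, n_iter=200):
    """Entropy-regularized EMD between two embedding sets of different sizes.

    x: (n, d) node embeddings of subgraph A; y: (m, d) embeddings of subgraph B.
    Uniform node weights are assumed; the exact EMD is recovered as reg -> 0.
    """
    a = np.full(x.shape[0], 1.0 / x.shape[0])          # uniform mass on A's nodes
    b = np.full(y.shape[0], 1.0 / y.shape[0])          # uniform mass on B's nodes
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)   # pairwise cost
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):                            # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    transport = u[:, None] * K * v[None, :]            # approximate optimal plan
    return np.sum(transport * cost)

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(5, 8))        # subgraph with 5 nodes
emb_b = rng.normal(size=(7, 8))        # "nonaligned" subgraph with 7 nodes
print(sinkhorn_emd(emb_a, emb_b))
```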
Unsupervised Domain Adaptation on Person Reidentification via Dual-Level Asymmetric Mutual Learning
IF 10.4 | CAS Tier 1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-11-07 | DOI: 10.1109/TNNLS.2023.3326477
Qiong Wu, Jiahan Li, Pingyang Dai, Qixiang Ye, Liujuan Cao, Yongjian Wu, Rongrong Ji
Abstract: Unsupervised domain adaptation (UDA) person reidentification (Re-ID) aims to identify pedestrian images within an unlabeled target domain with an auxiliary labeled source-domain dataset. Many existing works attempt to recover reliable identity information by considering multiple homogeneous networks, and then use the generated labels to train the model in the target domain. However, these homogeneous networks identify people in approximate subspaces and equally exchange their knowledge with others or their mean net to improve their ability, inevitably limiting the scope of available knowledge and leading them into the same mistakes. This article proposes a dual-level asymmetric mutual learning (DAML) method to learn discriminative representations from a broader knowledge scope with diverse embedding spaces. Specifically, two heterogeneous networks mutually learn knowledge from asymmetric subspaces through pseudo label generation in a hard distillation manner. The knowledge transfer between the two networks is based on an asymmetric mutual learning (AML) manner. The teacher network learns to identify both the target and source domain while adapting to the target domain distribution based on the knowledge of the student. Meanwhile, the student network is trained on the target dataset and employs the ground-truth label through the knowledge of the teacher. Extensive experiments on the Market-1501, CUHK-SYSU, and MSMT17 public datasets verify the superiority of DAML over state-of-the-art (SOTA) methods.
Citations: 0
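The DAML pipeline above has a teacher and a structurally different student exchanging knowledge through pseudo labels generated in a hard-distillation manner. The sketch below shows only the generic pseudo-labeling step common to UDA Re-ID pipelines: cluster target-domain features from one network into hard pseudo identities that the other network then trains on. The clustering choice (KMeans), the number of pseudo identities, and the feature shapes are assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.cluster import KMeans

def hard_pseudo_labels(features, n_identities):
    """Cluster unlabeled target-domain features into hard pseudo identities.

    features: (N, d) embeddings produced by one network (e.g., the teacher).
    The returned integer labels are treated as ground truth ("hard
    distillation") when training the other network.
    """
    return KMeans(n_clusters=n_identities, n_init=10, random_state=0).fit_predict(features)

rng = np.random.default_rng(0)
target_feats = rng.normal(size=(200, 64))        # unlabeled target-domain embeddings
pseudo = hard_pseudo_labels(target_feats, n_identities=10)
print(np.bincount(pseudo))                       # pseudo-identity cluster sizes
```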
MR-Transformer: Multiresolution Transformer for Multivariate Time Series Prediction
IF 10.4 | CAS Tier 1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-11-06 | DOI: 10.1109/TNNLS.2023.3327416
Siying Zhu, Jiawei Zheng, Qianli Ma
Abstract: Multivariate time series (MTS) prediction has been studied broadly and is widely applied in real-world applications. Recently, transformer-based methods have shown potential in this task owing to their strong sequence modeling ability. Despite progress, these methods pay little attention to extracting short-term information in the context, while short-term patterns play an essential role in reflecting local temporal dynamics. Moreover, we argue that there are both consistent and specific characteristics among multiple variables, which should be fully considered for MTS modeling. To this end, we propose a multiresolution transformer (MR-Transformer) for MTS prediction, modeling MTS from both the temporal and the variable resolution. Specifically, for the temporal resolution, we design a long short-term transformer. We first split the sequence into nonoverlapping segments in an adaptive way and then extract short-term patterns within segments, while long-term patterns are captured by the inherent attention mechanism. Both of them are aggregated together to capture the temporal dependencies. For the variable resolution, besides the variable-consistent features learned by the long short-term transformer, we also design a temporal convolution module to capture the specific features of each variable individually. MR-Transformer enhances the MTS modeling ability by combining multiresolution features across both time steps and variables. Extensive experiments conducted on real-world time series datasets show that MR-Transformer significantly outperforms state-of-the-art MTS prediction models. The visualization analysis also demonstrates the effectiveness of the proposed model.
Citations: 0
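MR-Transformer's temporal resolution combines short-term patterns extracted within nonoverlapping segments with long-term attention over the sequence. The sketch below illustrates just that split in a generic way: chop a multivariate series into fixed-length segments (fixed here, whereas the paper describes an adaptive split), summarize each segment, and run plain scaled dot-product attention over the segment summaries. Shapes and the mean-pooling summary are assumptions, not the MR-Transformer architecture.

```python
import numpy as np

def segment_summaries(x, seg_len):
    """Split a (T, C) series into nonoverlapping segments and mean-pool each.

    Fixed-length splitting is an assumption; the paper's splitting is adaptive.
    """
    t = (x.shape[0] // seg_len) * seg_len            # drop the ragged tail
    segs = x[:t].reshape(-1, seg_len, x.shape[1])    # (n_seg, seg_len, C)
    return segs.mean(axis=1)                         # short-term summary per segment

def self_attention(h):
    """Plain scaled dot-product self-attention over segment summaries."""
    d = h.shape[-1]
    scores = h @ h.T / np.sqrt(d)                    # (n_seg, n_seg) long-term links
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ h

rng = np.random.default_rng(0)
series = rng.normal(size=(96, 7))                    # 96 time steps, 7 variables
h = segment_summaries(series, seg_len=8)             # 12 segment-level tokens
print(self_attention(h).shape)                       # (12, 7)
```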
Resource-Constrained Multisource Instance-Based Transfer Learning
IF 10.4 | CAS Tier 1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-11-06 | DOI: 10.1109/TNNLS.2023.3327248
Mohammad Askarizadeh, Alireza Morsali, Kim Khoa Nguyen
Abstract: In today's machine learning (ML), the need for vast amounts of training data has become a significant challenge. Transfer learning (TL) offers a promising solution by leveraging knowledge across different domains/tasks, effectively addressing data scarcity. However, TL encounters computational and communication challenges in resource-constrained scenarios, and negative transfer (NT) can arise from specific data distributions. This article focuses on maximizing the accuracy of instance-based TL in multisource resource-constrained environments while mitigating NT, a key concern in TL. Previous studies have overlooked the impact of resource consumption in addressing the NT problem. To address these challenges, we introduce an optimization model named multisource resource-constrained optimized TL (MSOPTL), which employs a convex combination of empirical source and target errors while considering feasibility and resource constraints. Moreover, we enhance one of the generalization error upper bounds in the domain adaptation setting by demonstrating the potential to substitute the H∆H divergence with the Kullback-Leibler (KL) divergence. We utilize this enhanced error upper bound as one of the feasibility constraints of MSOPTL. Our suggested model can be applied as a versatile framework for various ML methods. Our approach is extensively validated in a neural network (NN)-based classification problem, demonstrating the efficiency of MSOPTL in achieving the desired trade-offs between TL's benefits and associated costs. This advancement holds tremendous potential for enhancing edge artificial intelligence (AI) applications in resource-constrained environments.
Citations: 0
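MSOPTL is described as optimizing a convex combination of empirical source and target errors subject to feasibility and resource constraints. As a toy stand-in for that kind of formulation (not the paper's actual model or numbers), the sketch below solves a small linear program: choose nonnegative mixing weights over the target domain and several sources that minimize the weighted empirical error, sum to one, and respect a resource budget.

```python
import numpy as np
from scipy.optimize import linprog

# Toy resource-constrained source-weighting problem (illustrative numbers only):
#   minimize   sum_i alpha_i * err_i
#   subject to sum_i alpha_i = 1,  sum_i alpha_i * resource_i <= budget,  alpha_i >= 0.
err = np.array([0.08, 0.15, 0.22, 0.30])        # empirical errors: target + 3 sources
resource = np.array([5.0, 1.0, 0.8, 0.5])       # per-domain cost (e.g., compute/communication)
budget = 2.0

res = linprog(
    c=err,                                      # objective coefficients
    A_ub=resource[None, :], b_ub=[budget],      # resource budget constraint
    A_eq=np.ones((1, len(err))), b_eq=[1.0],    # weights form a convex combination
    bounds=[(0.0, None)] * len(err),
)
print(res.x, res.fun)                           # chosen weights and weighted error
```

The actual MSOPTL objective also folds a divergence-based generalization bound into the feasibility constraints, which this linear toy deliberately omits.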
Intralayer Synchronization and Interlayer Quasisynchronization in Multiplex Networks of Nonidentical Layers
IF 10.4 | CAS Tier 1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-11-06 | DOI: 10.1109/TNNLS.2023.3326629
Yujuan Han, Wenlian Lu, Tianping Chen
Abstract: In this article, we discuss synchronization in multiplex networks of different layers. Both the topologies and the uncoupled node dynamics in different layers are different. Novel sufficient criteria are derived for intralayer synchronization and interlayer quasisynchronization, in terms of the coupling matrices, the coupling strengths, and the intrinsic function of the uncoupled systems. We also investigate interlayer synchronization of multiplex networks with identical uncoupled node dynamics. Finally, we give some numerical examples to validate the effectiveness of these theoretical results.
Citations: 0
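For readers unfamiliar with the setting of the last entry, a common textbook way to write a multiplex network of M layers and N nodes per layer with both intralayer and interlayer coupling (a generic assumed form, not necessarily the exact model analyzed in the paper) is:

```latex
% Generic multiplex network model (illustrative form, assumed for orientation):
\dot{x}_i^{(k)} = f_k\bigl(x_i^{(k)}\bigr)
  + c\sum_{j=1}^{N} a_{ij}^{(k)}\,\Gamma\bigl(x_j^{(k)} - x_i^{(k)}\bigr)
  + d\sum_{l=1}^{M} b_{kl}\,\Gamma\bigl(x_i^{(l)} - x_i^{(k)}\bigr),
  \quad i = 1,\dots,N,\ k = 1,\dots,M
```

Here f_k is the layer-specific (hence nonidentical) node dynamics, a^(k) the intralayer topology of layer k, b the interlayer coupling, Gamma the inner coupling matrix, and c, d the coupling strengths. In this vocabulary, intralayer synchronization means the nodes within each layer converge to a common trajectory, while interlayer quasisynchronization means the differences between a node's states across layers remain within a bound rather than vanishing, which is the natural target when the layers' dynamics differ.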