{"title":"Prescribed-Time Optimal Consensus for Switched Stochastic Multiagent Systems: Reinforcement Learning Strategy","authors":"Weiwei Guang;Xin Wang;Lihua Tan;Jian Sun;Tingwen Huang","doi":"10.1109/TETCI.2024.3451334","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3451334","url":null,"abstract":"This paper focuses on the event-triggered prescribed-time optimal consensus control problem for switched stochastic nonlinear multi-agent systems under switching topologies. Notably, system stability may be affected by changes in the information transmission channels between agents. To surmount this obstacle, this paper presents a reconstruction mechanism that rebuilds the consensus error at each topology-switching instant. Combining optimal control theory with a reinforcement learning strategy, an identifier neural network is utilized to approximate the unknown function, with its updating law independent of the switching duration of the system dynamics. In addition, an event-triggered mechanism is adopted to enhance the efficiency of resource utilization. With the assistance of the Lyapunov stability principle, sufficient conditions are established to ensure that all signals in the closed-loop system are cooperatively semi-globally uniformly ultimately bounded in probability and that the consensus error converges to the specified interval in a prescribed time. Finally, a simulation example is carried out to validate the feasibility of the presented control scheme.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"75-86"},"PeriodicalIF":5.3,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Micro Many-Objective Evolutionary Algorithm With Knowledge Transfer","authors":"Hu Peng;Zhongtian Luo;Tian Fang;Qingfu Zhang","doi":"10.1109/TETCI.2024.3451309","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3451309","url":null,"abstract":"When evolutionary algorithms run on low-power microprocessors to solve real-world problems, computational effectiveness and limited resources must be handled jointly, particularly in many-objective evolutionary algorithms (MaOEAs). Evolutionary algorithms with a normal-sized population break this balance, whereas those with a micro population do not. To tackle this issue, this paper proposes a micro many-objective evolutionary algorithm with knowledge transfer (<inline-formula><tex-math>$\mu$</tex-math></inline-formula>MaOEA). To address the oversight that knowledge is often not shared sufficiently between niches, a knowledge-transfer strategy is proposed that bolsters each unoptimized niche by optimizing adjacent niches, enabling niches to generate better individuals. Meanwhile, a two-stage mechanism based on fuzzy logic is designed to settle the conflict between convergence and diversity in many-objective optimization problems. Through efficient fuzzy-logic decision-making, the mechanism maintains different properties of the population at different stages. Different MaOEAs and micro multi-objective evolutionary algorithms were compared on the benchmark test problems DTLZ, MaF, and WFG, and the results showed that <inline-formula><tex-math>$\mu$</tex-math></inline-formula>MaOEA has excellent performance. In addition, simulations were conducted on two real-world problems, MPDMP and MLDMP, based on a low-power microprocessor. The results indicated the applicability of <inline-formula><tex-math>$\mu$</tex-math></inline-formula>MaOEA for low-power microprocessor optimization.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"43-56"},"PeriodicalIF":5.3,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Medium Image Enhancement With Attentive Deformable Transformers","authors":"Ashutosh Kulkarni;Shruti S. Phutke;Santosh Kumar Vipparthi;Subrahmanyam Murala","doi":"10.1109/TETCI.2024.3451550","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3451550","url":null,"abstract":"Capturing images in mediums such as aerial, outdoor, and underwater scenes imposes visibility challenges such as atmospheric haze and water turbidity. This reduction in visibility affects the functioning of high-level computer vision applications like object detection, semantic segmentation, military surveillance, and earthquake assessment. Existing methods either rely on additional prior information during training or yield suboptimal results on images with varying levels of degradation, because the extracted features lack both local and global dependencies. This paper presents a generalized transformer-based architecture for aerial, outdoor, and underwater image enhancement. We propose a novel space-aware deformable-convolution-based multi-head self-attention containing spatially attentive offset extraction. Here, the deformable multi-head attention is introduced to reconstruct fine-level texture in the restored image. Additionally, we introduce a spatially attentive offset extractor within the deformable convolution to prioritize relevant contextual information. Further, we propose an edge-enhancing feature fusion block for restoring edge details in the image while learning enriched features from multi-stream information. Finally, we propose a global-context-aware channel-attentive feature propagator with the dual functionality of global information extraction and channel attention. Comprehensive experimentation on both synthetic and real-world datasets, along with a thorough ablation study, shows that the proposed approach performs favorably against existing methods on aerial, outdoor, and underwater image enhancement.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1659-1672"},"PeriodicalIF":5.3,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Diversified Population Migration-Based Multiobjective Evolutionary Algorithm for Dynamic Community Detection","authors":"Lei Zhang;Chaofan Qin;Haipeng Yang;Zishan Xiong;Renzhi Cao;Fan Cheng","doi":"10.1109/TETCI.2024.3451566","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3451566","url":null,"abstract":"Dynamic community detection, which is capable of revealing changes in community structure over time, has garnered increasing research attention. While evolutionary clustering methods have proven effective in tackling this issue, they often favor so-called elite solutions, inadvertently neglecting the potential value of non-elite alternatives. Although elite solutions can ensure population convergence, their lack of diversity may result in negative population migration when the network changes. In contrast, non-elite solutions can better adapt to the changed network and thereby help the algorithm find accurate community structures in the new environment. To this end, we propose a diversified population migration strategy that consists of two stages: solution selection and solution migration. In the first stage, we use not only elite solutions to ensure convergence but also non-elite solutions to maintain diversity and cope with network changes. In the second stage, the migrated solutions are refined using the incremental changes between two consecutive network snapshots. Based on the proposed strategy, we present a diversified population migration-based multiobjective evolutionary algorithm named DPMOEA. In DPMOEA, we design new genetic operators that utilize incremental changes between networks to make the population evolve in the right direction. Our experimental results demonstrate that the proposed method outperforms state-of-the-art baseline algorithms and can effectively solve the dynamic community detection problem.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"145-159"},"PeriodicalIF":5.3,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143107213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evolutionary Sequential Transfer Learning for Multi-Objective Feature Selection in Classification","authors":"Jiabin Lin;Qi Chen;Bing Xue;Mengjie Zhang","doi":"10.1109/TETCI.2024.3451709","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3451709","url":null,"abstract":"Over the past decades, evolutionary multi-objective algorithms have proven their efficacy in feature selection. Nevertheless, the prevalent approach addresses feature selection tasks in isolation, even when these tasks share common knowledge and interdependencies. In response, the emerging field of evolutionary sequential transfer learning is gaining attention for feature selection. This approach aims to transfer knowledge gleaned by evolutionary algorithms in a source domain and apply it intelligently to enhance feature selection outcomes in a target domain. Despite its promising potential to exploit shared insights, adoption of this transfer learning paradigm for feature selection remains surprisingly limited owing to the computational expense of existing methods, which learn a mapping between the source and target search spaces. This paper introduces a multi-objective feature selection approach grounded in evolutionary sequential transfer learning, strategically crafted to tackle interconnected feature selection tasks with overlapping features. Our framework integrates probabilistic models to capture high-order information within feature selection solutions, extracting and preserving knowledge from the source domain without prohibitive cost. It also provides a better way to transfer source knowledge when the feature spaces of the source and target domains diverge. We evaluate the proposed method against four prominent single-task feature selection approaches and a cutting-edge evolutionary transfer learning feature selection method. Through empirical evaluation, our approach showcases superior performance across the majority of datasets, surpassing the effectiveness of the compared methods.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 1","pages":"1019-1033"},"PeriodicalIF":5.3,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Agent Evolutionary Reinforcement Learning Based on Cooperative Games","authors":"Jin Yu;Ya Zhang;Changyin Sun","doi":"10.1109/TETCI.2024.3452119","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3452119","url":null,"abstract":"Despite the significant advancements in single-agent evolutionary reinforcement learning, research exploring evolutionary reinforcement learning within multi-agent systems is still in its nascent stage. The integration of evolutionary algorithms (EA) and reinforcement learning (RL) has partially mitigated RL's reliance on the environment and provided it with an ample supply of data. Nonetheless, existing studies primarily focus on indirect collaboration between RL and EA, leaving the effective balance of individual and team rewards insufficiently explored. To address this problem, this study introduces game theory to establish a dynamic cooperation framework between EA and RL and proposes a multi-agent evolutionary reinforcement learning algorithm based on cooperative games. This framework facilitates more efficient direct collaboration between RL and EA, enhancing individual rewards while ensuring the attainment of team objectives. Initially, a cooperative policy is formed through a joint network, simplifying each agent's parameters to speed up the overall training process. Subsequently, RL and EA engage in cooperative games to determine whether RL jointly optimizes the same policy based on Pareto-optimal results. Lastly, through dual-objective optimization, a balance between the two types of rewards is achieved, with EA focusing on team rewards and RL focusing on individual rewards. Experimental results demonstrate that the proposed algorithm outperforms its single-algorithm counterparts in terms of competitiveness.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1650-1658"},"PeriodicalIF":5.3,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Hypercomplex Graph Convolution Refining Mechanism","authors":"Jingchao Wang;Guoheng Huang;Guo Zhong;Xiaochen Yuan;Chi-Man Pun;Jinxun Wang;Jianqi Liu","doi":"10.1109/TETCI.2024.3449877","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3449877","url":null,"abstract":"Hypercomplex graph convolutions with higher hypercomplex dimensions can extract more complex features from graphs, and features with varying levels of complexity suit different situations. However, existing hypercomplex graph neural networks can only carry out hypercomplex graph convolutions in a predetermined, unchangeable dimension. To address this limitation, this paper introduces the FFT-based Adaptive Fourier hypercomplex graph convolution filtering mechanism (FAF mechanism), which adaptively selects the hypercomplex graph convolutions with the most appropriate dimensions for different situations by projecting the outputs of all candidate hypercomplex graph convolutions into the frequency domain and selecting the one with the highest energy via FFT-based Adaptive Fourier Decomposition. Meanwhile, we apply the FAF mechanism to our proposed hypercomplex high-order interaction graph neural network (HHG-Net), which performs high-order interaction and strengthens interaction features through a quantum graph hierarchical attention module and feature-interaction gated graph convolution. During convolution filtering, the FAF mechanism projects the outputs of the candidate hypercomplex graph convolutions into the frequency domain, extracts their energy, and selects the convolution that outputs the largest energy. The model with the selected hypercomplex graph convolutions is then trained again. Our method outperforms many benchmarks, including a model with hypercomplex graph convolutions selected by DARTS, on node classification, graph classification, and text classification. This showcases the versatility of our approach, which can be effectively applied to both graph and text data.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1673-1687"},"PeriodicalIF":5.3,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hyper-Laplacian Regularized Concept Factorization in Low-Rank Tensor Space for Multi-View Clustering","authors":"Zixiao Yu;Lele Fu;Yongyong Chen;Zhiling Cai;Guoqing Chao","doi":"10.1109/TETCI.2024.3449920","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3449920","url":null,"abstract":"Tensor-oriented multi-view subspace clustering has achieved significant strides in assessing high-order correlations of multi-view data. Nevertheless, most existing investigations are hampered by two flaws: (1) self-representation-based tensor subspace learning usually induces high time and space complexity and is limited in perceiving the nonlinear local structure of the embedding space; (2) the tensor singular value decomposition model treats each singular value equally, without considering their diverse importance. To cope with these issues, we propose a hyper-Laplacian regularized concept factorization (HLRCF) in low-rank tensor space for multi-view clustering. Specifically, HLRCF adopts concept factorization to explore the latent cluster-wise representation of each view. Further, hypergraph Laplacian regularization endows the model with the capability to extract nonlinear local structures in the latent space. Considering that different tensor singular values associate structural information with unequal importance, we develop a self-weighted tensor Schatten <inline-formula><tex-math>$p$</tex-math></inline-formula>-norm to constrain the tensor comprising all cluster-wise representations. Notably, the smaller tensor size greatly decreases the time and space complexity of the low-rank optimization. Finally, experimental results on eight benchmark datasets show that HLRCF outperforms other multi-view methods.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1728-1742"},"PeriodicalIF":5.3,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ActiveSleepLearner: Less Annotation Budget for Better Large-Scale Sleep Staging","authors":"Qi Liu;Jie Wei;Thomas Penzel;Maarten De Vos;Yuan Zhang;Zhiyi Huang;Mikhail Poluektov;Yulan Zhu;Chenyu Li","doi":"10.1109/TETCI.2024.3446389","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3446389","url":null,"abstract":"Sleep staging traditionally requires massive time and expertise from clinicians. Various automated sleep staging methods have been developed to streamline this task; however, they commonly need extensive labeled clinical sleep data. To address this challenge, ActiveSleepLearner is proposed: a transfer learning framework that leverages active learning techniques to achieve weakly supervised sleep staging across various settings, devices, and populations. On one hand, a selection algorithm is introduced that identifies the most informative samples for manual labeling, considering factors such as sample representativeness, diversity, and complexity. On the other hand, during the fine-tuning phase with limited samples, model training efficiency is improved by introducing a novel joint loss function, implementing well-designed data augmentation techniques, and adopting layer-wise and multi-step learning-rate strategies. Our approach reduces the manual annotation budget required from clinicians, enabling them to label only a small portion of the sizeable unlabeled dataset while the system handles the rest. Across the four datasets evaluated, our method consistently outperforms baselines and other partial-label methods. Notably, even with 5% of the labels, ActiveSleepLearner reaches 97.5% of the accuracy achieved by the model explicitly trained on the full dataset. More importantly, ActiveSleepLearner fosters collaboration between humans and machines, reducing the clinician's burden by requiring them to label only a subset of epochs instead of all recordings.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1756-1765"},"PeriodicalIF":5.3,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SMEM: A Subspace Merging Based Evolutionary Method for High-Dimensional Feature Selection","authors":"Kaixuan Li;Shibo Jiang;Rui Zhang;Jianfeng Qiu;Lei Zhang;Lixia Yang;Fan Cheng","doi":"10.1109/TETCI.2024.3451695","DOIUrl":"https://doi.org/10.1109/TETCI.2024.3451695","url":null,"abstract":"In the past decade, evolutionary algorithms (EAs) have shown promising performance in solving the feature selection problem. Despite that, it is still quite challenging to design EAs for high-dimensional feature selection (HDFS), since the increasing number of features causes the search space of EAs to grow exponentially, which is known as the “curse of dimensionality”. To tackle this issue, a <bold>S</b>ubspace <bold>M</b>erging based <bold>E</b>volutionary <bold>M</b>ethod, termed SMEM, is suggested in this paper. In SMEM, to avoid directly optimizing the large search space of HDFS, the original feature space is first divided into several independent low-dimensional subspaces. In each subspace, a subpopulation is evolved to quickly obtain latent good feature subsets. Then, to avoid missing features, these low-dimensional subspaces are merged in pairs, and the search continues on the merged subspaces. During the evolution of each merged subspace, the good feature subsets obtained from the previous subspace pair are fully utilized. This subspace merging procedure repeats, and the performance of SMEM improves gradually, until all the subspaces are merged into one final space. This final space is the original feature space of HDFS, which ensures that all features in the data are considered. Experimental results on different high-dimensional datasets demonstrate the effectiveness and efficiency of the proposed SMEM compared with state-of-the-art methods.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1712-1727"},"PeriodicalIF":5.3,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}