{"title":"Learning Uniform Latent Representation via Alternating Adversarial Network for Multi-View Clustering","authors":"Yue Zhang;Weitian Huang;Xiaoxue Zhang;Sirui Yang;Fa Zhang;Xin Gao;Hongmin Cai","doi":"10.1109/TETCI.2025.3540426","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3540426","url":null,"abstract":"Multi-view clustering aims at exploiting complementary information contained in different views to partition samples into distinct categories. Popular approaches either directly integrate features from different views or capture the common portion between views without closing the heterogeneity gap. Such rigid schemes do not consider possible misalignment among different views and thus fail to learn a consistent yet comprehensive representation, leading to inferior clustering performance. To tackle this drawback, we introduce an alternating adversarial learning strategy to drive different views into the same semantic space. We first present a Linear Alternating Adversarial Multi-view Clustering (Linear-A<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>MC) model to align views in linear embedding spaces. To exploit the feature-extraction power of deep networks, we further build a Deep Alternating Adversarial Multi-view Clustering (Deep-A<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>MC) network to realize non-linear transformations and feature pruning among different views simultaneously. Specifically, Deep-A<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>MC leverages alternating adversarial learning to first align low-dimensional embedding distributions, followed by a mixture of latent representations synthesized through attention learning for multiple views. Finally, a self-supervised clustering loss is jointly optimized in the unified network to guide the learning of discriminative representations that yield compact clusters. 
Extensive experiments on six real-world datasets with widely varying sample sizes demonstrate that Deep-A<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>MC achieves superior clustering performance compared with twelve baseline methods.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2244-2255"},"PeriodicalIF":5.3,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Feature Transfer for Light Field Super-Resolution With Hybrid Lenses","authors":"Gaosheng Liu;Huanjing Yue;Xin Luo;Jingyu Yang","doi":"10.1109/TETCI.2025.3542130","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3542130","url":null,"abstract":"Reconstructing high-resolution (HR) light field (LF) images has shown considerable potential using hybrid lenses, a configuration comprising a central HR sensor and multiple side low-resolution (LR) sensors. Existing methods for super-resolving hybrid-lens LF images typically rely on patch matching or cross-resolution fusion with disparity-based rendering to leverage the high spatial sampling rate of the central view. However, the disparity-resolution gap between the HR central view and the LR side views poses a challenge for local high-frequency transfer. To address this, we introduce a novel framework with an adaptive feature transfer strategy. Specifically, we propose dynamically sampling and aggregating pixels from the HR central feature to effectively transfer high-frequency information to each LR view. The proposed strategy naturally adapts to different disparities and image structures, facilitating information propagation. Additionally, to refine the intermediate LF feature and promote angular consistency, we introduce a spatial-angular cross-attention block that enhances domain-specific features with weights generated from cross-domain features. Extensive experimental results demonstrate the superiority of our proposed method over state-of-the-art approaches on both simulated and real-world datasets. 
The performance gain has significant potential to facilitate downstream LF-based applications.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2284-2295"},"PeriodicalIF":5.3,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Memetic Differential Evolution With Adaptive Niching Selection and Diversity-Driven Strategies for Multimodal Optimization","authors":"Yufeng Feng;Weiguo Sheng;Zidong Wang;Gang Xiao;Qi Li;Li Li;Zuling Wang","doi":"10.1109/TETCI.2025.3529903","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3529903","url":null,"abstract":"Simultaneously identifying a set of optimal solutions within the landscape of a multimodal optimization problem presents a significant challenge. In this work, a differential evolution algorithm with adaptive niching selection, diversity-driven exploration, and adaptive local search strategies is proposed to tackle this challenge. In the proposed method, an adaptive niching selection strategy is devised to dynamically select appropriate niching methods from a diverse pool to evolve the population. The pool encompasses niching methods with varying search properties and is dynamically updated during evolution. Further, to enhance exploration, a diversity-driven exploration strategy, which leverages redundant individuals from convergence regions to explore the solution space, is introduced. Additionally, an adaptive local search operation, in which the probability of applying local search and the corresponding sampling area are dynamically determined based on the potential of solutions as well as the stage of evolution, is developed to fine-tune promising solutions. The effectiveness of the proposed method has been demonstrated on 20 test functions from the CEC2013 benchmark suite. 
Experimental results confirm the effectiveness of our method, demonstrating its superiority over related algorithms.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1322-1339"},"PeriodicalIF":5.3,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143716414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
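The niching idea the abstract describes can be illustrated with a minimal sketch: one generation of DE/rand/1/bin with crowding-based replacement, a classic niching method of the kind such a pool might contain. This is our own illustrative code under those assumptions, not the paper's algorithm; function and parameter names are ours.

```python
import numpy as np

def de_crowding_step(pop, fitness, F=0.5, CR=0.9, rng=None):
    """One generation of DE/rand/1/bin with crowding-based niching:
    each trial vector competes against its nearest neighbour in the
    population rather than its parent, which preserves multiple niches."""
    rng = np.random.default_rng(rng)
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        # three distinct donors, none equal to the target index i
        a, b, c = rng.choice([k for k in range(n) if k != i], size=3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True  # guarantee at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        # crowding selection: the trial replaces its nearest neighbour,
        # but only if it is better (minimization)
        j = int(np.argmin(np.linalg.norm(new_pop - trial, axis=1)))
        if fitness(trial) < fitness(new_pop[j]):
            new_pop[j] = trial
    return new_pop
```

Because replacement is restricted to the nearest neighbour, distinct basins of a multimodal function can keep their own representatives across generations.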
{"title":"Solving Multiobjective Combinatorial Optimization via Learning to Improve Method","authors":"Te Ye;Zizhen Zhang;Qingfu Zhang;Jinbiao Chen;Jiahai Wang","doi":"10.1109/TETCI.2025.3540424","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3540424","url":null,"abstract":"Recently, neural combinatorial optimization (NCO) methods have been prevailing for solving multiobjective combinatorial optimization problems (MOCOPs). Most NCO methods are based on the “Learning to Construct” (L2C) paradigm, where the trained model(s) can directly generate a set of approximate Pareto optimal solutions. However, these methods still suffer from insufficient proximity and poor diversity towards the true Pareto front. In this paper, following the “Learning to Improve” (L2I) paradigm, we propose weight-related policy network (WRPN), a learning-based improvement method for solving MOCOPs. WRPN is incorporated into multiobjective evolutionary algorithm (MOEA) frameworks to effectively guide the search direction. A shared baseline for proximal policy optimization is presented to reduce variance in model training. A quality enhancement mechanism is designed to further refine the Pareto set during model inference. Computational experiments conducted on two classic MOCOPs, i.e., multiobjective traveling salesman problem and multiobjective vehicle routing problem, indicate that our method achieves remarkable results. 
Notably, our WRPN module can be easily integrated into various MOEA frameworks such as NSGA-II, MOEA/D and MOGLS, providing versatility and applicability across different problem domains.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2122-2136"},"PeriodicalIF":5.3,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
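As context for how an improvement policy plugs into a decomposition-based framework such as MOEA/D: each weight vector defines one scalar subproblem over the objectives, which a learned improvement operator can then be asked to minimize. A minimal bi-objective sketch (our own illustrative code, not the paper's WRPN):

```python
import numpy as np

def tchebycheff(objs, weight, ideal):
    """Weighted Tchebycheff scalarization: turns an objective vector into a
    single scalar cost relative to the ideal point, one subproblem per
    weight vector."""
    return float(np.max(np.asarray(weight) * np.abs(np.asarray(objs, float) - np.asarray(ideal, float))))

def uniform_weights(n):
    """n evenly spread weight vectors for a bi-objective problem."""
    w = np.linspace(0.0, 1.0, n)
    return np.stack([w, 1.0 - w], axis=1)
```

Sweeping the weight vectors over a population of solutions approximates different regions of the Pareto front with the same scalar machinery.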
{"title":"MTMD: Multi-Scale Temporal Memory Learning and Efficient Debiasing Framework for Stock Trend Forecasting","authors":"Mingjie Wang;Juanxi Tian;Mingze Zhang;Jianxiong Guo;Weijia Jia","doi":"10.1109/TETCI.2025.3542107","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3542107","url":null,"abstract":"The endeavor of stock trend forecasting is principally focused on predicting the future trajectory of the stock market, utilizing either manual or technical methodologies to optimize profitability. Recent advancements in machine learning technologies have showcased their efficacy in discerning authentic profit signals within the realm of stock trend forecasting, predominantly employing temporal data derived from historical stock price patterns. Nevertheless, the inherently volatile and dynamic characteristics of the stock market render the learning and capture of multi-scale temporal dependencies and stable trading opportunities a formidable challenge. This predicament is primarily attributed to the difficulty in distinguishing real profit signal patterns amidst a plethora of mixed, noisy data. In response to these complexities, we propose a Multi-Scale Temporal Memory Learning and Efficient Debiasing (MTMD) model. This innovative approach encompasses the creation of a learnable embedding coupled with external attention, serving as a memory module through self-similarity. It aims to mitigate noise interference and bolster temporal consistency within the model. The MTMD model adeptly amalgamates comprehensive local data at each timestamp while concurrently focusing on salient historical patterns on a global scale. Furthermore, the incorporation of a graph network, tailored to assimilate global and local information, facilitates the adaptive fusion of heterogeneous multi-scale data. 
Rigorous ablation studies and experimental evaluations affirm that the MTMD model surpasses contemporary state-of-the-art methodologies by a substantial margin in benchmark datasets.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2151-2163"},"PeriodicalIF":5.3,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
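The memory-via-external-attention idea above can be sketched as attention over a shared bank of learnable prototypes rather than over the input sequence itself. A simplified numpy illustration; the shapes and names are our assumptions, not MTMD's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def external_attention(queries, memory):
    """Each timestamp's feature (a query row) attends over a shared memory
    bank of prototype patterns; the output is a convex combination of
    memory slots, which suppresses per-sample noise."""
    scores = queries @ memory.T / np.sqrt(memory.shape[1])  # (n, slots)
    return softmax(scores) @ memory                          # (n, d)
```

In training, the memory bank would be a learnable parameter updated by gradient descent, so recurring profit-signal patterns accumulate in the prototypes.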
{"title":"Broad Graph Attention Network With Multiple Kernel Mechanism","authors":"Qingwang Wang;Pengcheng Jin;Hao Xiong;Yuhang Wu;Xu Lin;Tao Shen;Jiangbo Huang;Jun Cheng;Yanfeng Gu","doi":"10.1109/TETCI.2025.3542127","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3542127","url":null,"abstract":"Graph neural networks (GNNs) are highly effective models for tasks involving non-Euclidean data. To improve their performance, researchers have explored strategies to increase the depth of GNN structures, as in the case of convolutional neural network (CNN)-based deep networks. However, GNNs relying on information aggregation mechanisms typically face limitations in achieving superior representation performance because of deep feature oversmoothing. Inspired by the broad learning system, in this study, we attempt to avoid the feature oversmoothing issue by expanding the width of GNNs. We propose a broad graph attention network framework with a multikernel mechanism (BGAT-MK). In particular, we propose the construction of a broad GNN using multikernel mapping to generate several reproducing kernel Hilbert spaces (RKHSs), where nodes can wander through different kernel spaces and generate representations. Furthermore, we construct a broader network by aggregating representations in different RKHSs and fusing adaptive weights to aggregate the original and enhanced mapped representations. 
The efficacy of BGAT-MK is validated through experiments on conventional node classification and light detection and ranging point cloud semantic segmentation tasks, demonstrating its superior performance.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2296-2307"},"PeriodicalIF":5.3,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Objective Integrated Energy-Efficient Scheduling of Distributed Flexible Job Shop and Vehicle Routing by Knowledge-and-Learning-Based Hyper-Heuristics","authors":"YaPing Fu;ZhengPei Zhang;Min Huang;XiWang Guo;Liang Qi","doi":"10.1109/TETCI.2025.3540422","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3540422","url":null,"abstract":"Currently, supply chain operations face enormous challenges due to complex manufacturing processes and distribution activities. This work proposes a multi-objective integrated energy-efficient scheduling and routing method for a distributed flexible job shop with multiple vehicles to minimize job completion time, total energy consumption, and factory workload. First, a mixed-integer programming model is formulated. Second, a knowledge-and-learning-based hyper-heuristic algorithm is developed to solve the model. It innovatively incorporates a Q-learning method to choose a search method from a pool containing a genetic algorithm, an artificial bee colony optimizer, a brain storm optimizer, and the Jaya algorithm. Furthermore, it embeds problem-specific knowledge into the devised method to further refine obtained solutions. Finally, the formulated model and the proposed algorithm's performance are verified using the exact solver CPLEX. The algorithm is further compared with three state-of-the-art optimization approaches. 
The results confirm its superiority over them in solving the studied problem.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2137-2150"},"PeriodicalIF":5.3,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
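The Q-learning operator selection described above can be sketched with a small epsilon-greedy Q-table over a pool of search methods. This is a generic hyper-heuristic skeleton under our own assumptions (the state definition, reward, and names are illustrative, not the paper's):

```python
import random
from collections import defaultdict

class QOperatorSelector:
    """Epsilon-greedy Q-learning over a pool of search operators: the
    'state' here is simply the last operator applied, and the reward is
    the solution improvement the chosen operator produced."""

    def __init__(self, operators, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
        self.ops = list(operators)
        self.q = defaultdict(float)  # (state, operator) -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = random.Random(seed)

    def choose(self, state):
        """Pick an operator: explore with probability eps, else exploit."""
        if self.rng.random() < self.eps:
            return self.rng.choice(self.ops)
        return max(self.ops, key=lambda op: self.q[(state, op)])

    def update(self, state, op, reward, next_state):
        """Standard one-step Q-learning temporal-difference update."""
        best_next = max(self.q[(next_state, o)] for o in self.ops)
        td = reward + self.gamma * best_next - self.q[(state, op)]
        self.q[(state, op)] += self.alpha * td
```

In an outer loop one would call `choose`, run the selected metaheuristic (GA, ABC, BSO, or Jaya) for a few iterations, measure the improvement, and feed it back through `update`.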
{"title":"Binary Classification From $M$-Tuple Similarity-Confidence Data","authors":"Junpeng Li;Jiahe Qin;Changchun Hua;Yana Yang","doi":"10.1109/TETCI.2025.3537938","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3537938","url":null,"abstract":"A recent advancement in weakly-supervised learning utilizes pairwise similarity-confidence (Sconf) data, allowing the training of binary classifiers using unlabeled data pairs with confidence scores indicating similarity. However, extending this approach to handle high-order tuple data (e.g., triplets, quadruplets, quintuplets) with similarity-confidence scores presents significant challenges. To address these issues, this paper introduces <italic>M-tuple similarity-confidence (Msconf) learning</i>, a novel framework that extends <italic>Sconf learning</i> to <inline-formula><tex-math>$M$</tex-math></inline-formula>-tuples of varying sizes. The proposed method includes a detailed process for generating <inline-formula><tex-math>$M$</tex-math></inline-formula>-tuple similarity-confidence data and deriving an unbiased risk estimator to train classifiers effectively. Additionally, risk correction models are implemented to reduce potential overfitting, and a theoretical generalization bound is established. 
Extensive experiments demonstrate the practical effectiveness and robustness of the proposed <italic>Msconf learning</i> framework.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1418-1427"},"PeriodicalIF":5.3,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimized Leader-Follower Consensus Control of Multi-QUAV Attitude System Using Reinforcement Learning and Backstepping","authors":"Guoxing Wen;Yanfen Song;Zijun Li;Bin Li","doi":"10.1109/TETCI.2025.3537943","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3537943","url":null,"abstract":"This work explores an optimized leader-follower attitude consensus scheme for the multi-quadrotor unmanned aerial vehicle (QUAV) system. Since the QUAV attitude dynamics are modeled by a second-order nonlinear differential equation, the optimized backstepping (OB) technique is well suited to this control design. To derive the optimized leader-follower attitude consensus control, critic-actor reinforcement learning (RL) is performed in the final backstepping step. Unlike the attitude control of a single QUAV, the multi-QUAV case comprises multiple intercommunicating QUAV attitude subsystems, so its control design is more complex and thorny. Moreover, traditional RL-based optimizing controls derive the critic or actor updating law from the negative gradient of the square of the approximated Hamilton–Jacobi–Bellman (HJB) equation, which makes these algorithms highly complex and difficult to apply to multi-QUAV attitude systems. In contrast, the proposed optimized scheme derives the RL training laws from a simple positive function equivalent to the HJB equation, which markedly simplifies the algorithm and enables its smooth application to the multi-QUAV attitude system. 
Finally, theoretical analysis and simulations verify the feasibility of the optimized consensus control.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1469-1479"},"PeriodicalIF":5.3,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Tunable Framework for Joint Trade-Off Between Accuracy and Multi-Norm Robustness","authors":"Haonan Zheng;Xinyang Deng;Wen Jiang","doi":"10.1109/TETCI.2025.3540419","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3540419","url":null,"abstract":"Adversarial training enhances the robustness of deep networks at the cost of reduced natural accuracy. Moreover, fortified networks struggle to defend against both sparse and dense perturbations simultaneously. Thus, achieving a better trade-off between natural accuracy and robustness against both types of noise remains an open challenge. Many proposed approaches explore solutions based on network architecture optimization, but in most cases the additional parameters introduced are static: once network training is completed, the performance remains unchanged, and retraining is required to explore other potential trade-offs. We propose two dynamic auxiliary modules, CBNI and CCNI, which can fine-tune convolutional layers and BN layers, respectively, during the inference phase, so that the trained network can still adjust its emphasis on natural examples, sparse perturbations, or dense perturbations. This means our network can achieve an appropriate balance to adapt to the operational environment in situ, without retraining. Furthermore, fully exploring the limits of natural capability and robustness is a complex and time-consuming problem. Our method can serve as an efficient research tool to examine the achievable trade-offs with just a single training run. It is worth mentioning that CCNI is a linear adjustment and CBNI does not directly participate in the inference process; therefore, neither introduces redundant parameters or inference latency. 
Experiments indicate that our network can indeed achieve a complex trade-off between accuracy and adversarial robustness, producing performance that is comparable to or even better than existing methods.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 2","pages":"1490-1501"},"PeriodicalIF":5.3,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
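A linear inference-time adjustment of BN affine parameters, in the spirit of what the abstract describes for CCNI, can be sketched as follows. The exact parameterization (a single scalar knob blending an accuracy-oriented setting with a robustness-oriented offset) is our assumption, not the paper's formulation:

```python
import numpy as np

def adjusted_bn(x, mean, var, gamma, beta, d_gamma, d_beta, t, eps=1e-5):
    """Batch normalization whose affine parameters are shifted linearly by
    a trade-off knob t in [0, 1]: t = 0 keeps the accuracy-oriented
    setting (gamma, beta); t = 1 applies the full robustness-oriented
    offset (d_gamma, d_beta). No retraining is needed to move t."""
    x_hat = (x - mean) / np.sqrt(var + eps)          # standard BN normalization
    return (gamma + t * d_gamma) * x_hat + (beta + t * d_beta)
```

Because the adjustment is linear in `t`, sweeping the knob traces out a one-parameter family of accuracy/robustness operating points from a single trained network.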