{"title":"HGRL-S: Towards Heterogeneous Graph Representation Learning With Optimized Structures","authors":"Shanfeng Wang;Dong Wang;Xiaona Ruan;Xiaolong Fan;Maoguo Gong;He Zhang","doi":"10.1109/TETCI.2025.3543414","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543414","url":null,"abstract":"Heterogeneous Graph Neural Networks (HetGNN) have garnered significant attention and demonstrated success in tackling various tasks. However, most existing HetGNNs face challenges in effectively addressing unreliable heterogeneous graph structures and encounter semantic indistinguishability problems as their depth increases. To deal with these challenges, we introduce a novel heterogeneous graph representation learning framework with optimized structures (HGRL-S), which optimizes heterogeneous graph structures and utilizes a semantic aggregation mechanism to alleviate semantic indistinguishability while learning node embeddings. To address the heterogeneity of relations within heterogeneous graphs, the proposed algorithm employs a strategy of generating distinct relational subgraphs and incorporating them with node features to optimize structural learning. To resolve the issue of semantic indistinguishability, the proposed algorithm adopts a semantic aggregation mechanism to assign appropriate weights to different meta-paths, consequently enhancing the effectiveness of captured node features. This methodology enables the learning of distinguishable node embeddings by a deeper HetGNN model. Extensive experiments on the node classification task validate the promising performance of the proposed framework when compared with state-of-the-art methods.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2359-2370"},"PeriodicalIF":5.3,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VFL+: Low-Coupling Vertical Federated Learning With Privileged Information Paradigm","authors":"Wei Dai;Teng Cui;Tong Zhang;Badong Chen","doi":"10.1109/TETCI.2025.3543769","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543769","url":null,"abstract":"Vertical Federated Learning (VFL) enables the construction of models by combining clients with different features without compromising privacy. Existing VFL methods exhibit tightly coupled participant parameters, resulting in substantial interdependencies among clients during the prediction phase, which significantly hampers the model's usability. To tackle these challenges, this paper studies a VFL approach with low coupling of parameters between clients. Drawing inspiration from federated cooperation and teacher-supervised learning, we propose a low-coupling vertical federated learning with privileged information paradigm (VFL+), allowing participants to make autonomous predictions. Specifically, VFL+ treats information from other clients as privileged data during the training phase rather than the testing phase, thereby achieving independence in individual model predictions. Subsequently, this paper further investigates three typical scenarios of vertical cooperation and designs corresponding cooperative frameworks. Systematic experiments on real data sets demonstrate the effectiveness of the proposed method.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 5","pages":"3533-3547"},"PeriodicalIF":5.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145128536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting Anxiety via Machine Learning Algorithms: A Literature Review","authors":"M.-H. Tayarani-N.;Shamim Ibne Shahid","doi":"10.1109/TETCI.2025.3543307","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543307","url":null,"abstract":"Recent machine learning (ML) advances have opened up new possibilities for addressing various challenges. Given their ability to tackle complex problems, the use of ML algorithms in diagnosing mental health disorders has seen substantial growth in both the number and scope of studies. Anxiety, a major health concern in today's world, affects a significant portion of the population. Individuals with anxiety often exhibit distinct characteristics compared to those without the disorder. These differences can be observed in their outward appearance—such as voice, facial expressions, gestures, and movements—and in less visible factors like heart rate, blood test results, and brain imaging data. In this context, numerous studies have utilized ML algorithms to extract a diverse range of features from individuals with anxiety, aiming to build predictive models capable of accurately identifying those affected by the disorder. This paper performs a comprehensive literature review on the state-of-the-art studies that employ machine learning algorithms to identify anxiety. This paper aims to cover a wide range of studies and categorize them based on their methodologies and data types used.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 4","pages":"2634-2657"},"PeriodicalIF":5.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144687674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CVRSF-Net: Image Emotion Recognition by Combining Visual Relationship Features and Scene Features","authors":"Yutong Luo;Xinyue Zhong;Jialan Xie;Guangyuan Liu","doi":"10.1109/TETCI.2025.3543300","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543300","url":null,"abstract":"Image emotion recognition, which aims to analyze the emotional responses of people to various stimuli in images, has attracted substantial attention in recent years with the proliferation of social media. As human emotion is a highly complex and abstract cognitive process, simply extracting local or global features from an image is not sufficient for recognizing the emotion of an image. The psychologist Moshe proposed that visual objects are usually embedded in a scene with other related objects during human visual comprehension of images. Therefore, we propose a two-branch emotion-recognition network known as the combined visual relationship feature and scene feature network (CVRSF-Net). In the scene feature-extraction branch, a pretrained CLIP model is adopted to extract the visual features of images, with a feature channel weighting module to extract the scene features. In the visual relationship feature-extraction branch, a visual relationship detection model is used to extract the visual relationships in the images, and a semantic fusion module fuses the scenes and visual relationship features. Furthermore, we spatially weight the visual relationship features using class activation maps. Finally, the implicit relationships between different visual relationship features are obtained using a graph attention network, and a two-branch network loss function is designed to train the model. The experimental results showed that the recognition rates of the proposed network were 79.80%, 69.81%, and 36.72% for the FI-8, Emotion-6, and WEBEmo datasets, respectively. The proposed algorithm achieves state-of-the-art results compared to existing methods.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2321-2333"},"PeriodicalIF":5.3,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GAMR: Revolutionizing Multi-Objective Routing in SDN Networks With Dynamic Genetic Algorithms","authors":"Hai-Anh Tran;Cong-Son Duong;Trong-Duc Bui;Van Tong;Huynh Thi Thanh Binh","doi":"10.1109/TETCI.2025.3543836","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543836","url":null,"abstract":"The growing complexity of modern network systems has increased the need for efficient multi-objective routing (MOR) algorithms. However, existing MOR methods face significant challenges, particularly in terms of computation time, which becomes problematic in networks with short-lived tasks where rapid decision-making is essential. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) offers a promising approach to addressing these challenges. Nevertheless, directly applying NSGA-II in dynamic network environments, where states frequently change, is impractical. This paper presents GAMR, an enhanced non-dominated sorting Genetic Algorithm II-based dynamic multi-objective QoS routing algorithm, which leverages QoS metrics for its multi-objective function. Introducing novel initialization and crossover strategies, our approach efficiently identifies optimal solutions within a brief runtime. Implemented within a Software-defined Network controller for routing, GAMR outperforms existing multi-objective algorithms, exhibiting notable improvements in performance indicators. Specifically, enhancements range from 3.4% to 22.8% on the Hypervolume metric and from 33% to 86% on the Inverted Generational Distance metric. In terms of network metrics, experimental results demonstrate significant reductions in forwarding delay and packet loss rate to 41.25 ms and 3.9%, respectively, even under challenging network configurations with only 2 servers and up to 100 requests.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 4","pages":"3147-3161"},"PeriodicalIF":5.3,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144687692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual-Branch Semantic Enhancement Network Joint With Iterative Self-Matching Training Strategy for Semi-Supervised Semantic Segmentation","authors":"Feng Xiao;Ruyu Liu;Xu Cheng;Haoyu Zhang;Jianhua Zhang;Yaochu Jin","doi":"10.1109/TETCI.2025.3543272","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543272","url":null,"abstract":"With the rapid development of deep learning, supervised training methods have become increasingly sophisticated. There has been a growing trend towards semi-supervised and weakly supervised learning methods. This shift in focus is partly due to the challenges in obtaining large amounts of labeled data. The key to semi-supervised semantic segmentation is how to efficiently use a large amount of unlabeled data. A common practice is to use labeled data to generate pseudo labels for unlabeled data. However, the pseudo labels generated by these operations are of low quality, which severely interferes with the subsequent segmentation task. In this work, we propose to use the iterative self-matching strategy to enhance the self-training strategy, through which the quality of pseudo labels can be significantly improved. In practice, we split unlabeled data into two confidence types, i.e., reliable images and unreliable images, by an adaptive threshold. Using our iterative self-matching strategy, all reliable images are automatically added to the training dataset in each training iteration. At the same time, our algorithm employs an adaptive selection mechanism to filter out the highest-scoring pseudo labels of unreliable images, which are then used to further expand the training data. This iterative process enhances the reliability of the pseudo labels generated by the model. Based on this idea, we propose a novel semi-supervised semantic segmentation framework called SISS-Net. We conducted experiments on three public benchmark datasets: Pascal VOC 2012, COCO, and Cityscapes. The experimental results show that our method outperforms the supervised training method by 9.3%. In addition, we performed various joint ablation experiments to validate the effectiveness of our method.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2308-2320"},"PeriodicalIF":5.3,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AE-Net: Appearance-Enriched Neural Network With Foreground Enhancement for Person Re-Identification","authors":"Shangdong Zhu;Yunzhou Zhang;Yixiu Liu;Yu Feng;Sonya Coleman;Dermot Kerr","doi":"10.1109/TETCI.2025.3543775","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543775","url":null,"abstract":"Person re-identification (Re-ID) in environments subject to intensive appearance and background variations due to seasons, weather conditions, illumination and human factors is a challenging task. A wide variety of existing algorithms address this problem either for appearance changes or background clutter, but neglect to explore a powerful framework to consider solving both cases simultaneously. To overcome this limitation, this research introduces an effective appearance-enriched neural network (AE-Net) with foreground enhancement based on generative adversarial nets (GANs) and an attention mechanism to enrich the appearance of person images while suppressing the influence of the background. Specifically, a channel-grouped convolution and squeeze weighted (CGCSW) module is first proposed to extract the powerful feature representation of individuals. Secondly, a foreground-enhanced and background-suppressed (FEBS) module is proposed to enhance the foreground of individual samples while weakening the impact of the background. Thirdly, a stage-wise consistency loss is presented to enable our model to maintain consistent foreground-enhanced and background-suppressed stages. Finally, this study evaluates the proposed method and compares it with state-of-the-art approaches on three public datasets. The experimental results demonstrate the effectiveness and improvements achieved by using the presented architecture.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 5","pages":"3518-3532"},"PeriodicalIF":5.3,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145128518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Harnessing the Power of Knowledge Graphs to Improve Causal Discovery","authors":"Taiyu Ban;Xiangyu Wang;Lyuzhou Chen;Derui Lyu;Xi Fan;Huanhuan Chen","doi":"10.1109/TETCI.2025.3540429","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3540429","url":null,"abstract":"Reconstructing the structure of causal graphical models from observational data is crucial for identifying causal mechanisms in scientific research. However, real-world noise and hidden factors can make it difficult to detect true underlying causal relationships. Current methods mainly rely on extensive expert analysis to correct wrongly identified connections, guiding structure learning toward more accurate causal interactions. This reliance on expert input demands significant manual effort and is risky due to potential erroneous judgments when handling complex causal interactions. To address these issues, this paper introduces a new, expert-free method to improve causal discovery. By utilizing the extensive resources of static knowledge bases across various fields, specifically knowledge graphs (KGs), we extract causal information related to the variables of interest and use these as prior constraints in the structure learning process. Unlike the detailed constraints provided by expert analysis, the information from KGs is more general, indicating the presence of certain paths without specifying exact connections or their lengths. We incorporate these constraints in a soft way to reduce potential noise in the KG-derived priors, ensuring that our method remains reliable. Moreover, we provide interfaces for various mainstream causal discovery methods to enhance the utility of our approach. For empirical validation, we apply our approach across multiple areas of causal discovery. The results show that our method effectively enhances data-based causal discovery and demonstrates its promising applications.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2256-2268"},"PeriodicalIF":5.3,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Hierarchical Generative Model for Semi-Supervised Semantic Segmentation of Biomedical Images","authors":"Lu Chai;Zidong Wang;Yuheng Shao;Qinyuan Liu","doi":"10.1109/TETCI.2025.3540418","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3540418","url":null,"abstract":"In biomedical vision research, a significant challenge is the limited availability of pixel-wise labeled data. Data augmentation has been identified as a solution to this issue through generating labeled dummy data. While enhancing model efficacy, semi-supervised learning methodologies have emerged as a promising alternative that allows models to train on a mix of limited labeled and larger unlabeled data sets, potentially marking a significant advancement in biomedical vision research. Drawing from the semi-supervised learning strategy, in this paper, a novel medical image segmentation model is presented that features a hierarchical architecture with an attention mechanism. This model disentangles the synthesis process of biomedical images by employing a tail two-branch generator for semantic mask synthesis, thereby excelling in handling medical images with imbalanced class characteristics. During inference, the k-means clustering algorithm processes feature maps from the generator by using the clustering outcome as the segmentation mask. Experimental results show that this approach preserves biomedical image details more accurately than synthesized semantic masks. Experiments on various datasets, including those for vestibular schwannoma, kidney, and skin cancer, demonstrate the proposed method's superiority over other generative-adversarial-network-based and semi-supervised segmentation methods in both distribution fitting and semantic segmentation performance.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2219-2231"},"PeriodicalIF":5.3,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-Scale Fuzzy Holistic Attention Network for Diabetic Retinopathy Grading From Fundus Images","authors":"Zhijie Lin;Zhaoshui He;Xu Wang;Wenqing Su;Ji Tan;Yamei Deng;Shengli Xie","doi":"10.1109/TETCI.2025.3543361","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543361","url":null,"abstract":"Diabetic Retinopathy (DR) is one of the leading causes of visual impairment and blindness in diabetic patients worldwide. Accurate Computer-Aided Diagnosis (CAD) systems can aid in the early diagnosis and treatment of DR patients to reduce the risk of vision loss, but it remains challenging due to the following reasons: 1) the relatively low contrast and ambiguous boundaries between pathological lesions and normal retinal regions, and 2) the considerable diversity in lesion size and appearance. In this paper, a Cross-Scale Fuzzy Holistic Attention Network (CSFHANet) is proposed for DR grading using fundus images, and it consists of two main components: Fuzzy-Enhanced Holistic Attention (FEHA) and Fuzzy Learning-based Cross-Scale Fusion (FLCSF). FEHA is developed to adaptively recalibrate the importance of feature elements by assigning fuzzy weights across both channel and spatial domains, which can enhance the model's ability to learn the features of lesion regions while reducing the interference from irrelevant information in normal retinal regions. Then, the FLCSF module is designed to eliminate the uncertainty in fused multi-scale features derived from different branches by utilizing fuzzy membership functions, producing a more comprehensive and refined feature representation from complex DR lesions. Extensive experiments on the Messidor-2 and DDR datasets demonstrate that the proposed CSFHANet exhibits superior performance compared to state-of-the-art methods.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2164-2178"},"PeriodicalIF":5.3,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}