{"title":"HGRL-S: Towards Heterogeneous Graph Representation Learning With Optimized Structures","authors":"Shanfeng Wang;Dong Wang;Xiaona Ruan;Xiaolong Fan;Maoguo Gong;He Zhang","doi":"10.1109/TETCI.2025.3543414","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543414","url":null,"abstract":"Heterogeneous Graph Neural Networks (HetGNN) have garnered significant attention and demonstrated success in tackling various tasks. However, most existing HetGNNs face challenges in effectively addressing unreliable heterogeneous graph structures and encounter semantic indistinguishability problems as their depth increases. In an effort to deal with these challenges, we introduce a novel heterogeneous graph representation learning with optimized structures to optimize heterogeneous graph structures and utilize semantic aggregation mechanism to alleviate semantic indistinguishability while learning node embeddings. To address the heterogeneity of relations within heterogeneous graphs, the proposed algorithm employs a strategy of generating distinct relational subgraphs and incorporating them with node features to optimize structural learning. To resolve the issue of semantic indistinguishability, the proposed algorithm adopts a semantic aggregation mechanism to assign appropriate weights to different meta-paths, consequently enhancing the effectiveness of captured node features. This methodology enables the learning of distinguishable node embeddings by a deeper HetGNN model. Extensive experiments on the node classification task validate the promising performance of the proposed framework when compared with state-of-the-art methods.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2359-2370"},"PeriodicalIF":5.3,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CVRSF-Net: Image Emotion Recognition by Combining Visual Relationship Features and Scene Features","authors":"Yutong Luo;Xinyue Zhong;Jialan Xie;Guangyuan Liu","doi":"10.1109/TETCI.2025.3543300","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543300","url":null,"abstract":"Image emotion recognition, which aims to analyze the emotional responses of people to various stimuli in images, has attracted substantial attention in recent years with the proliferation of social media. As human emotion is a highly complex and abstract cognitive process, simply extracting local or global features from an image is not sufficient for recognizing the emotion of an image. The psychologist Moshe proposed that visual objects are usually embedded in a scene with other related objects during human visual comprehension of images. Therefore, we propose a two-branch emotion-recognition network known as the combined visual relationship feature and scene feature network (CVRSF-Net). In the scene feature-extraction branch, a pretrained CLIP model is adopted to extract the visual features of images, with a feature channel weighting module to extract the scene features. In the visual relationship feature-extraction branch, a visual relationship detection model is used to extract the visual relationships in the images, and a semantic fusion module fuses the scenes and visual relationship features. Furthermore, we spatially weight the visual relationship features using class activation maps. Finally, the implicit relationships between different visual relationship features are obtained using a graph attention network, and a two-branch network loss function is designed to train the model. The experimental results showed that the recognition rates of the proposed network were 79.80%, 69.81%, and 36.72% for the FI-8, Emotion-6, and WEBEmo datasets, respectively. The proposed algorithm achieves state-of-the-art results compared to existing methods.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2321-2333"},"PeriodicalIF":5.3,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual-Branch Semantic Enhancement Network Joint With Iterative Self-Matching Training Strategy for Semi-Supervised Semantic Segmentation","authors":"Feng Xiao;Ruyu Liu;Xu Cheng;Haoyu Zhang;Jianhua Zhang;Yaochu Jin","doi":"10.1109/TETCI.2025.3543272","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543272","url":null,"abstract":"With the rapid development of deep learning, supervised training methods have become increasingly sophisticated. There has been a growing trend towards semi-supervised and weakly supervised learning methods. This shift in focus is partly due to the challenges in obtaining large amounts of labeled data. The key to semi-supervised semantic segmentation is how to efficiently use a large amount of unlabeled data. A common practice is to use labeled data to generate pseudo labels for unlabeled data. However, the pseudo labels generated by these operations are of low quality, which severely interferes with the subsequent segmentation task. In this work, we propose to use the iterative self-matching strategy to enhance the self-training strategy, through which the quality of pseudo labels can be significantly improved. In practice, we split unlabeled data into two confidence types, i.e., reliable images and unreliable images, by an adaptive threshold. Using our iterative self-matching strategy, all reliable images are automatically added to the training dataset in each training iteration. At the same time, our algorithm employs an adaptive selection mechanism to filter out the highest-scoring pseudo labels of unreliable images, which are then used to further expand the training data. This iterative process enhances the reliability of the pseudo labels generated by the model. Based on this idea, we propose a novel semi-supervised semantic segmentation framework called SISS-Net. We conducted experiments on three public benchmark datasets: Pascal VOC 2012, COCO, and Cityscapes. The experimental results show that our method outperforms the supervised training method by 9.3%. In addition, we performed various joint ablation experiments to validate the effectiveness of our method.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2308-2320"},"PeriodicalIF":5.3,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Harnessing the Power of Knowledge Graphs to Improve Causal Discovery","authors":"Taiyu Ban;Xiangyu Wang;Lyuzhou Chen;Derui Lyu;Xi Fan;Huanhuan Chen","doi":"10.1109/TETCI.2025.3540429","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3540429","url":null,"abstract":"Reconstructing the structure of causal graphical models from observational data is crucial for identifying causal mechanisms in scientific research. However, real-world noise and hidden factors can make it difficult to detect true underlying causal relationships. Current methods mainly rely on extensive expert analysis to correct wrongly identified connections, guiding structure learning toward more accurate causal interactions. This reliance on expert input demands significant manual effort and is risky due to potential erroneous judgments when handling complex causal interactions. To address these issues, this paper introduces a new, expert-free method to improve causal discovery. By utilizing the extensive resources of static knowledge bases across various fields, specifically knowledge graphs (KGs), we extract causal information related to the variables of interest and use these as prior constraints in the structure learning process. Unlike the detailed constraints provided by expert analysis, the information from KGs is more general, indicating the presence of certain paths without specifying exact connections or their lengths. We incorporate these constraints in a soft way to reduce potential noise in the KG-derived priors, ensuring that our method remains reliable. Moreover, we provide interfaces for various mainstream causal discovery methods to enhance the utility of our approach. For empirical validation, we apply our approach across multiple areas of causal discovery. The results show that our method effectively enhances data-based causal discovery and demonstrates its promising applications.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2256-2268"},"PeriodicalIF":5.3,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Hierarchical Generative Model for Semi-Supervised Semantic Segmentation of Biomedical Images","authors":"Lu Chai;Zidong Wang;Yuheng Shao;Qinyuan Liu","doi":"10.1109/TETCI.2025.3540418","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3540418","url":null,"abstract":"In biomedical vision research, a significant challenge is the limited availability of pixel-wise labeled data. Data augmentation has been identified as a solution to this issue through generating labeled dummy data. While enhancing model efficacy, semi-supervised learning methodologies have emerged as a promising alternative that allows models to train on a mix of limited labeled and larger unlabeled data sets, potentially marking a significant advancement in biomedical vision research. Drawing from the semi-supervised learning strategy, in this paper, a novel medical image segmentation model is presented that features a hierarchical architecture with an attention mechanism. This model disentangles the synthesis process of biomedical images by employing a tail two-branch generator for semantic mask synthesis, thereby excelling in handling medical images with imbalanced class characteristics. During inference, the k-means clustering algorithm processes feature maps from the generator by using the clustering outcome as the segmentation mask. Experimental results show that this approach preserves biomedical image details more accurately than synthesized semantic masks. Experiments on various datasets, including those for vestibular schwannoma, kidney, and skin cancer, demonstrate the proposed method's superiority over other generative-adversarial-network-based and semi-supervised segmentation methods in both distribution fitting and semantic segmentation performance.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2219-2231"},"PeriodicalIF":5.3,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-Scale Fuzzy Holistic Attention Network for Diabetic Retinopathy Grading From Fundus Images","authors":"Zhijie Lin;Zhaoshui He;Xu Wang;Wenqing Su;Ji Tan;Yamei Deng;Shengli Xie","doi":"10.1109/TETCI.2025.3543361","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543361","url":null,"abstract":"Diabetic Retinopathy (DR) is one of the leading causes of visual impairment and blindness in diabetic patients worldwide. Accurate Computer-Aided Diagnosis (CAD) systems can aid in the early diagnosis and treatment of DR patients to reduce the risk of vision loss, but it remains challenging due to the following reasons: 1) the relatively low contrast and ambiguous boundaries between pathological lesions and normal retinal regions, and 2) the considerable diversity in lesion size and appearance. In this paper, a Cross-Scale Fuzzy Holistic Attention Network (CSFHANet) is proposed for DR grading using fundus images, and it consists of two main components: Fuzzy-Enhanced Holistic Attention (FEHA) and Fuzzy Learning-based Cross-Scale Fusion (FLCSF). FEHA is developed to adaptively recalibrate the importance of feature elements by assigning fuzzy weights across both channel and spatial domains, which can enhance the model's ability to learn the features of lesion regions while reducing the interference from irrelevant information in normal retinal regions. Then, the FLCSF module is designed to eliminate the uncertainty in fused multi-scale features derived from different branches by utilizing fuzzy membership functions, producing a more comprehensive and refined feature representation from complex DR lesions. Extensive experiments on the Messidor-2 and DDR datasets demonstrate that the proposed CSFHANet exhibits superior performance compared to state-of-the-art methods.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2164-2178"},"PeriodicalIF":5.3,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generative Network Correction to Promote Incremental Learning","authors":"Justin Leo;Jugal Kalita","doi":"10.1109/TETCI.2025.3543370","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543370","url":null,"abstract":"Neural networks are often designed for closed environments that are not open to acquisition of new knowledge. Incremental learning techniques allow neural networks to adapt to changing environments, but these methods often encounter challenges causing models to suffer from low classification accuracies. The main problem faced is catastrophic forgetting and this problem is more harmful when using incremental strategies compared to regular tasks. Some known causes of catastrophic forgetting are weight drift and inter-class confusion; these problems cause the network to erroneously fuse trained classes or to forget a learned class. This paper addresses these issues by focusing on data pre-processing and using network feedback corrections for incremental learning. Data pre-processing is important as the quality of the training data used affects the network's ability to maintain continuous class discrimination. This approach uses a generative model to modify the data input for the incremental model. Network feedback corrections would allow the network to adapt to newly found classes and scale based on network need. With combination of generative data pre-processing and network feedback, this paper proposes an approach for efficient long-term incremental learning. The results obtained are compared with similar state-of-the-art algorithms and show high incremental accuracy levels.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2334-2343"},"PeriodicalIF":5.3,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Cross-Domain Recommendation Model Based on Asymmetric Vertical Federated Learning and Heterogeneous Representation","authors":"Wanjing Zhao;Yunpeng Xiao;Tun Li;Rong Wang;Qian Li;Guoyin Wang","doi":"10.1109/TETCI.2025.3543313","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3543313","url":null,"abstract":"Cross-domain recommendation meets the personalized needs of users by integrating user preference features from different fields. However, the current cross-domain recommendation algorithm needs to be further strengthened in terms of privacy protection. This paper proposes a cross-domain recommendation model based on asymmetric vertical federated learning and heterogeneous representation. This model can improve the accuracy and diversity of recommendations under the premise of privacy protection. Firstly, we propose a privacy set intersection model based on data augmentation. This model improves the data imbalance among participants by introducing obfuscation sets. It can conceal the true data volumes of each party, thereby protecting the sensitive information of weaker parties. Secondly, we propose a heterogeneous representation method based on a walking strategy incorporating interaction timing. This method combines users' recent interests to generate node sequences that reflect the characteristics of user preferences. Then we use the Skip-Gram model to represent the node sequence in a low-dimensional embedding. Finally, we propose a cross-domain recommendation model based on vertical federated learning. This model uses the federated factorization machine to complete the interest prediction and protect the privacy data security of each domain. Experiments show that on the real data set, the model can further guarantee the data security of each participant in the asymmetric federated learning. It can also improve the recommendation accuracy on the target domain.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2344-2358"},"PeriodicalIF":5.3,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling of Spiking Neural Network With Optimal Hidden Layer via Spatiotemporal Orthogonal Encoding for Patterns Recognition","authors":"Zenan Huang;Yinghui Chang;Weikang Wu;Chenhui Zhao;Hongyan Luo;Shan He;Donghui Guo","doi":"10.1109/TETCI.2025.3537944","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3537944","url":null,"abstract":"The Spiking Neural Network (SNN) diverges from conventional rate-based network models by showcasing remarkable biological fidelity and advanced spatiotemporal computation capabilities, precisely converting input spike sequences into firing activities. This paper introduces the Spiking Optimal Neural Network (SONN), a model that integrates spiking neurons with spatiotemporal orthogonal polynomials to enhance pattern recognition capabilities. SONN innovatively integrates orthogonal polynomials and complex domain transformations seamlessly into neural dynamics, aiming to elucidate neural encoding and enhance cognitive computing capabilities. The dynamic integration of SONN enables continuous optimization of encoding methodologies and layer structures, showcasing its adaptability and refinement over time. Fundamentally, the model provides an adjustable method based on orthogonal polynomials and the corresponding complex-valued neuron model, striking a balance between network scalability and output accuracy. To evaluate its performance, SONN underwent experiments using datasets from the UCI Machine Learning Repository, the Fashion-MNIST dataset, the CIFAR-10 dataset and neuromorphic DVS128 Gesture dataset. The results show that smaller-sized SONN architectures achieve comparable accuracy in benchmark datasets compared to other SNNs.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2194-2207"},"PeriodicalIF":5.3,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MSDT: Multiscale Diffusion Transformer for Multimodality Image Fusion","authors":"Caifeng Xia;Hongwei Gao;Wei Yang;Jiahui Yu","doi":"10.1109/TETCI.2025.3542146","DOIUrl":"https://doi.org/10.1109/TETCI.2025.3542146","url":null,"abstract":"Multimodal image fusion is a vital technique that integrates images from various sensors to create a comprehensive and coherent representation, with broad applications in surveillance, medical imaging, and autonomous driving. However, current fusion methods struggle with inadequate feature representation, limited global context understanding due to the small receptive fields of convolutional neural networks (CNNs), and the loss of high-frequency information, all of which lead to suboptimal fusion quality. To address these challenges, we propose the Multi-Scale Diffusion Transformer (MSDT), a novel fusion framework that seamlessly combines a latent diffusion model with a transformer-based architecture. MSDT uses a perceptual compression network to encode source images into a low-dimensional latent space, reducing computational complexity while preserving essential features. It also incorporates a multiscale feature fusion mechanism, enhancing both detail and structural understanding. Additionally, MSDT features a self-attention module to extract unique high-frequency features and a cross-attention module to identify common low-frequency features across modalities, improving contextual understanding. Extensive experiments on three datasets show that MSDT significantly outperforms state-of-the-art methods across twelve evaluation metrics, achieving an SSIM score of 0.98. Moreover, MSDT demonstrates superior robustness and generalizability, highlighting the potential of integrating diffusion models with transformer architectures for multimodal image fusion.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2269-2283"},"PeriodicalIF":5.3,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144148165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}