{"title":"A federated compositional knowledge graph embedding for communication efficiency","authors":"Zihao Zheng , Borui Cai , Yong Xiang , Yao Zhao , Md Palash Uddin , Keshav Sood","doi":"10.1016/j.knosys.2025.113873","DOIUrl":"10.1016/j.knosys.2025.113873","url":null,"abstract":"<div><div>Knowledge Graph Embedding (KGE) models, which automatically capture structural information from Knowledge Graphs (KGs), are essential for enhancing various downstream applications, such as recommender systems. To further improve the effectiveness of KGE models, Federated Knowledge Graph Embedding (FKGE) is introduced. It enables privacy-preserving integration of KGs across multiple organizations. However, existing FKGE frameworks require aggregation of a large global KGE model (embeddings), resulting in significant communication overhead and thereby reducing the efficiency and utility of FKGE in practical scenarios. To address this challenge, we propose Federated Compositional Knowledge Graph Embedding (FedComp), which enhances communication efficiency by leveraging the compositional characteristics of KG entities. In FedComp, we design a lightweight global model that represents shareable latent features of entities. These global latent features are composed into personalized KGE models with local embedding generators on the clients, improving both local adaptability and performance. In this way, FedComp significantly reduces the number of parameters that need to be transmitted. 
Experimental results show that FedComp outperforms state-of-the-art FKGE frameworks on link prediction accuracy, with only around 1.0% communication overhead compared to counterpart frameworks.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"325 ","pages":"Article 113873"},"PeriodicalIF":7.2,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144321690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
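To make the communication argument concrete, here is a minimal sketch of the compositional idea — not the FedComp code, and all names and sizes are illustrative: clients share a small codebook of latent features and keep per-entity composition weights locally, so only the codebook, rather than one embedding per entity, travels between clients and server.

```python
# Illustrative sketch of compositional embeddings (hypothetical names/sizes,
# not the FedComp implementation).

def compose_embedding(codebook, weights):
    """Compose an entity embedding as a weighted sum of shared latent features."""
    dim = len(codebook[0])
    return [sum(w * basis[j] for w, basis in zip(weights, codebook)) for j in range(dim)]

codebook = [[0.1 * (i + j) for j in range(4)] for i in range(3)]  # 3 latent features, dim 4
emb = compose_embedding(codebook, [0.5, 0.25, 0.25])

# Communication: a classic FKGE round ships every entity embedding; here only
# the codebook is shared, so the transmitted fraction shrinks with num_latent.
num_entities, dim, num_latent = 10_000, 128, 32
full_model_params = num_entities * dim
shared_params = num_latent * dim
```

With 10,000 entities the shared codebook in this toy setup is 0.32% of the full embedding table, the same order as the roughly 1.0% overhead the abstract reports.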
{"title":"WTC-iPST: A deep learning framework for short-term electric load forecasting with multi-scale feature extraction","authors":"Yi Ji , Yongyuan Zhu , Siliang Lu , Lixia Yang , Alan Wee-Chung Liew","doi":"10.1016/j.knosys.2025.113907","DOIUrl":"10.1016/j.knosys.2025.113907","url":null,"abstract":"<div><div>Short-term electric load forecasting is essential for efficient power system operation, but existing deep learning models struggle to capture the multi-scale features and cyclical fluctuations inherent in short-term load data. This paper introduces a novel deep learning model, the Wavelet Transform Convolution-inverted ProbSparse Transformer (WTC-iPST), specifically designed for short-term load forecasting. Unlike existing deep learning models, WTC-iPST leverages Wavelet Transform Convolution (WTConv) for multi-scale feature extraction and integrates Wavelet Kolmogorov-Arnold Networks (Wav-KAN) to enhance the ProbSparse self-attention mechanism, significantly improving the model's ability to capture the multi-scale features and cyclical fluctuations inherent in short-term load data and strengthening its capacity to handle long series. Additionally, WTC-iPST incorporates quantile regression to quantify uncertainty and provide confidence intervals, further enhancing prediction reliability and accuracy. Experimental results on real-world datasets demonstrate that WTC-iPST outperforms state-of-the-art forecasting models, with significant improvements over the baseline iTransformer, achieving reductions of up to 16.84% in RMSE, 18.09% in MAPE, and 17.65% in RRMSE, as well as an increase of up to 2.96% in R². In terms of probabilistic prediction, WTC-iPST consistently maintains a narrow confidence interval with high interval coverage. 
Moreover, WTC-iPST shows strong performance across various prediction horizons and different distribution substations, highlighting its robustness and adaptability. These results confirm that WTC-iPST provides more accurate and reliable forecasts, making it a valuable tool for power system dispatch and operational planning.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"324 ","pages":"Article 113907"},"PeriodicalIF":7.2,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144306196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
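The quantile-regression component can be illustrated with the standard pinball loss — the textbook form, not necessarily the paper's exact formulation. Training one output per quantile with this asymmetric penalty is what yields the confidence intervals the abstract describes.

```python
def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss: under-prediction costs q, over-prediction costs 1 - q."""
    diff = y_true - y_pred
    return q * diff if diff >= 0 else (q - 1) * diff

# The same 10-unit under-prediction is penalized nine times more heavily at the
# 0.9 quantile than at the 0.1 quantile, pushing that output toward the upper
# edge of the load distribution.
hi = pinball_loss(100.0, 90.0, 0.9)
lo = pinball_loss(100.0, 90.0, 0.1)
```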
{"title":"Normalizing flow-enhanced Gaussian embedding for few-shot knowledge graph completion","authors":"Xu Yuan , Long Chen , Jiaqiang Wang , Yi Guo , Zhengnan Gao , Liang Zhao","doi":"10.1016/j.knosys.2025.113874","DOIUrl":"10.1016/j.knosys.2025.113874","url":null,"abstract":"<div><h3>Objectives:</h3><div>Few-shot knowledge graph completion (FKGC) aims to infer missing facts of the query triples based on few-shot reference entity pairs. However, existing FKGC approaches often overlook the inherent uncertainty of relations in KGs, as deterministic semantic representations derived from sparse samples may be unreliable. Meanwhile, they neglect both noisy neighbor aggregation and inter-neighbor interactions, as well as the handling of complex relations, which largely limits model performance. This paper aims to overcome these limitations and enhance FKGC performance.</div></div><div><h3>Methods:</h3><div>This paper proposes a method that incorporates normalizing flows into a Gaussian network for FKGC, termed NFGN. Specifically, we employ a normalizing flow-enhanced Gaussian distribution to model the multi-semantic uncertainty of relations in few-shot settings, learning the uncertain semantics of entity features from limited data. Then, we introduce the GD-TransE decoder, which incorporates relation uncertainty to handle complex relations. To improve the model’s effectiveness, a gated neighbor encoder is designed to model semantic interactions among neighbors and to control the activation of noisy neighbors through gating thresholds.</div></div><div><h3>Novelty:</h3><div>This paper presents the first study that integrates normalizing flows with Gaussian embeddings for FKGC, offering a more robust representation of uncertainty in relations. 
The proposed method further introduces the gated neighbor encoder and GD-TransE decoder to handle neighborhood noise and complex relations, thereby overcoming the limitations of existing FKGC methods.</div></div><div><h3>Findings:</h3><div>Extensive experiments conducted on three diverse benchmark datasets demonstrate that our method significantly outperforms state-of-the-art methods, achieving improvements of 5.7%, 2.4%, 10.6%, and 6.7% in MRR, Hits@1, Hits@5, and Hits@10, respectively.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"324 ","pages":"Article 113874"},"PeriodicalIF":7.2,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144290437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
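As a toy illustration of uncertainty-aware (Gaussian) embeddings — separate from the paper's NFGN architecture — relations represented as diagonal Gaussians can be compared with a closed-form KL divergence; a normalizing flow would then reshape these base Gaussians into richer densities.

```python
import math

def kl_diag_gauss(mu1, s1, mu2, s2):
    """Closed-form KL(N1 || N2) between diagonal Gaussians given means and std devs."""
    return sum(
        math.log(b / a) + (a * a + (m1 - m2) ** 2) / (2 * b * b) - 0.5
        for m1, a, m2, b in zip(mu1, s1, mu2, s2)
    )

# Identical distributions have zero divergence; a shifted mean increases it,
# giving a graded, uncertainty-aware notion of relation similarity.
identical = kl_diag_gauss([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
shifted = kl_diag_gauss([1.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
```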
{"title":"Dictionary learning using novel multiscale context sensitive spectral features for classification of hyperspectral imagery","authors":"Amos Bortiew , Swarnajyoti Patra , Lorenzo Bruzzone","doi":"10.1016/j.knosys.2025.113853","DOIUrl":"10.1016/j.knosys.2025.113853","url":null,"abstract":"<div><div>Sparse representation models for the classification of hyperspectral images have been greatly enhanced by dictionary learning techniques. The effectiveness of these techniques depends on the discriminative power of the patterns used to learn the dictionaries. In this research, to learn high-quality, discriminative, and comprehensive dictionaries, we propose novel features extracted by exploiting singular value decomposition (SVD). Here, SVD is exploited to extract context-sensitive spectral features (CSSF) of a pixel by taking into account its relevant spatial neighbors. In the proposed technique, multiple CSSFs are extracted by considering the spatial neighborhood of the pixel at different scales to learn dictionaries for classification. The effectiveness of the proposed technique is evaluated by comparing it with several state-of-the-art techniques.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"324 ","pages":"Article 113853"},"PeriodicalIF":7.2,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144280931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
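A rough sketch of the SVD idea (illustrative only; the paper's exact CSSF construction is not reproduced here): treat a pixel's neighborhood as a neighbors-by-bands matrix and take its first right singular vector — computed below by power iteration on AᵀA — as a context-sensitive spectral signature of the center pixel.

```python
def dominant_right_singular_vector(rows, iters=50):
    """Power iteration on A^T A: returns the first right singular vector of the
    neighborhood-by-bands matrix `rows` (unit norm, dominant spectral direction)."""
    n = len(rows[0])
    v = [1.0 / n] * n
    for _ in range(iters):
        av = [sum(r[j] * v[j] for j in range(n)) for r in rows]          # A v
        w = [sum(av[i] * rows[i][j] for i in range(len(rows))) for j in range(n)]  # A^T (A v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# 3x3 neighborhood flattened: 9 pixels x 3 spectral bands (toy, rank-1 values)
patch = [[1.0, 2.0, 3.0]] * 9
v = dominant_right_singular_vector(patch)
```

For this rank-1 toy patch the feature is simply the common spectrum normalized to unit length; on real data it summarizes the dominant spectral behavior of the neighborhood at that scale.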
{"title":"SFUDNet: Underwater object detection via spatial-frequency domain modulation with mixture of experts","authors":"Hu Xu , Ju He , Changsong Pang , Yang Yu","doi":"10.1016/j.knosys.2025.113805","DOIUrl":"10.1016/j.knosys.2025.113805","url":null,"abstract":"<div><div>Underwater object detection plays a crucial role in aquaculture and marine environmental protection. Compared to conventional images, underwater images often suffer from challenges such as low brightness, color distortions, blurred details, and noise. In recent years, frequency-domain techniques have demonstrated significant potential in underwater image processing. Notably, distinct patterns in blur distribution can be observed across the low-frequency and high-frequency components of underwater images from various datasets. In response to these challenges, we propose a novel spatial-frequency domain modulation underwater object detection network, termed SFUDNet. Unlike existing spatial-domain underwater object detection methods, SFUDNet introduces an innovative spatial-frequency decoupling structure with a mixture-of-experts mechanism, which is implemented through the proposed frequency modulation block (FMB) and spatial-frequency integration (SFI) module. The FMB employs a mixture-of-experts approach to dynamically learn diverse frequency features across different granularities and scales in a sample-adaptive manner, subsequently performing element-wise local feature modulation. Meanwhile, the SFI module effectively integrates frequency-domain features with spatial-domain features, enabling a more comprehensive representation of underwater scenes. 
Extensive experiments on publicly available underwater datasets demonstrate that SFUDNet achieves state-of-the-art performance, outperforming existing underwater object detection baselines in both detection accuracy and robustness.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"324 ","pages":"Article 113805"},"PeriodicalIF":7.2,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144279174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
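The intuition behind spatial-frequency decoupling can be shown with a deliberately crude stand-in — a moving-average split rather than the learned FFT-style modulation in SFUDNet: the smooth component captures low-frequency content such as haze and color casts, while the residual carries the high-frequency edges and noise.

```python
def split_frequencies(signal, k=3):
    """Crude low/high split of a 1-D signal: a moving average keeps the smooth
    (low-frequency) component; the residual carries edges and noise."""
    half = k // 2
    low = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        low.append(sum(window) / len(window))
    high = [s - l for s, l in zip(signal, low)]
    return low, high

row = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0]  # toy intensity row with a step edge
low, high = split_frequencies(row)
# low + high reconstructs the input, so the two branches can be processed
# separately (the decoupling) and recombined (the integration).
```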
{"title":"Advancing low-light image enhancement through deep learning: A comprehensive experimental study","authors":"Muhammad Tahir Rasheed , Hufsa Khan , Junsong Wang , Yan Kang","doi":"10.1016/j.knosys.2025.113827","DOIUrl":"10.1016/j.knosys.2025.113827","url":null,"abstract":"<div><div>Low-light photography severely degrades the perceptual quality of images, which adversely affects the performance of computer vision algorithms. Deep learning-based low-light image enhancement (LLIE) methods now dominate the task of improving the quality of degraded and corrupted images taken in non-optimal lighting conditions. However, existing methods are either evaluated on a limited set of test datasets or not evaluated for machine vision applications. A detailed examination of the recent developments, their generalization, and their application to computer vision tasks is required. This experimental review highlights the future trend of recent learning-based LLIE methods through statistical analysis, experimentally analyzing their generalization capability on a wide spectrum of test datasets, examining the effectiveness of LLIE in computer vision applications, and discussing the correlation between them. The test data used to assess the generality of these methods covers diversified scenes/contents as well as complex degradation in real scenarios. A rich variety of full-reference and no-reference metrics is applied to compare relative performance. Furthermore, the application of enhancement methods in low-light face detection is also validated to examine the effectiveness of these LLIE methods as a preprocessing step in machine vision tasks. The subsequent discussion of how experimental results correlate from the perspectives of both human and machine vision provides a broader view of the field. 
This systematic review concludes with the limitations of enhancement methodologies and unresolved issues.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"325 ","pages":"Article 113827"},"PeriodicalIF":7.2,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144312925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
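Among the full-reference metrics such a comparison relies on, PSNR is the simplest; a minimal version (the standard definition, not tied to this survey's exact evaluation code) is:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference and an enhanced image
    (flattened pixel lists); higher is better, infinite for identical images."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)

val = psnr([100, 110, 120], [101, 109, 121])  # MSE of 1 on 8-bit pixels
```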
{"title":"A unified cross-source context enhancement model for multi-source fake news detection","authors":"Ruiting Dai , Haoran Meng , Zhengdao Yuan , Lisi Mo , Wenwei Zhu , Tao He","doi":"10.1016/j.knosys.2025.113867","DOIUrl":"10.1016/j.knosys.2025.113867","url":null,"abstract":"<div><div>The growing prevalence of misinformation across digital platforms poses a serious threat to public information integrity. While existing fake news detection methods have predominantly focused on single-source data, such approaches often fail to address the complexities introduced by multi-source information originating from diverse platforms. Detecting fake news in this multi-source context remains a relatively underexplored challenge. Traditional cross-domain methods typically transfer domain-invariant features using single-source modeling strategies, yet they neglect the rich contextual cues available across sources—an aspect critical for enhancing detection accuracy. To overcome these limitations, we propose a unified framework that effectively captures global contextual information by jointly leveraging intra-source and inter-source interactions. Our model integrates two principal components: (1) a cross-source global context learning module employing a context-augmented transformer to model long-range dependencies among multi-source instances, and (2) a dual-level contrastive learning mechanism that aligns representations at both local and global levels, reducing inconsistencies across feature spaces and source domains. Extensive experiments conducted on publicly available multi-source datasets demonstrate that our method achieves substantial improvements over existing state-of-the-art approaches. 
Specifically, it yields an approximate 5% gain in classification accuracy compared to leading models such as LIMFA, highlighting the robustness and effectiveness of our framework in advancing multi-source fake news detection.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"324 ","pages":"Article 113867"},"PeriodicalIF":7.2,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144298751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
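The dual-level contrastive mechanism presumably builds on an InfoNCE-style objective; the sketch below shows that generic form (with made-up similarity values, not the paper's loss): the loss is small when a sample is far more similar to its positive pair than to any negative.

```python
import math

def info_nce(sim_pos, sim_negs, tau=0.1):
    """Generic InfoNCE loss: -log softmax of the positive similarity among
    all candidates, computed with a max-shift for numerical stability."""
    logits = [sim_pos / tau] + [s / tau for s in sim_negs]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]

easy = info_nce(0.9, [0.1, 0.0])  # positive well separated -> small loss
hard = info_nce(0.2, [0.1, 0.0])  # positive barely closer -> larger loss
```

Applied at both the local (instance) and global (source) level, this kind of objective is what pulls representations from different platforms into a consistent feature space.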
{"title":"Foundation model-assisted interpretable vehicle behavior decision making","authors":"Shiyu Meng, Yi Wang, Yawen Cui, Lap-Pui Chau","doi":"10.1016/j.knosys.2025.113868","DOIUrl":"10.1016/j.knosys.2025.113868","url":null,"abstract":"<div><div>Intelligent autonomous driving systems must achieve accurate perception and driving decisions to enhance their effectiveness and adoption. Currently, driving behavior decisions have achieved high performance thanks to deep learning technology. However, most existing approaches lack interpretability, reducing user trust and hindering widespread adoption. While some efforts focus on transparency through strategies like heat maps, cost volumes, and auxiliary tasks, they often provide limited model interpretation or require additional annotations. In this paper, we present a novel unified framework to tackle these issues by integrating ego-vehicle behavior decisions with human-centric language-based interpretation prediction from ego-view visual input. First, we propose a self-supervised, class-agnostic object Segmentor module based on the Segment Anything Model and a 2-D light adapter strategy to capture overall surrounding cues without any extra segmentation mask labels. Second, the semantic extractor is adopted to generate the hierarchical semantic-level cues. Subsequently, a fusion module is designed to generate the refined global features by incorporating the class-agnostic object features and semantic-level features using a self-attention mechanism. Finally, vehicle behavior decisions and possible human-centric interpretations are jointly generated based on the global fusion context. 
The experimental results across various settings on the public datasets demonstrate the superiority and effectiveness of our proposed solution.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"324 ","pages":"Article 113868"},"PeriodicalIF":7.2,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144290436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
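The self-attention fusion step can be illustrated with a toy single-head scaled dot-product attention (the generic mechanism, not the paper's fusion module): each refined feature becomes a similarity-weighted mixture of all input features.

```python
import math

def attention(queries, keys, values):
    """Single-head scaled dot-product attention over lists of feature vectors."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(scores)                      # max-shift for a stable softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Self-attention: the same features act as queries, keys, and values, so each
# output mixes object-level and semantic-level cues by similarity.
feats = [[1.0, 0.0], [0.0, 1.0]]
fused = attention(feats, feats, feats)
```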
{"title":"DTIU: A self-supervised grid-enhanced diffusion model for trajectory imputation in unconstrained scenarios","authors":"Zhijing Hu , Hao Yan , Kuihua Huang , Jincai Huang , Zhong Liu , Changjun Fan","doi":"10.1016/j.knosys.2025.113848","DOIUrl":"10.1016/j.knosys.2025.113848","url":null,"abstract":"<div><div>While the widespread adoption of location-based services has led to a proliferation of trajectory data, it is often accompanied by the inevitable problem of incomplete data due to various reasons, e.g., sensor failure and privacy concerns. Imputing the incomplete trajectory is critically important to a range of practical applications, e.g., traffic management and emergency response. Most recent proposals on trajectory imputation are designed for the urban scenario where a road network is available, which may not work well in the unconstrained environment scenario. As there are no road networks or predefined paths in the unconstrained environment, existing approaches for trajectory imputation may result in insufficient input data and thus lead to sub-optimal performance. To address this issue, we propose a self-supervised grid-enhanced <u><strong>D</strong></u>iffusion model for <u><strong>T</strong></u>rajectory <u><strong>I</strong></u>mputation in <u><strong>U</strong></u>nconstrained scenarios (DTIU). DTIU includes a diffusion model specifically designed for trajectory imputation tasks. DTIU avoids the problem of insufficient input by using trajectory data as the only input. To contend with missing position information and effectively learn the spatio-temporal correlations, we design a GridFormer based on AutoEncoder with an adaptive grid information enhancement strategy. DTIU adopts a self-supervised training strategy inspired by masked language models, aiming to address the data sparsity issue. Extensive experiments on real data offer insight into the effectiveness of the proposed framework. 
This marks the first application of diffusion models to tackle trajectory imputation in unconstrained scenarios.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"325 ","pages":"Article 113848"},"PeriodicalIF":7.2,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144312930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
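Two of DTIU's ingredients — grid enhancement and masked self-supervision — can be sketched in a few lines. The uniform cells and fixed mask ratio below are simplifications (DTIU's grid is adaptive), and all names are illustrative.

```python
import random

def to_grid(traj, cell=0.01):
    """Map raw (lat, lon) points to discrete grid cells, injecting coarse
    spatial structure when no road network exists (uniform cells here)."""
    return [(int(lat / cell), int(lon / cell)) for lat, lon in traj]

def mask_points(traj, ratio=0.3, seed=1):
    """Self-supervised masking: hide a fraction of points so the model can be
    trained to impute them, in the spirit of masked language modeling."""
    rng = random.Random(seed)
    return [None if rng.random() < ratio else p for p in traj]

traj = [(39.901, 116.402), (39.905, 116.412), (39.912, 116.419)]
cells = to_grid(traj)
masked = mask_points(traj)
```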
{"title":"Unsupervised conversion method of high bit-depth remote sensing images using contrastive learning","authors":"Tengda Zhang , Jiguang Dai , Jinsong Cheng","doi":"10.1016/j.knosys.2025.113954","DOIUrl":"10.1016/j.knosys.2025.113954","url":null,"abstract":"<div><div>Currently, remote sensing images are frequently stored in high bit-depth formats exceeding 10 bits. However, the standard 8-bit format remains the fundamental data format for visualization and deep learning applications. Traditional methods typically rely on manually adjusting the parameter threshold of the tone mapping operator to obtain 8-bit images, resulting in low automation. Although tone mapping methods based on deep learning have gradually supplanted traditional techniques, such methods are mainly aimed at natural scene images taken by digital cameras; problems such as incompatibility between data formats and image semantics arise, and it is difficult to meet the large-scale requirements of remote sensing image applications. To address these challenges, we propose an unsupervised bit-depth conversion method for remote sensing images that integrates generative adversarial networks with contrastive learning. We draw an analogy between gray value mapping and the motion of thermal field particles, constructing a transformer generator based on thermodynamic principles. Leveraging the analogous characteristics of high and low bit-depth image histograms, we introduce a histogram shape context contrastive loss to regulate the color distribution of the generated images. Furthermore, in light of the large-scale application characteristics of remote sensing images, we propose a post-processing method based on hybrid histogram matching to enhance image quality while generating seamless whole-scene images. We developed relevant datasets and conducted experiments, with results demonstrating that the proposed method achieves superior bit-depth conversion effects compared to existing methods. 
Code and data can be found at <span><span>https://github.com/ZzzTD/Bit-depth_conversion</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"324 ","pages":"Article 113954"},"PeriodicalIF":7.2,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144290435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
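For contrast with the learned approach, the manual baseline the abstract alludes to — threshold-based tone mapping — is essentially a percentile stretch. A generic version (unrelated to the released code) is:

```python
def stretch_to_8bit(pixels, low_pct=2.0, high_pct=98.0):
    """Percentile linear stretch: clip to the [low_pct, high_pct] percentile
    range and rescale to [0, 255] - the manual tone mapping whose thresholds
    must otherwise be tuned per scene."""
    s = sorted(pixels)
    lo = s[int(len(s) * low_pct / 100)]
    hi = s[min(int(len(s) * high_pct / 100), len(s) - 1)]
    span = max(hi - lo, 1)
    return [max(0, min(255, round(255 * (p - lo) / span))) for p in pixels]

raw = [0, 1000, 2000, 3000, 4095]  # toy 12-bit pixel values
out = stretch_to_8bit(raw)
```

The percentile cutoffs are exactly the hand-tuned parameters whose scene-by-scene adjustment the unsupervised method is designed to eliminate.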