Latest articles in Information Sciences

A multi-modal unsupervised machine learning approach for biomedical signal processing during cardiopulmonary resuscitation
IF 8.1 · CAS Tier 1 · Computer Science
Information Sciences · Pub Date: 2025-03-21 · DOI: 10.1016/j.ins.2025.122114
Saidul Islam , Jamal Bentahar , Robin Cohen , Gaith Rjoub
{"title":"A multi-modal unsupervised machine learning approach for biomedical signal processing during cardiopulmonary resuscitation","authors":"Saidul Islam ,&nbsp;Jamal Bentahar ,&nbsp;Robin Cohen ,&nbsp;Gaith Rjoub","doi":"10.1016/j.ins.2025.122114","DOIUrl":"10.1016/j.ins.2025.122114","url":null,"abstract":"<div><div>Cardiopulmonary resuscitation (CPR) is a critical, life-saving intervention aimed at restoring blood circulation and breathing in individuals experiencing cardiac arrest or respiratory failure. Accurate and real-time analysis of biomedical signals during CPR is essential for monitoring and decision-making, from the pre-hospital stage to the intensive care unit (ICU). However, CPR signals are often corrupted by noise and artifacts, making precise interpretation challenging. Traditional denoising methods, such as filters, struggle to adapt to the varying and complex noise patterns present in CPR signals. Given the high-stakes nature of CPR, where rapid and accurate responses can determine survival, there is a pressing need for more robust and adaptive denoising techniques. In this context, an unsupervised machine learning (ML) methodology is particularly valuable, as it removes the dependence on labeled data, which can be scarce or impractical in emergency scenarios. This paper introduces a novel unsupervised ML approach for denoising CPR signals using a multi-modality framework, which leverages multiple signal sources to enhance the denoising process. The proposed approach not only improves noise reduction and signal fidelity but also preserves critical inter-signal correlations (0.9993) which is crucial for downstream tasks. Furthermore, it outperforms existing methods in an unsupervised context in terms of signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR), making it highly effective for real-time applications. The integration of multi-modality further enhances the system's adaptability to various biomedical signals beyond CPR, improving both automated CPR systems and clinical decision-making.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"712 ","pages":"Article 122114"},"PeriodicalIF":8.1,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143739397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
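The abstract evaluates denoising quality with SNR, PSNR, and an inter-signal correlation of 0.9993. A minimal sketch of those metrics follows; the moving-average "denoiser" and synthetic signal are only placeholder stand-ins for illustration, not the paper's multi-modal unsupervised model.

```python
import numpy as np

def snr_db(reference, estimate):
    """Signal-to-noise ratio (dB) of an estimate against a clean reference signal."""
    noise = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

def psnr_db(reference, estimate):
    """Peak signal-to-noise ratio (dB), using the reference signal's peak amplitude."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(np.max(np.abs(reference)) ** 2 / mse)

# Toy data: a synthetic periodic signal standing in for a clean physiological trace.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2500)
clean = np.sin(2 * np.pi * 1.2 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)              # stand-in for CPR artifact noise
denoised = np.convolve(noisy, np.ones(25) / 25, mode="same")   # naive moving-average baseline

print(f"SNR before/after: {snr_db(clean, noisy):.1f} / {snr_db(clean, denoised):.1f} dB")
print(f"PSNR after: {psnr_db(clean, denoised):.1f} dB")
print(f"correlation with clean signal: {np.corrcoef(clean, denoised)[0, 1]:.4f}")
```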
Semi-supervised manifold regularized multi-task learning with privileged information
IF 8.1 · CAS Tier 1 · Computer Science
Information Sciences · Pub Date: 2025-03-21 · DOI: 10.1016/j.ins.2025.122112
Bo Liu , Baoqing Li , Yanshan Xiao , Zhitong Wang , Boxu Zhou , Shengxin He , Chenlong Ye , Fan Cao
{"title":"Semi-supervised manifold regularized multi-task learning with privileged information","authors":"Bo Liu ,&nbsp;Baoqing Li ,&nbsp;Yanshan Xiao ,&nbsp;Zhitong Wang ,&nbsp;Boxu Zhou ,&nbsp;Shengxin He ,&nbsp;Chenlong Ye ,&nbsp;Fan Cao","doi":"10.1016/j.ins.2025.122112","DOIUrl":"10.1016/j.ins.2025.122112","url":null,"abstract":"<div><div>Multi-task learning (MTL) represents an advanced learning paradigm that improves the generalization ability and learning efficiency of a model by learning multiple related tasks simultaneously. The fundamental principle of multi-task learning is the transfer of information between tasks. Nevertheless where data is limited in quantity, effectively modeling inter-task correlations is a significant challenge. We propose a novel method, semi-supervised manifold regularized multi-task learning with privileged information (MSMTL-PI), that effectively leverages the intrinsic geometric structure of data by enforcing manifold regularization and subspace learning techniques. Specifically, a similarity graph is constructed over both labeled and unlabeled samples, ensuring the preservation of local geometric relationships between data points, and manifold regularization is applied as a constraint. Concurrently, information sharing on low-dimensional subspace makes the relationship modeling between tasks more reasonable. Furthermore, a significant amount of privileged information is incorporated into the training phase, thereby optimizing the decision boundary and reducing the impact of insufficient labeled samples on the model. There is substantial experimental evidence that MSMTL-PI markedly enhances the performance of image and text classification tasks, achieving superior classification accuracy with minimal labeled data. Across 15 benchmark datasets, MSMTL-PI consistently outperforms existing methods, achieving an average F1-scores improvement of 1.92% compared to the best baseline, with a maximum gain of 4.17%.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"711 ","pages":"Article 122112"},"PeriodicalIF":8.1,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143695982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
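The abstract builds a similarity graph over labeled and unlabeled samples and applies manifold regularization as a constraint. Below is a minimal sketch of that generic building block, a k-NN graph Laplacian penalty f^T L f; the graph construction details and how the penalty enters the MSMTL-PI objective are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse import csgraph

def manifold_regularizer(X, f, n_neighbors=10):
    """Laplacian smoothness penalty f^T L f over a k-NN similarity graph
    built from labeled and unlabeled samples alike."""
    W = kneighbors_graph(X, n_neighbors=n_neighbors, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T)                       # symmetrize the k-NN graph
    L = csgraph.laplacian(W, normed=True)     # normalized graph Laplacian
    return float(f @ (L @ f))

# Toy usage: predictions that vary smoothly over the data manifold incur a small penalty.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
f = X[:, 0]                                   # a prediction vector that follows the manifold
print(manifold_regularizer(X, f))
```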
Adaptive stochastic configuration network based on online active learning for evolving data streams
IF 8.1 · CAS Tier 1 · Computer Science
Information Sciences · Pub Date: 2025-03-21 · DOI: 10.1016/j.ins.2025.122113
Yinan Guo , Jiayang Pu , Jiale He , Botao Jiao , Jianjiao Ji , Shengxiang Yang
{"title":"Adaptive stochastic configuration network based on online active learning for evolving data streams","authors":"Yinan Guo ,&nbsp;Jiayang Pu ,&nbsp;Jiale He ,&nbsp;Botao Jiao ,&nbsp;Jianjiao Ji ,&nbsp;Shengxiang Yang","doi":"10.1016/j.ins.2025.122113","DOIUrl":"10.1016/j.ins.2025.122113","url":null,"abstract":"<div><div>Stochastic Configuration Networks (SCNs) have exhibited significant potential in data mining, owing to their advantages in fast incremental construction and universal approximation capabilities. However, less researches were done on SCNs-based classification models for concept-drifting data streams. The so-called drifts refer to data distributions changing over time that may degrade the classification performance of SCNs trained on historical data. The previous drift adaptation approach is to discard all the hidden nodes of SCNs, and then learn a new model with new instances, in which the valuable historical information cannot be fully utilized. In addition, labeling all newly-arrived instances is time-consuming and impractical. To address these issues, an adaptive stochastic configuration network embedding online active learning is proposed. Crucially, a query strategy is developed to select representative instances for labeling based on the change degree of instances density and their uncertainty. An online update mechanism is employed to incrementally update the network's output parameters instance by instance. To rationally forget the outdated information and learn new concepts, a dynamic adjustment mechanism adaptively adds or prunes nodes in the SCN model. Experimental results for nine datasets confirm that our algorithm outperforms six popular ones on classification accuracy.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"711 ","pages":"Article 122113"},"PeriodicalIF":8.1,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143695981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
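The query strategy described above decides, instance by instance, whether to request a label based on prediction uncertainty and a change in instance density. A rough sketch of one such rule follows; the scoring, threshold, and the `should_query` helper are hypothetical stand-ins, not the paper's formulation.

```python
import numpy as np

def should_query(proba, x, recent_X, uncertainty_w=0.5, density_w=0.5, threshold=0.6):
    """Decide whether to request a label for a streaming instance, combining
    prediction uncertainty (normalized entropy of class probabilities) with a crude
    local-density-change signal (distance of x to recently seen instances)."""
    p = np.clip(proba, 1e-12, 1.0)
    entropy = -np.sum(p * np.log(p)) / np.log(len(p))      # normalized to [0, 1]
    if len(recent_X) == 0:
        density_shift = 1.0                                 # nothing seen yet: always novel
    else:
        d = np.linalg.norm(recent_X - x, axis=1)
        density_shift = d.min() / (d.mean() + 1e-12)        # large when x falls outside recent data
    score = uncertainty_w * entropy + density_w * density_shift
    return score > threshold

# Usage: query a label when the model is unsure or the instance looks like a new concept.
rng = np.random.default_rng(0)
print(should_query(np.array([0.55, 0.45]), np.zeros(3), rng.normal(size=(50, 3))))
```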
Using AIE-D algorithm to recognize the node importance of weighted urban rail transit network considering passenger flow
IF 8.1 · CAS Tier 1 · Computer Science
Information Sciences · Pub Date: 2025-03-20 · DOI: 10.1016/j.ins.2025.122106
Wencheng Huang , Xingyu Chen , Hongbing Pu , Yanhui Yin
{"title":"Using AIE-D algorithm to recognize the node importance of weighted urban rail transit network considering passenger flow","authors":"Wencheng Huang ,&nbsp;Xingyu Chen ,&nbsp;Hongbing Pu ,&nbsp;Yanhui Yin","doi":"10.1016/j.ins.2025.122106","DOIUrl":"10.1016/j.ins.2025.122106","url":null,"abstract":"<div><div>The AIE-D algorithm (Adjacent Information Entropy-D algorithm) is proposed to recognize the importance of nodes in the urban rail transit network (URTN) weighted by passenger flow, which considers passenger flow, topological characteristics of nodes in the URTN, and the influence of neighboring nodes. The travel impedance is determined by using travel time, the D algorithm is used to search the k-short paths, and the weight value of each edge is the passenger flow cross-section of the corresponding line. Then, the detail AIE calculation steps are introduced. Next, a numerical study and comparison study are conducted by using the weighted topology of network. Compared with other commonly used algorithms, AIE-D has lower time complexity with faster calculation speed, and higher recognition accuracy. Finally, a real-world case study is conducted by using URTN of Chengdu Metro Network as the background. Weighted by passenger flow has greater impact on the operation of urban rail transit. The nodes are categorized into three classes according to the ranking of node importance, which includes Classification VI, Classification I and Classification GI. We conduct random attacks and deliberate attacks on the network, and analyze the network efficiency and maximum connectivity subgraph rate after the attacks.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"711 ","pages":"Article 122106"},"PeriodicalIF":8.1,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143679600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
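The core AIE idea is an entropy computed over a node's weighted neighborhood. A minimal sketch of one common adjacency-entropy form over a toy passenger-flow-weighted graph follows; the paper's AIE-D additionally folds in k-shortest-path flows found by the D algorithm, which is not reproduced here.

```python
import math
import networkx as nx

def adjacent_information_entropy(G, weight="weight"):
    """For each node, the entropy of the (flow-)weight distribution over its incident edges.
    A simplified stand-in for the AIE-D node-importance score."""
    scores = {}
    for u in G.nodes:
        w = [G[u][v].get(weight, 1.0) for v in G.neighbors(u)]
        total = sum(w)
        if total == 0:
            scores[u] = 0.0
            continue
        p = [x / total for x in w]
        scores[u] = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return scores

# Toy metro-like network whose edges are weighted by passenger-flow cross-sections.
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 1200), ("B", "C", 800), ("B", "D", 400), ("C", "D", 300)])
print(sorted(adjacent_information_entropy(G).items(), key=lambda kv: -kv[1]))
```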
A forward k-means algorithm for regression clustering
IF 8.1 · CAS Tier 1 · Computer Science
Information Sciences · Pub Date: 2025-03-20 · DOI: 10.1016/j.ins.2025.122105
Jun Lu , Tingjin Luo , Kai Li
{"title":"A forward k-means algorithm for regression clustering","authors":"Jun Lu ,&nbsp;Tingjin Luo ,&nbsp;Kai Li","doi":"10.1016/j.ins.2025.122105","DOIUrl":"10.1016/j.ins.2025.122105","url":null,"abstract":"<div><div>We propose a novel <em>forward k</em>-means algorithm for regression clustering, where the “forward” strategy progressively partitions samples from a single cluster into multiple ones, using the current optimal clustering solutions as initialization for subsequent iterations, thereby ensuring a deterministic result without any initialization requirements. We employ the mean squared error from the fitted clustering results as a criterion to guide partition optimization, which not only ensures rapid convergence of the algorithm to a stable solution but also yields desirable theoretical results. Meanwhile, we also suggest a difference-based threshold ridge ratio criterion to consistently determine the number of clusters. Comprehensive numerical studies are further conducted to demonstrate the algorithm's efficacy.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"711 ","pages":"Article 122105"},"PeriodicalIF":8.1,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143679598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
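A very rough sketch of the "forward" splitting idea described above: start from one cluster and repeatedly split the cluster whose within-cluster regression fit has the largest MSE. The median-based splitting rule, the per-cluster linear fit, and the fixed number of splits below are placeholder assumptions; the paper's k-means-style reassignment and ridge-ratio criterion for choosing the number of clusters are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def forward_regression_clustering(X, y, k_max=4):
    """Grow clusters one split at a time, always splitting the cluster with the
    worst within-cluster regression MSE (a crude stand-in for the forward strategy)."""
    labels = np.zeros(len(y), dtype=int)
    for new_label in range(1, k_max):
        mses = {}
        for c in np.unique(labels):
            idx = labels == c
            pred = LinearRegression().fit(X[idx], y[idx]).predict(X[idx])
            mses[c] = np.mean((y[idx] - pred) ** 2)
        worst = max(mses, key=mses.get)
        # Split the worst-fit cluster in two along its first feature (placeholder rule).
        idx = np.where(labels == worst)[0]
        median = np.median(X[idx, 0])
        labels[idx[X[idx, 0] > median]] = new_label
    return labels

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.where(X[:, 0] > 0, 2 * X[:, 0], -X[:, 0]) + rng.normal(0, 0.1, 300)  # two regression regimes
print(np.bincount(forward_regression_clustering(X, y)))
```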
Traffic forecasting using spatio-temporal dynamics and attention with graph attention PDEs
IF 8.1 · CAS Tier 1 · Computer Science
Information Sciences · Pub Date: 2025-03-20 · DOI: 10.1016/j.ins.2025.122108
Ghadah Almousa , Yugyung Lee
{"title":"Traffic forecasting using spatio-temporal dynamics and attention with graph attention PDEs","authors":"Ghadah Almousa ,&nbsp;Yugyung Lee","doi":"10.1016/j.ins.2025.122108","DOIUrl":"10.1016/j.ins.2025.122108","url":null,"abstract":"<div><div>Accurate traffic forecasting is vital for optimizing intelligent transportation systems (ITS), yet existing models often struggle to capture the complex spatio-temporal patterns of urban traffic. We present GAPDE (Graph Attention Partial Differential Equation), a novel framework that integrates Partial Differential Equations (PDEs), Graph Convolutional Networks (GCNs), and advanced attention mechanisms. GAPDE enables continuous-time spatio-temporal modeling and dynamically prioritizes critical features through attention-driven traffic forecasting. Experiments on benchmark datasets, including PEMS-BAY, METR-LA, and various PeMS collections (PeMS03, PeMS04, PeMS07, PeMS08, PeMSD7M, and PeMSD7L), demonstrate GAPDE's superior performance over state-of-the-art models such as RGDAN, SGODE-RNN, and STD-MAE. GAPDE achieves up to 9.2 percent lower RMSE and 10.4 percent lower MAE, outperforming baselines in both short- and long-term prediction tasks. It demonstrates strong robustness to missing data, high scalability for large-scale networks, and enhanced interpretability through spatial and temporal attention visualizations. Comprehensive comparative evaluations and an in-depth ablation study further validate the effectiveness of GAPDE's components, including the GPDE block and spatio-temporal attention mechanisms. By combining PDEs, GCNs, and attention mechanisms in a scalable and efficient design, GAPDE offers a robust solution for real-time traffic forecasting in complex urban environments.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"711 ","pages":"Article 122108"},"PeriodicalIF":8.1,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143679597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
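GAPDE couples graph attention with a PDE-based continuous-time update; only the generic single-head graph-attention building block is sketched below, with random toy data standing in for road-sensor features. The layer shapes and LeakyReLU slope are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def graph_attention_layer(H, A, W, a):
    """Single-head GAT-style attention over node features H with adjacency A.
    Shapes: H (n, d_in), W (d_in, d_out), a (2 * d_out,)."""
    Z = H @ W                                            # projected node features
    n = Z.shape[0]
    logits = np.full((n, n), -np.inf)
    for i in range(n):
        for j in range(n):
            if A[i, j] > 0 or i == j:                    # attend only to neighbors and self
                e = a @ np.concatenate([Z[i], Z[j]])
                logits[i, j] = np.where(e > 0, e, 0.2 * e)   # LeakyReLU
    alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)            # row-wise softmax over neighbors
    return alpha @ Z                                      # attention-weighted aggregation

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))                              # 5 sensors, 4 features each
A = (rng.random((5, 5)) > 0.5).astype(float)             # toy road-sensor adjacency
out = graph_attention_layer(H, A, rng.normal(size=(4, 8)), rng.normal(size=16))
print(out.shape)                                         # (5, 8)
```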
Batch active learning for time-series classification with multi-mode exploration
IF 8.1 · CAS Tier 1 · Computer Science
Information Sciences · Pub Date: 2025-03-20 · DOI: 10.1016/j.ins.2025.122109
Sangho Lee , Chihyeon Choi , Hyungrok Do , Youngdoo Son
{"title":"Batch active learning for time-series classification with multi-mode exploration","authors":"Sangho Lee ,&nbsp;Chihyeon Choi ,&nbsp;Hyungrok Do ,&nbsp;Youngdoo Son","doi":"10.1016/j.ins.2025.122109","DOIUrl":"10.1016/j.ins.2025.122109","url":null,"abstract":"<div><div>Collecting a sufficient amount of labeled data is challenging in practice. To deal with this challenge, active learning, which selects informative instances for annotation, has been studied. However, for time series, the dataset quality is often quite poor, and its multi-modality makes it unsuited to conventional active learning methods. Existing time series active learning methods have limitations, such as redundancy among selected instances, unrealistic assumptions on datasets, and inefficient calculations. We propose a batch active learning method for time series (BALT), which efficiently selects a batch of informative samples. BALT performs efficient clustering and picks one instance with the maximum informativeness score from each cluster. Using this score, we consider in-batch diversity explicitly so as to effectively handle multi-modality by exploring unknown regions, even under an extreme lack of labeled data. We also apply an adaptive weighting strategy to emphasize exploration in the early stage of the algorithm but shift to exploitation as the algorithm proceeds. Through experiments on several time-series datasets under various scenarios, we demonstrate the efficacy of BALT in achieving superior classification performance with less computation time under a predetermined budget, compared to existing time-series active learning methods.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"711 ","pages":"Article 122109"},"PeriodicalIF":8.1,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143695979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
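A loose analogue of the per-cluster selection and adaptive exploration/exploitation weighting described above; the informativeness score, the k-means clustering, and the linear decay schedule below are assumptions rather than BALT's actual definitions.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_batch(X_pool, uncertainty, batch_size, round_idx, total_rounds):
    """Pick one instance per cluster, scoring candidates by a mix of model uncertainty
    (exploitation) and distance to the cluster centre (exploration), with the exploration
    weight decayed over labeling rounds."""
    km = KMeans(n_clusters=batch_size, n_init=10, random_state=0).fit(X_pool)
    explore_w = 1.0 - round_idx / max(total_rounds, 1)    # explore early, exploit late
    picked = []
    for c in range(batch_size):
        idx = np.where(km.labels_ == c)[0]
        dist = np.linalg.norm(X_pool[idx] - km.cluster_centers_[c], axis=1)
        score = (1 - explore_w) * uncertainty[idx] + explore_w * dist / (dist.max() + 1e-12)
        picked.append(idx[np.argmax(score)])
    return np.array(picked)

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 16))            # e.g. embedded or flattened time-series windows
uncertainty = rng.random(500)                  # e.g. predictive entropy from the current model
print(select_batch(X_pool, uncertainty, batch_size=8, round_idx=1, total_rounds=10))
```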
Refined Kolmogorov complexity of analog, evolving and stochastic recurrent neural networks
IF 8.1 · CAS Tier 1 · Computer Science
Information Sciences · Pub Date: 2025-03-19 · DOI: 10.1016/j.ins.2025.122104
Jérémie Cabessa , Yann Strozecki
{"title":"Refined Kolmogorov complexity of analog, evolving and stochastic recurrent neural networks","authors":"Jérémie Cabessa ,&nbsp;Yann Strozecki","doi":"10.1016/j.ins.2025.122104","DOIUrl":"10.1016/j.ins.2025.122104","url":null,"abstract":"<div><div>Kolmogorov complexity measures the compressibility of real numbers. We provide a refined characterization of the hypercomputational power of analog, evolving, and stochastic neural networks based on the Kolmogorov complexity of their real weights, evolving weights, and real probabilities, respectively. First, we retrieve the infinite hierarchy of complexity classes of analog networks, defined in terms of the Kolmogorov complexity of their real weights. This hierarchy lies between the complexity classes <strong>P</strong> and <span><math><mi>P</mi><mo>/</mo><mi>poly</mi></math></span>. Next, using a natural identification between real numbers and infinite sequences of bits, we generalize this result to evolving networks, obtaining a similar hierarchy of complexity classes within the same bounds. Finally, we extend these results to stochastic networks that employ real probabilities as randomness, deriving a new infinite hierarchy of complexity classes situated between <strong>BPP</strong> and <span><math><mi>BPP</mi><mo>/</mo><mi>lo</mi><msup><mrow><mi>g</mi></mrow><mrow><mo>⁎</mo></mrow></msup></math></span>. Beyond providing examples of such hierarchies, we describe a generic method for constructing them based on classes of functions of increasing complexity. As a practical application, we show that the predictive capabilities of recurrent neural networks are strongly impacted by the quantization applied to their weights. Overall, these results highlight the relationship between the computational power of neural networks and the intrinsic information contained by their parameters.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"711 ","pages":"Article 122104"},"PeriodicalIF":8.1,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143679602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
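The practical point in the abstract, that quantizing an RNN's weights truncates the information they carry and changes the network's behaviour, can be illustrated with a toy experiment; the uniform per-tensor quantizer and the single tanh recurrent step below are illustrative assumptions only, not the paper's construction.

```python
import numpy as np

def quantize_weights(W, bits):
    """Uniform symmetric per-tensor quantization of a weight matrix to the given bit width."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(W)) / levels
    return np.round(W / scale) * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))                 # recurrent weight matrix of a toy RNN
h = rng.normal(size=64)                       # hidden state
for bits in (16, 8, 4, 2):
    h_q = np.tanh(quantize_weights(W, bits) @ h)    # one recurrent step with quantized weights
    drift = np.linalg.norm(h_q - np.tanh(W @ h))
    print(f"{bits}-bit weights: hidden-state drift {drift:.4f}")
```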
The incremental SMOTE: A new approach based on the incremental k-means algorithm for solving imbalanced data set problem
IF 8.1 · CAS Tier 1 · Computer Science
Information Sciences · Pub Date: 2025-03-19 · DOI: 10.1016/j.ins.2025.122103
Duygu Selin Turan, Burak Ordin
{"title":"The incremental SMOTE: A new approach based on the incremental k-means algorithm for solving imbalanced data set problem","authors":"Duygu Selin Turan,&nbsp;Burak Ordin","doi":"10.1016/j.ins.2025.122103","DOIUrl":"10.1016/j.ins.2025.122103","url":null,"abstract":"<div><div>Classification is one of the very important areas in data mining. In real-life problems, developed methods for modeling with the classification problem generally perform well on datasets where the class distribution is balanced. On the other hand, the data sets are often imbalanced and it is important to develop algorithms to solve the classification problem on imbalanced data sets. Imbalanced datasets are more difficult to classify than balanced datasets because learning a class with underrepresentation is difficult. Most real life problems are imbalanced. The class with the least number of data usually corresponds to rare cases and is more important. Learning these classes is critical accordingly. One of the most commonly used solution methods to solve this problem is to oversample the minor class. When oversampling, too many repetitions in the dataset can cause overfitting. For this reason, it is very important to ensure data diversity when oversampling. Therefore, this paper proposes a new oversampling methods (the incremental SMOTE) combining the incremental k-means algorithm and Synthetic minority oversampling technique (SMOTE). The original dataset is clustered with the incremental k-means algorithm and the clusters are filtered to determine the safe clusters. The number of points to be produced from the safe clusters is determined, and then new instances are produced with the improved SMOTE algorithm. In the incremental SMOTE, diversity in the dataset is achieved by generating with incremental rate. In order to evaluate the performance of the incremental SMOTE algorithm, classification was performed on imbalanced datasets, balanced datasets obtained by the random oversampling, SMOTE, Borderline-SMOTE and SVM SMOTE methods. Comparisons for 10 datasets showed that the performance of the proposed method improves as the imbalance ratio of the dataset increases.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"711 ","pages":"Article 122103"},"PeriodicalIF":8.1,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143679601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
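The interpolation step that incremental SMOTE inherits from plain SMOTE can be sketched compactly: each synthetic point lies on the segment between a minority sample and one of its nearest minority neighbours. The incremental k-means clustering, safe-cluster filtering, and incremental generation rate that distinguish the proposed method are omitted, and `smote_oversample` is a hypothetical helper name.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by linear interpolation between
    each chosen minority sample and one of its k nearest minority neighbours."""
    rng = rng or np.random.default_rng(0)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, neigh = nn.kneighbors(X_min)                    # column 0 is the point itself
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))                   # pick a minority sample
        j = neigh[i, rng.integers(1, k + 1)]           # pick one of its neighbours
        lam = rng.random()
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

rng = np.random.default_rng(0)
X_min = rng.normal(size=(40, 2))                       # minority-class samples
print(smote_oversample(X_min, n_new=60).shape)         # (60, 2)
```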
Partial multi-label learning with label and classifier correlations
IF 8.1 · CAS Tier 1 · Computer Science
Information Sciences · Pub Date: 2025-03-19 · DOI: 10.1016/j.ins.2025.122101
Ke Wang , Yahu Guan , Yunyu Xie , Zhaohong Jia , Hong Ye , Zhangling Duan , Dong Liang
{"title":"Partial multi-label learning with label and classifier correlations","authors":"Ke Wang ,&nbsp;Yahu Guan ,&nbsp;Yunyu Xie ,&nbsp;Zhaohong Jia ,&nbsp;Hong Ye ,&nbsp;Zhangling Duan ,&nbsp;Dong Liang","doi":"10.1016/j.ins.2025.122101","DOIUrl":"10.1016/j.ins.2025.122101","url":null,"abstract":"<div><div>In partial multi-label learning (PML), each instance is associated with a set of candidate labels, which contains multiple relevant labels and noisy labels. The disambiguation-based strategy has been widely adopted by most existing PML methods, i.e., recovering the information of real labels from the set of candidate labels. To achieve this goal, these methods usually assume that global label correlations among different categories are applicable to all the instances, but local label correlations are seldom considered. In this paper, we propose a novel PML method to address this issue, termed Partial Multi-Label Learning with Label and Classifier Correlations (PML-LC), where both global and local label correlations are taken into consideration. Specifically, the Minimum Spanning Tree (MST) technique is employed to obtain the global manifold structure information of the feature space, which is then transformed into the label space, acting as global label correlations. Moreover, a local label manifold regularizer is introduced to capture local label correlations. Besides, a covariance regularizer is also adopted to model classifier correlations when learning the mapping matrix. Experimental results on thirteen PML datasets demonstrate its superior performance over several state-of-the-art PML approaches.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"712 ","pages":"Article 122101"},"PeriodicalIF":8.1,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143724335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
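Two ingredients named in the abstract, an MST over the feature space and a covariance-style regularizer on the classifier, can each be sketched in a few lines. How PML-LC transforms the tree into label correlations and how it weights the covariance term are not specified here, so both functions below are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_adjacency(X):
    """Minimum spanning tree over pairwise feature distances, as a symmetric 0/1 adjacency
    matrix capturing the global manifold structure of the feature space."""
    D = squareform(pdist(X))                    # dense pairwise Euclidean distances
    T = minimum_spanning_tree(D).toarray()      # tree edges with their weights
    A = (T > 0).astype(float)
    return np.maximum(A, A.T)                   # symmetrize

def classifier_covariance_penalty(W):
    """A simple covariance-style penalty on a (features x labels) weight matrix:
    trace of the label-wise covariance of the classifier columns, one plausible way
    to couple the per-label classifiers."""
    Wc = W - W.mean(axis=1, keepdims=True)
    return np.trace(Wc.T @ Wc) / W.shape[1]

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))
W = rng.normal(size=(8, 5))                     # 8 features, 5 candidate labels
print(mst_adjacency(X).sum() / 2, classifier_covariance_penalty(W))
```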