Minimally informed linear discriminant analysis: Training an LDA model with unlabelled data

IF 3.6 | CAS Zone 2 (Engineering & Technology) | JCR Q2 | ENGINEERING, ELECTRICAL & ELECTRONIC
Nicolas Heintz, Tom Francart, Alexander Bertrand
{"title":"最小知情线性判别分析:用未标记数据训练LDA模型","authors":"Nicolas Heintz ,&nbsp;Tom Francart ,&nbsp;Alexander Bertrand","doi":"10.1016/j.sigpro.2025.110226","DOIUrl":null,"url":null,"abstract":"<div><div>Linear Discriminant Analysis (LDA) is one of the oldest and most popular linear methods for supervised classification problems. Computing the optimal LDA projection vector requires calculating the average and covariance of the feature vectors of each class individually, which necessitates class labels to estimate these statistics from the data. In this paper we demonstrate that, if some minor prior information is available, it is possible to compute the exact projection vector from LDA models based on unlabelled data. More precisely, we show that either one of the following three pieces of information is sufficient to compute the LDA projection vector if only unlabelled data are available: (1) the class average of one of the two classes, (2) the difference between both class averages (up to a scaling), or (3) the class covariance matrices (up to a scaling). These theoretical results are validated in numerical experiments, demonstrating that this minimally informed Linear Discriminant Analysis (MILDA) model closely approximates the solution of a supervised LDA model, even on high-dimensional, poorly separated or extremely imbalanced data. Furthermore, we show that the MILDA projection vector can be computed in a closed form with a computational cost comparable to LDA and is able to quickly adapt to non-stationary data, making it well-suited to use as an adaptive classifier that is continuously retrained on (unlabelled) streaming data.</div></div>","PeriodicalId":49523,"journal":{"name":"Signal Processing","volume":"239 ","pages":"Article 110226"},"PeriodicalIF":3.6000,"publicationDate":"2025-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Minimally informed linear discriminant analysis: Training an LDA model with unlabelled data\",\"authors\":\"Nicolas Heintz ,&nbsp;Tom Francart ,&nbsp;Alexander Bertrand\",\"doi\":\"10.1016/j.sigpro.2025.110226\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Linear Discriminant Analysis (LDA) is one of the oldest and most popular linear methods for supervised classification problems. Computing the optimal LDA projection vector requires calculating the average and covariance of the feature vectors of each class individually, which necessitates class labels to estimate these statistics from the data. In this paper we demonstrate that, if some minor prior information is available, it is possible to compute the exact projection vector from LDA models based on unlabelled data. More precisely, we show that either one of the following three pieces of information is sufficient to compute the LDA projection vector if only unlabelled data are available: (1) the class average of one of the two classes, (2) the difference between both class averages (up to a scaling), or (3) the class covariance matrices (up to a scaling). These theoretical results are validated in numerical experiments, demonstrating that this minimally informed Linear Discriminant Analysis (MILDA) model closely approximates the solution of a supervised LDA model, even on high-dimensional, poorly separated or extremely imbalanced data. 
Furthermore, we show that the MILDA projection vector can be computed in a closed form with a computational cost comparable to LDA and is able to quickly adapt to non-stationary data, making it well-suited to use as an adaptive classifier that is continuously retrained on (unlabelled) streaming data.</div></div>\",\"PeriodicalId\":49523,\"journal\":{\"name\":\"Signal Processing\",\"volume\":\"239 \",\"pages\":\"Article 110226\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2025-08-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Signal Processing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0165168425003408\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0165168425003408","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Linear Discriminant Analysis (LDA) is one of the oldest and most popular linear methods for supervised classification problems. Computing the optimal LDA projection vector requires calculating the average and covariance of the feature vectors of each class individually, which necessitates class labels to estimate these statistics from the data. In this paper we demonstrate that, if some minor prior information is available, it is possible to compute the exact projection vector from LDA models based on unlabelled data. More precisely, we show that either one of the following three pieces of information is sufficient to compute the LDA projection vector if only unlabelled data are available: (1) the class average of one of the two classes, (2) the difference between both class averages (up to a scaling), or (3) the class covariance matrices (up to a scaling). These theoretical results are validated in numerical experiments, demonstrating that this minimally informed Linear Discriminant Analysis (MILDA) model closely approximates the solution of a supervised LDA model, even on high-dimensional, poorly separated or extremely imbalanced data. Furthermore, we show that the MILDA projection vector can be computed in a closed form with a computational cost comparable to LDA and is able to quickly adapt to non-stationary data, making it well-suited to use as an adaptive classifier that is continuously retrained on (unlabelled) streaming data.
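The mean-difference case (2) above can be illustrated with a standard covariance identity: at the population level, the total (label-free) covariance decomposes as Σ_T = S_w + p0·p1·(μ1−μ0)(μ1−μ0)ᵀ, so by the Sherman–Morrison formula Σ_T⁻¹(μ1−μ0) is collinear with the supervised LDA vector S_w⁻¹(μ1−μ0). The NumPy sketch below is a hypothetical illustration of this principle on synthetic data, not the authors' released code (the paper should be consulted for the actual MILDA estimators): it computes a supervised LDA vector from labelled samples and an unlabelled-data vector from the total covariance plus a mean-difference direction known only up to an arbitrary scale, then checks that the two are nearly collinear.

```python
# Hypothetical sketch of the "known mean-difference up to scaling" case.
# Labels are used only to build the supervised LDA baseline for comparison.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class Gaussian data, deliberately imbalanced.
mu0 = np.zeros(3)
mu1 = np.array([1.0, -0.5, 2.0])
cov = np.array([[2.0, 0.3, 0.1],
                [0.3, 1.0, 0.2],
                [0.1, 0.2, 1.5]])
n0, n1 = 700, 300
X0 = rng.multivariate_normal(mu0, cov, n0)
X1 = rng.multivariate_normal(mu1, cov, n1)
X = np.vstack([X0, X1])                     # treated as the unlabelled pool

# Supervised LDA: w ~ S_w^{-1} (mean1 - mean0), using the labels.
Sw = ((n0 - 1) * np.cov(X0, rowvar=False)
      + (n1 - 1) * np.cov(X1, rowvar=False)) / (n0 + n1 - 2)
w_lda = np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))

# Unlabelled variant: total covariance of the pooled data plus the class-mean
# difference known only up to an arbitrary (unknown) scale, here 3.7.
d_prior = 3.7 * (mu1 - mu0)
Sigma_total = np.cov(X, rowvar=False)
w_unlabelled = np.linalg.solve(Sigma_total, d_prior)

# With exact statistics the two vectors are exactly collinear; with sample
# estimates the cosine similarity should still be very close to 1.
cos = w_lda @ w_unlabelled / (np.linalg.norm(w_lda) * np.linalg.norm(w_unlabelled))
print(f"cosine similarity (supervised vs. unlabelled): {cos:.4f}")
```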
Source journal: Signal Processing (Engineering: Electrical & Electronic)
CiteScore: 9.20
Self-citation rate: 9.10%
Articles per year: 309
Average review time: 41 days
Journal description: Signal Processing incorporates all aspects of the theory and practice of signal processing. It features original research work, tutorial and review articles, and accounts of practical developments. It is intended for a rapid dissemination of knowledge and experience to engineers and scientists working in the research, development or practical application of signal processing. Subject areas covered by the journal include: Signal Theory; Stochastic Processes; Detection and Estimation; Spectral Analysis; Filtering; Signal Processing Systems; Software Developments; Image Processing; Pattern Recognition; Optical Signal Processing; Digital Signal Processing; Multi-dimensional Signal Processing; Communication Signal Processing; Biomedical Signal Processing; Geophysical and Astrophysical Signal Processing; Earth Resources Signal Processing; Acoustic and Vibration Signal Processing; Data Processing; Remote Sensing; Signal Processing Technology; Radar Signal Processing; Sonar Signal Processing; Industrial Applications; New Applications.