Source-Free Domain Adaptation with complex distribution considerations for time series data

IF 2.7 | CAS Tier 3, Computer Science | Q3 Computer Science, Artificial Intelligence
Jing Shang, Zunming Chen, Zhiwen Xiao, Zhihui Wu, Yifei Zhang, Jibing Wang
{"title":"考虑复杂分布的无源域自适应时间序列数据","authors":"Jing Shang,&nbsp;Zunming Chen,&nbsp;Zhiwen Xiao,&nbsp;Zhihui Wu,&nbsp;Yifei Zhang,&nbsp;Jibing Wang","doi":"10.1016/j.datak.2025.102501","DOIUrl":null,"url":null,"abstract":"<div><div>Source-Free Domain Adaptation (SFDA) aims to adapt a pre-trained model from a labeled source domain to an unlabeled target domain without accessing source domain data, thereby protecting source domain privacy. Although SFDA has recently been applied to time series data, the inherent complex distribution characteristics including temporal variability and distributional diversity of such data remain underexplored. Time series data exhibit significant dynamic variability influenced by collection environments, leading to discrepancies between sequences. Additionally, multidimensional time series data face distributional diversity across dimensions. These complex characteristics increase the learning difficulty for source models and widen the adaptation gap between the source and target domains. To address these challenges, this paper proposes a novel SFDA method for time series data, named Adaptive Latent Subdomain feature extraction and joint Prediction (ALSP). The method divides the source domain, which has a complex distribution, into multiple latent subdomains with relatively simple distributions, thereby effectively capturing the features of different subdistributions. It extracts latent domain-specific and domain-invariant features to identify subdomain-specific characteristics. Furthermore, it combines domain-specific classifiers and a domain-invariant classifier to enhance model performance through multi-classifier joint prediction. During target domain adaptation, ALSP reduces domain dependence by extracting invariant features, thereby narrowing the distributional gap between the source and target domains. Simultaneously, it leverages prior knowledge from the source domain distribution to support the hypothesis space and dynamically adapt to the target domain. Experiments on three real-world datasets demonstrate that ALSP achieves superior performance in cross-domain time series classification tasks, significantly outperforming existing methods.</div></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"161 ","pages":"Article 102501"},"PeriodicalIF":2.7000,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Source-Free Domain Adaptation with complex distribution considerations for time series data\",\"authors\":\"Jing Shang,&nbsp;Zunming Chen,&nbsp;Zhiwen Xiao,&nbsp;Zhihui Wu,&nbsp;Yifei Zhang,&nbsp;Jibing Wang\",\"doi\":\"10.1016/j.datak.2025.102501\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Source-Free Domain Adaptation (SFDA) aims to adapt a pre-trained model from a labeled source domain to an unlabeled target domain without accessing source domain data, thereby protecting source domain privacy. Although SFDA has recently been applied to time series data, the inherent complex distribution characteristics including temporal variability and distributional diversity of such data remain underexplored. Time series data exhibit significant dynamic variability influenced by collection environments, leading to discrepancies between sequences. Additionally, multidimensional time series data face distributional diversity across dimensions. 
These complex characteristics increase the learning difficulty for source models and widen the adaptation gap between the source and target domains. To address these challenges, this paper proposes a novel SFDA method for time series data, named Adaptive Latent Subdomain feature extraction and joint Prediction (ALSP). The method divides the source domain, which has a complex distribution, into multiple latent subdomains with relatively simple distributions, thereby effectively capturing the features of different subdistributions. It extracts latent domain-specific and domain-invariant features to identify subdomain-specific characteristics. Furthermore, it combines domain-specific classifiers and a domain-invariant classifier to enhance model performance through multi-classifier joint prediction. During target domain adaptation, ALSP reduces domain dependence by extracting invariant features, thereby narrowing the distributional gap between the source and target domains. Simultaneously, it leverages prior knowledge from the source domain distribution to support the hypothesis space and dynamically adapt to the target domain. Experiments on three real-world datasets demonstrate that ALSP achieves superior performance in cross-domain time series classification tasks, significantly outperforming existing methods.</div></div>\",\"PeriodicalId\":55184,\"journal\":{\"name\":\"Data & Knowledge Engineering\",\"volume\":\"161 \",\"pages\":\"Article 102501\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2025-08-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Data & Knowledge Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0169023X25000965\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data & Knowledge Engineering","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0169023X25000965","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Source-Free Domain Adaptation (SFDA) aims to adapt a pre-trained model from a labeled source domain to an unlabeled target domain without accessing source domain data, thereby protecting source domain privacy. Although SFDA has recently been applied to time series data, the inherent complex distribution characteristics including temporal variability and distributional diversity of such data remain underexplored. Time series data exhibit significant dynamic variability influenced by collection environments, leading to discrepancies between sequences. Additionally, multidimensional time series data face distributional diversity across dimensions. These complex characteristics increase the learning difficulty for source models and widen the adaptation gap between the source and target domains. To address these challenges, this paper proposes a novel SFDA method for time series data, named Adaptive Latent Subdomain feature extraction and joint Prediction (ALSP). The method divides the source domain, which has a complex distribution, into multiple latent subdomains with relatively simple distributions, thereby effectively capturing the features of different subdistributions. It extracts latent domain-specific and domain-invariant features to identify subdomain-specific characteristics. Furthermore, it combines domain-specific classifiers and a domain-invariant classifier to enhance model performance through multi-classifier joint prediction. During target domain adaptation, ALSP reduces domain dependence by extracting invariant features, thereby narrowing the distributional gap between the source and target domains. Simultaneously, it leverages prior knowledge from the source domain distribution to support the hypothesis space and dynamically adapt to the target domain. Experiments on three real-world datasets demonstrate that ALSP achieves superior performance in cross-domain time series classification tasks, significantly outperforming existing methods.
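The abstract describes ALSP only at a high level, so the following minimal PyTorch sketch illustrates one plausible wiring of the multi-classifier joint prediction it mentions: several subdomain-specific classifiers plus one domain-invariant classifier over a shared encoder, with their outputs combined into a single prediction. Everything concrete here (the module name JointPredictionSketch, the convolutional encoder, the fixed number of latent subdomains, and the uniform averaging of logits) is an illustrative assumption, not the paper's actual implementation.

```python
# Illustrative sketch only: module names, the fixed number of latent
# subdomains, and the uniform averaging used for joint prediction are
# assumptions for exposition; they are not taken from the paper.
import torch
import torch.nn as nn


class JointPredictionSketch(nn.Module):
    """Hypothetical layout of a multi-classifier joint predictor for
    multivariate time series: latent-subdomain-specific branches plus a
    domain-invariant branch over a shared temporal encoder."""

    def __init__(self, in_channels: int, num_classes: int,
                 num_subdomains: int = 3, hidden: int = 64):
        super().__init__()
        # Shared temporal encoder: 1D convolution over the time axis,
        # pooled to a fixed-size feature vector per sequence.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        # One feature head + classifier per latent subdomain (domain-specific).
        self.specific_heads = nn.ModuleList(
            nn.Linear(hidden, hidden) for _ in range(num_subdomains)
        )
        self.specific_clfs = nn.ModuleList(
            nn.Linear(hidden, num_classes) for _ in range(num_subdomains)
        )
        # Domain-invariant feature head and classifier.
        self.invariant_head = nn.Linear(hidden, hidden)
        self.invariant_clf = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        z = self.encoder(x)
        logits = [clf(torch.relu(head(z)))
                  for head, clf in zip(self.specific_heads, self.specific_clfs)]
        logits.append(self.invariant_clf(torch.relu(self.invariant_head(z))))
        # Joint prediction: here simply the average of all classifier logits.
        return torch.stack(logits, dim=0).mean(dim=0)


if __name__ == "__main__":
    model = JointPredictionSketch(in_channels=9, num_classes=6)
    dummy = torch.randn(4, 9, 128)   # e.g. a batch of 9-channel sensor windows
    print(model(dummy).shape)        # torch.Size([4, 6])
```

A real implementation would also need the adaptation stage described in the abstract (extracting invariant features on the unlabeled target domain to narrow the distribution gap); the sketch covers only the inference-time classifier layout.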
Source journal
Data & Knowledge Engineering
Category: Engineering Technology - Computer Science: Artificial Intelligence
CiteScore: 5.00
Self-citation rate: 0.00%
Articles published: 66
Review time: 6 months
Journal description: Data & Knowledge Engineering (DKE) stimulates the exchange of ideas and interaction between these two related fields of interest. DKE reaches a world-wide audience of researchers, designers, managers and users. The major aim of the journal is to identify, investigate and analyze the underlying principles in the design and effective use of these systems.