Multimodal cross-domain contrastive learning: A self-supervised generative and geometric framework for visual perception

Impact Factor: 8.1 · CAS Tier 1 (Computer Science) · JCR category: COMPUTER SCIENCE, INFORMATION SYSTEMS
S. Muhammad Ahmed Hassan Shah, Atif Rizwan, Muhammad Sardaraz, Muhammad Tahir, Nagwan Abdel Samee, Mona M. Jamjoom
{"title":"多模态跨域对比学习:视觉感知的自监督生成和几何框架","authors":"S. Muhammad Ahmed Hassan Shah ,&nbsp;Atif Rizwan ,&nbsp;Muhammad Sardaraz ,&nbsp;Muhammad Tahir ,&nbsp;Nagwan Abdel Samee ,&nbsp;Mona M. Jamjoom","doi":"10.1016/j.ins.2025.122239","DOIUrl":null,"url":null,"abstract":"<div><div>Self-Supervised Contrastive Representation Learning (SSCRL) has gained significant attention for its ability to learn meaningful representations from unlabeled data by leveraging contrastive learning principles. However, existing SSCRL approaches struggle with effectively handling heterogeneous data formats, particularly discrete and binary representations, limiting adaptability across multiple domains. This limitation hinders the generalization of learned representations, especially in applications requiring structured feature encoding and robust cross-domain adaptability. To address this, we propose the Modular QCB Learner, a novel algorithm designed to enhance representation learning for heterogeneous data types. This framework builds upon SSCRL by incorporating a Real Non-Volume Preserving transformation to optimize continuous representations, ensuring alignment with a Gaussian distribution. For discrete representation learning, vector quantization is utilized along with a Poisson distribution, while binary representations are modeled through nonlinear transformations and the Bernoulli distribution. Multi-Domain Mixture Optimization (MiDO) is introduced to facilitate joint optimization of different representation types by integrating multiple loss functions. To evaluate effectiveness, synthetic data generation is performed on extracted representations and compared with baselines. Experiments on CIFAR-10 confirm the Modular QCB Learner improves representation quality, demonstrating robustness across diverse data domains with applications in synthetic data generation, anomaly detection and multimodal learning.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"715 ","pages":"Article 122239"},"PeriodicalIF":8.1000,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multimodal cross-domain contrastive learning: A self-supervised generative and geometric framework for visual perception\",\"authors\":\"S. Muhammad Ahmed Hassan Shah ,&nbsp;Atif Rizwan ,&nbsp;Muhammad Sardaraz ,&nbsp;Muhammad Tahir ,&nbsp;Nagwan Abdel Samee ,&nbsp;Mona M. Jamjoom\",\"doi\":\"10.1016/j.ins.2025.122239\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Self-Supervised Contrastive Representation Learning (SSCRL) has gained significant attention for its ability to learn meaningful representations from unlabeled data by leveraging contrastive learning principles. However, existing SSCRL approaches struggle with effectively handling heterogeneous data formats, particularly discrete and binary representations, limiting adaptability across multiple domains. This limitation hinders the generalization of learned representations, especially in applications requiring structured feature encoding and robust cross-domain adaptability. To address this, we propose the Modular QCB Learner, a novel algorithm designed to enhance representation learning for heterogeneous data types. This framework builds upon SSCRL by incorporating a Real Non-Volume Preserving transformation to optimize continuous representations, ensuring alignment with a Gaussian distribution. 
For discrete representation learning, vector quantization is utilized along with a Poisson distribution, while binary representations are modeled through nonlinear transformations and the Bernoulli distribution. Multi-Domain Mixture Optimization (MiDO) is introduced to facilitate joint optimization of different representation types by integrating multiple loss functions. To evaluate effectiveness, synthetic data generation is performed on extracted representations and compared with baselines. Experiments on CIFAR-10 confirm the Modular QCB Learner improves representation quality, demonstrating robustness across diverse data domains with applications in synthetic data generation, anomaly detection and multimodal learning.</div></div>\",\"PeriodicalId\":51063,\"journal\":{\"name\":\"Information Sciences\",\"volume\":\"715 \",\"pages\":\"Article 122239\"},\"PeriodicalIF\":8.1000,\"publicationDate\":\"2025-04-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Sciences\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0020025525003718\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Sciences","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0020025525003718","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Self-Supervised Contrastive Representation Learning (SSCRL) has gained significant attention for its ability to learn meaningful representations from unlabeled data by leveraging contrastive learning principles. However, existing SSCRL approaches struggle with effectively handling heterogeneous data formats, particularly discrete and binary representations, limiting adaptability across multiple domains. This limitation hinders the generalization of learned representations, especially in applications requiring structured feature encoding and robust cross-domain adaptability. To address this, we propose the Modular QCB Learner, a novel algorithm designed to enhance representation learning for heterogeneous data types. This framework builds upon SSCRL by incorporating a Real Non-Volume Preserving transformation to optimize continuous representations, ensuring alignment with a Gaussian distribution. For discrete representation learning, vector quantization is utilized along with a Poisson distribution, while binary representations are modeled through nonlinear transformations and the Bernoulli distribution. Multi-Domain Mixture Optimization (MiDO) is introduced to facilitate joint optimization of different representation types by integrating multiple loss functions. To evaluate effectiveness, synthetic data generation is performed on extracted representations and compared with baselines. Experiments on CIFAR-10 confirm the Modular QCB Learner improves representation quality, demonstrating robustness across diverse data domains with applications in synthetic data generation, anomaly detection and multimodal learning.
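The abstract names four interacting pieces: a contrastive objective over augmented views, a Real NVP-style coupling transform that pushes continuous embeddings toward a Gaussian prior, vector quantization for discrete codes, and a Bernoulli head for binary codes, all combined by MiDO into one joint loss. The PyTorch sketch below is a minimal, hypothetical illustration of how such heads could share a single encoder; the architecture, dimensions, and loss weights are assumptions made for illustration and are not taken from the paper (which, per the abstract, also uses a Poisson distribution for the discrete head, omitted here).

```python
# Hypothetical sketch inspired by the Modular QCB Learner described in the abstract.
# All module names, dimensions, and loss weights are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AffineCoupling(nn.Module):
    """One Real NVP-style affine coupling layer: half the dims transform the other half."""
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, 128), nn.ReLU(),
            nn.Linear(128, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                          # keep the scale factor bounded
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)                     # log |det J| of the coupling
        return torch.cat([x1, y2], dim=1), log_det


class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through gradient estimator."""
    def __init__(self, num_codes, dim):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):
        dists = torch.cdist(z, self.codebook.weight)        # (B, num_codes)
        idx = dists.argmin(dim=1)
        z_q = self.codebook(idx)
        commit = F.mse_loss(z, z_q.detach()) + F.mse_loss(z_q, z.detach())
        z_q = z + (z_q - z).detach()                         # straight-through estimator
        return z_q, idx, commit


def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style normalized-temperature cross-entropy over two augmented views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / tau
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))               # drop self-similarities
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


class ModularLearnerSketch(nn.Module):
    """Continuous (flow), discrete (VQ), and binary (Bernoulli) heads over one encoder."""
    def __init__(self, in_dim=3 * 32 * 32, feat_dim=64, num_codes=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, feat_dim))
        self.coupling = AffineCoupling(feat_dim)
        self.vq = VectorQuantizer(num_codes, feat_dim)
        self.binary_head = nn.Linear(feat_dim, feat_dim)

    def forward(self, view1, view2):
        h1, h2 = self.encoder(view1), self.encoder(view2)

        # Contrastive term on the embeddings of the two augmented views.
        l_con = nt_xent(h1, h2)

        # Continuous head: Gaussian negative log-likelihood under the coupling flow.
        z, log_det = self.coupling(h1)
        l_gauss = (0.5 * z.pow(2).sum(dim=1) - log_det).mean()

        # Discrete head: vector-quantization commitment loss.
        _, _, l_vq = self.vq(h1)

        # Binary head: Bernoulli probabilities kept consistent across the two views.
        p1 = torch.sigmoid(self.binary_head(h1))
        p2 = torch.sigmoid(self.binary_head(h2))
        l_bin = F.binary_cross_entropy(p1, (p2 > 0.5).float().detach())

        # MiDO-style joint objective: a weighted mixture of the per-domain losses.
        # The weights are placeholders, not values reported in the paper.
        return l_con + 0.1 * l_gauss + 0.25 * l_vq + 0.1 * l_bin


if __name__ == "__main__":
    model = ModularLearnerSketch()
    v1 = torch.randn(8, 3 * 32 * 32)   # stand-ins for two augmented CIFAR-10 views
    v2 = torch.randn(8, 3 * 32 * 32)
    loss = model(v1, v2)
    loss.backward()
    print(float(loss))
```

In this reading, MiDO amounts to a weighted mixture of per-domain losses back-propagated through a shared encoder; the paper's actual weighting scheme and optimization schedule are not specified in the abstract.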
Source journal
Information Sciences (Engineering & Technology - Computer Science: Information Systems)
CiteScore: 14.00
Self-citation rate: 17.30%
Articles published: 1322
Average review time: 10.4 months
Journal introduction: Informatics and Computer Science Intelligent Systems Applications is an esteemed international journal that focuses on publishing original and creative research findings in the field of information sciences. We also feature a limited number of timely tutorial and surveying contributions. Our journal aims to cater to a diverse audience, including researchers, developers, managers, strategic planners, graduate students, and anyone interested in staying up-to-date with cutting-edge research in information science, knowledge engineering, and intelligent systems. While readers are expected to share a common interest in information science, they come from varying backgrounds such as engineering, mathematics, statistics, physics, computer science, cell biology, molecular biology, management science, cognitive science, neurobiology, behavioral sciences, and biochemistry.