Modal-NexT: Towards unified heterogeneous cellular data integration

Zhenchao Tang, Guanxing Chen, Shouzhi Chen, Haohuai He, Jiehui Huang, Tiejun Dong, Jun Zhou, Lu Zhao, Linlin You, Calvin Yu-Chian Chen

Information Fusion, Volume 125, Article 103479. Published 2025-07-11. DOI: 10.1016/j.inffus.2025.103479
https://www.sciencedirect.com/science/article/pii/S1566253525005524
Citations: 0
Abstract
Unified integration of heterogeneous cellular data is the foundation for building artificial intelligence virtual cells (AIVCs). Although Artificial Intelligence (AI) integration techniques continue to emerge, they remain isolated within individual scenarios. To move the field of high-resolution biological data analysis toward a unified multi-modal model, we propose Modal-Nexus Transductive learning (Modal-NexT), a unified and efficient integration paradigm. Modal-NexT covers four scenarios: paired multi-modal integration, unpaired multi-modal integration, spatial multi-modal integration, and multi-source integration. It uses transductive learning to capture biological context on a unified cell-feature joint graph. We have collected benchmark-ready datasets for the four integration scenarios and established comprehensive integration benchmarks; evaluations on these datasets verify the accuracy and robustness of Modal-NexT in cellular data integration.
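The abstract does not specify Modal-NexT's architecture, so the following is only a minimal illustrative sketch of the general idea it names: transductive learning over a joint graph whose nodes are both cells and features. Everything here is an assumption made for illustration, not the authors' method; the function names (build_joint_graph, propagate_labels), the edge-thresholding heuristic, and the choice of classic label propagation as the transductive learner are all hypothetical.

```python
# Illustrative sketch only (not the Modal-NexT implementation):
# transductive label propagation on a cell-feature joint graph.
import numpy as np
from scipy import sparse

def build_joint_graph(X, threshold=0.0):
    """Build a bipartite cell-feature adjacency from an expression matrix X
    (cells x features): cell i connects to feature j when X[i, j] > threshold."""
    B = sparse.csr_matrix((X > threshold).astype(float))  # cell-feature edges
    # Joint adjacency over (cells + features) nodes:
    #   [ 0    B ]
    #   [ B^T  0 ]
    return sparse.bmat([[None, B], [B.T, None]], format="csr")

def propagate_labels(A, y, n_classes, n_iter=20, alpha=0.9):
    """Transductive label propagation: y holds class ids for labeled cells
    and -1 for unlabeled nodes (including all feature nodes)."""
    n = A.shape[0]
    deg = np.asarray(A.sum(axis=1)).ravel()
    deg[deg == 0] = 1.0
    P = sparse.diags(1.0 / deg) @ A          # row-normalized transition matrix
    Y0 = np.zeros((n, n_classes))
    labeled = y >= 0
    Y0[labeled, y[labeled]] = 1.0            # one-hot seed labels
    F = Y0.copy()
    for _ in range(n_iter):
        F = alpha * (P @ F) + (1 - alpha) * Y0   # diffuse, then re-clamp seeds
    return F.argmax(axis=1)

# Toy usage: 5 cells x 4 features, three cells carry seed labels.
X = np.array([[5, 0, 1, 0],
              [4, 1, 0, 0],
              [0, 3, 4, 0],
              [0, 4, 5, 1],
              [1, 0, 4, 2]], dtype=float)
A = build_joint_graph(X)
y = np.full(A.shape[0], -1)
y[0], y[1], y[2] = 0, 0, 1
print(propagate_labels(A, y, n_classes=2)[:5])  # predicted label per cell
```

The transductive aspect is that unlabeled cells, together with all feature nodes, participate in the diffusion itself: predictions emerge jointly over the whole cell-feature graph rather than from a model trained first and applied to unseen cells afterward.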
Journal Introduction:
Information Fusion serves as a central platform for showcasing advances in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating applications to real-world problems, are welcome.