Multimodal Metaverse Healthcare: A Collaborative Representation and Adaptive Fusion Approach for Generative Artificial-Intelligence-Driven Diagnosis.

IF 11.0 · Tier 1 (Multidisciplinary journals) · Q1 Multidisciplinary
Research. Pub Date: 2025-03-12. eCollection Date: 2025-01-01. DOI: 10.34133/research.0616
Jianhui Lv, Adam Slowik, Shalli Rani, Byung-Gyu Kim, Chien-Ming Chen, Saru Kumari, Keqin Li, Xiaohong Lyu, Huamao Jiang
Citations: 0

Abstract


The metaverse enables immersive virtual healthcare environments, presenting opportunities for enhanced care delivery. A key challenge lies in effectively combining multimodal healthcare data with generative artificial intelligence capabilities within metaverse-based healthcare applications. This paper proposes MMLMH, a novel multimodal learning framework for metaverse healthcare based on collaborative intra- and intersample representation and adaptive fusion. Our framework introduces a collaborative representation learning approach that captures shared and modality-specific features across text, audio, and visual health data. By combining modality-specific and shared encoders with carefully formulated intrasample and intersample collaboration mechanisms, MMLMH achieves superior feature representation for complex health assessments. The framework's adaptive fusion approach, which uses attention mechanisms and gated neural networks, demonstrates robust performance across varying noise levels and data quality conditions. Experiments on metaverse healthcare datasets demonstrate MMLMH's superior performance over baseline methods across multiple evaluation metrics. Longitudinal studies and visualizations further illustrate MMLMH's adaptability to evolving virtual environments and its balanced performance across diagnostic accuracy, patient-system interaction efficacy, and data integration complexity. A distinctive advantage of the proposed framework is that it maintains a similar level of performance across diverse patient populations and virtual avatars, which could enable greater personalization of healthcare experiences in the metaverse. MMLMH's successful functioning in such complicated settings suggests that it can combine and process information streams from multiple sources, and that these capabilities can be leveraged for next-generation healthcare delivery through virtual reality.
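The abstract describes adaptive fusion via "attention mechanisms and gated neural networks" but gives no implementation details. Below is a minimal toy sketch of one plausible reading of that idea: each modality's encoder output receives a scalar sigmoid gate, the gates are normalized into attention-style weights, and the fused representation is the weighted sum. All names, shapes, and the gating formula here are hypothetical illustrations, not the authors' actual MMLMH architecture.

```python
import math
import random

random.seed(0)

DIM = 4          # toy embedding size shared by all modality encoders
MODALITIES = 3   # stand-ins for text, audio, and visual encoders

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(embeddings, gate_weights, gate_biases):
    """Fuse per-modality embeddings with scalar sigmoid gates that are
    normalized into attention-style weights before a weighted sum."""
    # scalar gate per modality: sigmoid(w_m . h_m + b_m)
    raw = [sigmoid(sum(w * h for w, h in zip(gate_weights[m], emb)) + gate_biases[m])
           for m, emb in enumerate(embeddings)]
    z = sum(raw)
    gates = [g / z for g in raw]          # normalize so the gates sum to 1
    fused = [sum(g * emb[d] for g, emb in zip(gates, embeddings))
             for d in range(DIM)]         # attention-weighted sum over modalities
    return fused, gates

# random vectors standing in for the three modality encoders' outputs
embeddings = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(MODALITIES)]
gate_weights = [[random.uniform(-0.5, 0.5) for _ in range(DIM)] for _ in range(MODALITIES)]
gate_biases = [0.0] * MODALITIES

fused, gates = gated_fusion(embeddings, gate_weights, gate_biases)
print(len(fused), round(sum(gates), 6))  # prints: 4 1.0
```

Because the gates depend on the embeddings themselves, a noisy or low-quality modality can be down-weighted at fusion time, which is consistent with the robustness-to-noise behavior the abstract reports.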

Source journal
Research (Multidisciplinary)
CiteScore: 13.40
Self-citation rate: 3.60%
Articles published: 0
Review time: 14 weeks
About the journal: Research serves as a global platform for academic exchange, collaboration, and technological advancements. This journal welcomes high-quality research contributions from any domain, with open arms to authors from around the globe. Comprising fundamental research in the life and physical sciences, Research also highlights significant findings and issues in engineering and applied science. The journal proudly features original research articles, reviews, perspectives, and editorials, fostering a diverse and dynamic scholarly environment.