{"title":"VisualRWKV-HM: Enhancing linear visual-language models via hybrid mixing","authors":"Haowen Hou , Fei Ma , Zihang Li, Fei Richard Yu","doi":"10.1016/j.inffus.2025.103336","DOIUrl":null,"url":null,"abstract":"<div><div>With the success of Large Language Models, Visual Language Models (VLMs) have also developed rapidly. However, existing VLMs often face limitations due to their quadratic time and space complexity, which poses challenges for training and deployment. Linear VLMs have emerged as a solution, providing linear time and space complexity, along with advantages in training and deployment. Nevertheless, a performance gap remains compared to state-of-the-art (SOTA) VLMs. This paper proposes VisualRWKV-HM, a model with linear complexity that incorporates a hybrid mixing mechanism combining time mixing and cross state mixing. This design achieves an optimal balance in information utilization, enhancing performance and offering flexibility for various tasks. VisualRWKV-HM achieves SOTA performance across single-image, multi-image, and multi-view benchmarks, and significantly outperforms the vanilla VisualRWKV. It demonstrates high computational efficiency with a context length of 24K, being 2.96 times faster and reducing memory usage by 45.38% compared to the Transformer-based LLaVA-1.5. When compared to LongLLaVA, a hybrid model based on the Transformer-Mamba architecture, it consumes less memory and achieves a 24% improvement in throughput at a context length of 16K. Additionally, we show that VisualRWKV-HM has strong scalability, with the potential for improved performance by scaling up the state encoder and decoder.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"124 ","pages":"Article 103336"},"PeriodicalIF":15.5000,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525004099","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
With the success of Large Language Models, Visual Language Models (VLMs) have also developed rapidly. However, existing VLMs often face limitations due to their quadratic time and space complexity, which poses challenges for training and deployment. Linear VLMs have emerged as a solution, providing linear time and space complexity along with advantages in training and deployment. Nevertheless, a performance gap remains compared to state-of-the-art (SOTA) VLMs. This paper proposes VisualRWKV-HM, a model with linear complexity that incorporates a hybrid mixing mechanism combining time mixing and cross-state mixing. This design balances information utilization, enhancing performance and offering flexibility for various tasks. VisualRWKV-HM achieves SOTA performance across single-image, multi-image, and multi-view benchmarks, and significantly outperforms the vanilla VisualRWKV. It is also computationally efficient: at a context length of 24K, it runs 2.96 times faster than the Transformer-based LLaVA-1.5 while reducing memory usage by 45.38%. Compared to LongLLaVA, a hybrid model based on the Transformer-Mamba architecture, it consumes less memory and achieves 24% higher throughput at a context length of 16K. Additionally, we show that VisualRWKV-HM scales well, with the potential for further performance gains by scaling up the state encoder and decoder.
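To make the hybrid mixing idea concrete, below is a minimal, hypothetical sketch of how such a block might pair an RWKV-style linear time-mixing recurrence over text tokens with a cross-state read of visual features. Everything here — the module name HybridMixingBlock, the parameter names (decay, q_cross, k_cross, v_cross), the tensor shapes, and the fusion rule — is an illustrative assumption, not the paper's actual formulation. Note that the cross-state path attends over a fixed number of visual states, so the block remains linear in the text length, consistent with the linear-complexity claim.

```python
# Hypothetical sketch of a hybrid mixing block: an RWKV-style linear
# time-mixing recurrence over text tokens fused with a cross-state read
# of visual features. All names, shapes, and the combination rule are
# assumptions for illustration; the paper's formulation may differ.
import torch
import torch.nn as nn


class HybridMixingBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Time mixing: linear-complexity recurrence over the text sequence.
        self.receptance = nn.Linear(dim, dim, bias=False)
        self.key = nn.Linear(dim, dim, bias=False)
        self.value = nn.Linear(dim, dim, bias=False)
        self.decay = nn.Parameter(torch.zeros(dim))  # per-channel decay
        # Cross-state mixing: text tokens query the visual states.
        self.q_cross = nn.Linear(dim, dim, bias=False)
        self.k_cross = nn.Linear(dim, dim, bias=False)
        self.v_cross = nn.Linear(dim, dim, bias=False)
        self.out = nn.Linear(2 * dim, dim, bias=False)

    def forward(self, text: torch.Tensor, visual_state: torch.Tensor):
        # text: (B, T, D) token features; visual_state: (B, S, D) image states.
        B, T, D = text.shape
        r = torch.sigmoid(self.receptance(text))
        k, v = self.key(text), self.value(text)
        w = torch.exp(-torch.exp(self.decay))  # decay kept in (0, 1)

        # Linear recurrence: a fixed-size state carries a decaying sum of k*v,
        # so cost is O(T) in sequence length with O(1) memory per channel.
        state = torch.zeros(B, D, device=text.device, dtype=text.dtype)
        time_mixed = []
        for t in range(T):
            state = w * state + k[:, t] * v[:, t]
            time_mixed.append(r[:, t] * state)
        time_mixed = torch.stack(time_mixed, dim=1)  # (B, T, D)

        # Cross-state mixing: each text token reads from the S visual states;
        # S is fixed, so this stays linear in the text length T.
        q = self.q_cross(text)                                    # (B, T, D)
        attn = torch.softmax(
            q @ self.k_cross(visual_state).transpose(1, 2) / D**0.5, dim=-1
        )                                                         # (B, T, S)
        cross_mixed = attn @ self.v_cross(visual_state)           # (B, T, D)

        # Fuse the two mixing paths.
        return self.out(torch.cat([time_mixed, cross_mixed], dim=-1))
```

A quick smoke test under these assumptions: `block = HybridMixingBlock(64)` followed by `block(torch.randn(2, 16, 64), torch.randn(2, 8, 64))` returns a (2, 16, 64) tensor, i.e., one fused feature per text token.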
Journal Introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.