{"title":"多模态大语言模型调优的最新进展","authors":"Zhen Wang, Lin Li, Long Chen","doi":"10.1002/aaai.70025","DOIUrl":null,"url":null,"abstract":"<p>Finetuning serves as the critical adaptation mechanism for multimodal large language models, bridging their pretrained knowledge with specialized downstream task requirements. This paper reviews recent finetuning advances across three key dimensions: (1) efficiency-oriented methods that reduce resource costs; (2) capability-specific techniques enhancing specialized multimodal skills; and (3) task-unifying approaches that bridge understanding and generation. We demonstrate how these directions transform multimodal large language models from versatile foundations into adaptive, human-aligned systems, providing researchers with a structured roadmap for developing next-generation multimodal AI.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 3","pages":""},"PeriodicalIF":3.2000,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70025","citationCount":"0","resultStr":"{\"title\":\"Recent advances in finetuning multimodal large language models\",\"authors\":\"Zhen Wang, Lin Li, Long Chen\",\"doi\":\"10.1002/aaai.70025\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Finetuning serves as the critical adaptation mechanism for multimodal large language models, bridging their pretrained knowledge with specialized downstream task requirements. This paper reviews recent finetuning advances across three key dimensions: (1) efficiency-oriented methods that reduce resource costs; (2) capability-specific techniques enhancing specialized multimodal skills; and (3) task-unifying approaches that bridge understanding and generation. 
We demonstrate how these directions transform multimodal large language models from versatile foundations into adaptive, human-aligned systems, providing researchers with a structured roadmap for developing next-generation multimodal AI.</p>\",\"PeriodicalId\":7854,\"journal\":{\"name\":\"Ai Magazine\",\"volume\":\"46 3\",\"pages\":\"\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2025-09-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70025\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ai Magazine\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/aaai.70025\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ai Magazine","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/aaai.70025","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Recent advances in finetuning multimodal large language models
Finetuning serves as the critical adaptation mechanism for multimodal large language models, bridging their pretrained knowledge with specialized downstream task requirements. This paper reviews recent finetuning advances across three key dimensions: (1) efficiency-oriented methods that reduce resource costs; (2) capability-specific techniques enhancing specialized multimodal skills; and (3) task-unifying approaches that bridge understanding and generation. We demonstrate how these directions transform multimodal large language models from versatile foundations into adaptive, human-aligned systems, providing researchers with a structured roadmap for developing next-generation multimodal AI.
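To make the first dimension concrete, the sketch below illustrates one well-known family of efficiency-oriented methods: low-rank adaptation (LoRA-style) finetuning, where the pretrained weight matrix is frozen and only two small low-rank factors are trained. This is a minimal pure-Python illustration, not code from the paper; all shapes and hyperparameter values here are hypothetical.

```python
# LoRA-style low-rank update (illustrative sketch, not the paper's code).
# The frozen pretrained weight W (d x d) is augmented with a trainable
# low-rank delta: W_eff = W + (alpha / r) * B @ A, where A is (r x d)
# and B is (d x r). Only A and B are updated during finetuning.

def matmul(X, Y):
    # Naive matrix multiply, kept dependency-free for illustration.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_weight(W, A, B, alpha=16, r=1):
    scale = alpha / r
    delta = matmul(B, A)  # (d x r) @ (r x d) -> (d x d)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Tiny hypothetical example: frozen 8x8 weight, rank-1 adapters.
d, r = 8, 1
W = [[0.0] * d for _ in range(d)]   # frozen pretrained weight
A = [[0.1] * d for _ in range(r)]   # trainable factor, r x d
B = [[0.1] * r for _ in range(d)]   # trainable factor, d x r

W_eff = lora_weight(W, A, B, alpha=16, r=r)

full = d * d            # parameters updated by full finetuning
lora = r * d + d * r    # parameters updated by the low-rank adapters
print(full, lora)       # → 64 16
```

Even in this toy setting, the adapters update 16 parameters instead of 64; at realistic model sizes (d in the thousands, r « d), the same arithmetic is what makes such methods dramatically cheaper than full finetuning.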
Journal introduction:
AI Magazine publishes original articles that are reasonably self-contained and aimed at a broad spectrum of the AI community. Technical content should be kept to a minimum. In general, the magazine does not publish articles that have been published elsewhere in whole or in part. The magazine welcomes contributions on the theory and practice of AI, as well as general survey articles, tutorial articles on timely topics, reports on conferences, symposia, and workshops, and timely columns on topics of interest to AI scientists.