MI3S: A multimodal large language model assisted quality assessment framework for AI-generated talking heads

Impact Factor 6.9 · CAS Region 1 (Management) · JCR Q1, Computer Science, Information Systems
Yingjie Zhou, Zicheng Zhang, Sijing Wu, Jun Jia, Yanwei Jiang, Wei Sun, Xiaohong Liu, Xiongkuo Min, Guangtao Zhai
DOI: 10.1016/j.ipm.2025.104321
Journal: Information Processing & Management, Volume 63, Issue 1, Article 104321
Published: 2025-08-12 (Journal Article)
Citations: 0

Abstract

Although current speech-driven technologies enable the rapid generation of AI-generated talking heads (AGTHs), human supervision remains necessary to ensure the quality of the output. However, manual evaluation becomes increasingly impractical for large-scale AGTH production due to its time-consuming and labor-intensive nature. To overcome this limitation, we propose a novel objective quality assessment framework, MI3S, which employs a Multimodal Large Language Model (MLLM) to evaluate AGTHs across four key dimensions: Image quality, Image aesthetics, Identity consistency, and Sound-lip synchronization. To capture temporal dynamics more effectively, we introduce a variable-length video memory filter (VVMF), inspired by principles of human visual cognition. The MI3S framework supports both zero-shot inference and supervised learning paradigms. On the THQA dataset comprising 800 AGTHs, MI3S achieves a prediction-human perceptual correlation coefficient of 0.7946, which exceeds that of existing quality assessment methods by 3.4%, thereby offering an efficient, robust, and objective solution for evaluating AGTH quality.
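The abstract reports a prediction-human perceptual correlation coefficient of 0.7946. In quality assessment work, such figures are typically Pearson (PLCC) or Spearman (SROCC) correlations between model-predicted scores and human mean opinion scores (MOS); the abstract does not specify which is used here. A minimal sketch of the Pearson version, with illustrative toy data that is not from the paper:

```python
import math

def pearson_corr(pred, mos):
    """Pearson linear correlation between predicted quality scores
    and human mean opinion scores (MOS)."""
    n = len(pred)
    mean_p = sum(pred) / n
    mean_m = sum(mos) / n
    cov = sum((p - mean_p) * (m - mean_m) for p, m in zip(pred, mos))
    std_p = math.sqrt(sum((p - mean_p) ** 2 for p in pred))
    std_m = math.sqrt(sum((m - mean_m) ** 2 for m in mos))
    return cov / (std_p * std_m)

# Toy example: model scores vs. human MOS for five hypothetical clips
pred = [3.1, 4.2, 2.5, 4.8, 3.6]
mos = [3.0, 4.5, 2.2, 4.9, 3.4]
print(round(pearson_corr(pred, mos), 4))
```

In practice, evaluation protocols for such metrics often fit a nonlinear (e.g. logistic) mapping from predictions to MOS before computing PLCC; the raw correlation above is the simplest variant.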
Source journal
Information Processing & Management (Engineering/Technology — Computer Science, Information Systems)
CiteScore: 17.00
Self-citation rate: 11.60%
Articles per year: 276
Review time: 39 days
Journal description: Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology marketing, and social computing. We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.