Model-in-the-Loop (MILO): Accelerating Multimodal AI Data Annotation with LLMs

Yifan Wang, David Stevens, Pranay Shah, Wenwen Jiang, Miao Liu, Xu Chen, Robert Kuo, Na Li, Boying Gong, Daniel Lee, Jiabo Hu, Ning Zhang, Bob Kamma
Published 2024-09-16 in arXiv - CS - Human-Computer Interaction (DOI: arxiv-2409.10702).
Citations: 0

Abstract

The growing demand for AI training data has transformed data annotation into a global industry, but traditional approaches relying on human annotators are often time-consuming, labor-intensive, and prone to inconsistent quality. We propose the Model-in-the-Loop (MILO) framework, which integrates AI/ML models into the annotation process. Our research introduces a collaborative paradigm that leverages the strengths of both professional human annotators and large language models (LLMs). By employing LLMs as pre-annotation and real-time assistants, and judges on annotator responses, MILO enables effective interaction patterns between human annotators and LLMs. Three empirical studies on multimodal data annotation demonstrate MILO's efficacy in reducing handling time, improving data quality, and enhancing annotator experiences. We also introduce quality rubrics for flexible evaluation and fine-grained feedback on open-ended annotations. The MILO framework has implications for accelerating AI/ML development, reducing reliance on human annotation alone, and promoting better alignment between human and machine values.
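The abstract describes three LLM roles in the annotation loop: pre-annotator, real-time assistant, and judge of annotator responses against quality rubrics. The sketch below illustrates that loop in miniature; the paper does not publish its prompts, models, or rubric definitions, so every name, criterion, and weight here is a hypothetical stand-in, with deterministic stubs in place of real LLM calls.

```python
# Hypothetical sketch of a model-in-the-loop annotation pass: an LLM drafts a
# pre-annotation, a human annotator reviews/edits it, and an LLM judge scores
# the result against a weighted rubric. All names and weights are illustrative.
from dataclasses import dataclass, field

# Hypothetical rubric: criterion -> weight (weights sum to 1.0).
RUBRIC = {"accuracy": 0.5, "completeness": 0.3, "clarity": 0.2}

@dataclass
class AnnotationTask:
    item: str                    # the item to annotate (here: a caption stub)
    pre_annotation: str = ""     # LLM draft shown to the human annotator
    final_annotation: str = ""   # annotator-approved result
    judge_scores: dict = field(default_factory=dict)

def pre_annotate(task: AnnotationTask, llm) -> AnnotationTask:
    """Stage 1: the LLM drafts an annotation before the human sees the item."""
    task.pre_annotation = llm(f"Annotate: {task.item}")
    return task

def human_review(task: AnnotationTask, edit_fn) -> AnnotationTask:
    """Stage 2: a professional annotator accepts or edits the LLM draft."""
    task.final_annotation = edit_fn(task.pre_annotation)
    return task

def judge(task: AnnotationTask, score_fn) -> float:
    """Stage 3: an LLM judge scores the final annotation per rubric criterion
    (each in [0, 1]) and returns the rubric-weighted overall score."""
    task.judge_scores = {c: score_fn(task.final_annotation, c) for c in RUBRIC}
    return sum(RUBRIC[c] * s for c, s in task.judge_scores.items())

# Deterministic stubs stand in for real LLM calls so the sketch runs offline.
task = AnnotationTask(item="photo of a cat on a sofa")
task = pre_annotate(task, llm=lambda prompt: "a cat resting on a sofa")
task = human_review(task, edit_fn=lambda draft: draft + ", indoors")
overall = judge(task, score_fn=lambda text, criterion: 1.0)  # stub: perfect scores
print(round(overall, 6))  # weighted sum of per-criterion judge scores
```

In this framing, the judge's per-criterion scores are what enables the "fine-grained feedback on open-ended annotations" the abstract mentions: the annotator sees which rubric dimension fell short rather than a single pass/fail verdict.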