Are metrics measuring what they should? An evaluation of Image Captioning task metrics

IF 3.4 · CAS Tier 3 (Engineering & Technology) · JCR Q2 · ENGINEERING, ELECTRICAL & ELECTRONIC
Othón González-Chávez, Guillermo Ruiz, Daniela Moctezuma, Tania Ramirez-delReal
DOI: 10.1016/j.image.2023.117071
Journal: Signal Processing-Image Communication, Volume 120, Article 117071
Published: 2023-10-14 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0923596523001534
Citations: 0

Abstract

Image Captioning is an active research task that aims to describe an image's content in terms of the objects in the scene and their relationships. Two research areas converge to tackle it: computer vision and natural language processing. In Image Captioning, as in any computational intelligence task, performance metrics are crucial for knowing how well (or how badly) a method performs. In recent years, it has become apparent that classical n-gram-based metrics are insufficient to capture the semantics and the essential meaning needed to describe the content of an image. To assess how well the current and more recent metrics are doing, this article presents an evaluation of several kinds of Image Captioning metrics and a comparison between them on the well-known MS-COCO and Flickr8k datasets. The metrics were selected from among the most used in prior work: those based on n-grams, such as BLEU, SacreBLEU, METEOR, ROUGE-L, CIDEr, and SPICE, and those based on embeddings, such as BERTScore and CLIPScore. Two evaluation scenarios were designed: (1) a set of artificially constructed captions of varying quality and (2) a comparison of several state-of-the-art Image Captioning methods. Interesting findings emerged in answering the questions: Are the current metrics helping to produce high-quality captions? How do the metrics compare to each other? What are the metrics really measuring?
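The abstract's central claim — that n-gram metrics miss semantics — can be illustrated with a minimal sketch. The function below is not any of the paper's metrics; it is an illustrative implementation of clipped unigram precision, the core building block of BLEU-style scores (the example captions are invented for demonstration). A correct paraphrase with different words scores lower than a semantically wrong caption that happens to share words with the reference:

```python
from collections import Counter

def ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision: the fraction of candidate n-grams
    that also appear in the reference (counts clipped per n-gram)."""
    cand_tokens = candidate.lower().split()
    ref_tokens = reference.lower().split()
    cand_ngrams = Counter(tuple(cand_tokens[i:i + n])
                          for i in range(len(cand_tokens) - n + 1))
    ref_ngrams = Counter(tuple(ref_tokens[i:i + n])
                         for i in range(len(ref_tokens) - n + 1))
    if not cand_ngrams:
        return 0.0
    overlap = sum(min(count, ref_ngrams[g]) for g, count in cand_ngrams.items())
    return overlap / sum(cand_ngrams.values())

reference  = "a dog runs across the grass"
paraphrase = "a puppy sprints over the lawn"   # same meaning, different words
unrelated  = "a dog runs across the street"    # wrong scene, high word overlap

print(ngram_precision(paraphrase, reference))  # low score despite correct semantics
print(ngram_precision(unrelated, reference))   # high score despite wrong content
```

This inversion — surface overlap rewarded over meaning — is precisely the weakness that motivates embedding-based alternatives such as BERTScore and CLIPScore.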

Source journal: Signal Processing-Image Communication (Engineering & Technology — Engineering: Electrical & Electronic)
CiteScore: 8.40
Self-citation rate: 2.90%
Articles per year: 138
Review time: 5.2 months
Aims and scope: Signal Processing: Image Communication is an international journal for the development of the theory and practice of image communication. Its primary objectives are the following: to present a forum for the advancement of theory and practice of image communication; to stimulate cross-fertilization between areas similar in nature which have traditionally been separated, for example, various aspects of visual communications and information systems; and to contribute to a rapid information exchange between the industrial and academic environments.

The editorial policy and the technical content of the journal are the responsibility of the Editor-in-Chief, the Area Editors and the Advisory Editors. The journal is self-supporting from subscription income and contains a minimum amount of advertisements, which are subject to the prior approval of the Editor-in-Chief. The journal welcomes contributions from every country in the world.

Signal Processing: Image Communication publishes articles relating to aspects of the design, implementation and use of image communication systems. The journal features original research work, tutorial and review articles, and accounts of practical developments. Subjects of interest include image/video coding, 3D video representations and compression, 3D graphics and animation compression, HDTV and 3DTV systems, video adaptation, video over IP, peer-to-peer video networking, interactive visual communication, multi-user video conferencing, wireless video broadcasting and communication, visual surveillance, 2D and 3D image/video quality measures, pre/post processing, video restoration and super-resolution, multi-camera video analysis, motion analysis, content-based image/video indexing and retrieval, face and gesture processing, video synthesis, 2D and 3D image/video acquisition and display technologies, and architectures for image/video processing and communication.