DEArt: Building and evaluating a dataset for object detection and pose classification for European art

IF 3.3 | CAS Zone 2 | Multidisciplinary journal | ARCHAEOLOGY
Artem Reshetnikov , Maria-Cristina Marinescu , Joaquim More Lopez , Sergio Mendoza , Nuno Freire , Monica Marrero , Eleftheria Tsoupra , Antoine Isaac
{"title":"DEArt:建立和评估欧洲艺术对象检测和姿态分类的数据集","authors":"Artem Reshetnikov ,&nbsp;Maria-Cristina Marinescu ,&nbsp;Joaquim More Lopez ,&nbsp;Sergio Mendoza ,&nbsp;Nuno Freire ,&nbsp;Monica Marrero ,&nbsp;Eleftheria Tsoupra ,&nbsp;Antoine Isaac","doi":"10.1016/j.culher.2025.07.022","DOIUrl":null,"url":null,"abstract":"<div><div>Annotation of cultural heritage artefacts allows finding and exploration of items relevant to user needs, supports functionality such as question answering or scene understanding, and in general facilitates the exposure of the society to our history and heritage. But most artefacts lack a description of their visual content due to the assumption that one sees the object; this often means that the annotations effort focuses on the historical and artistic context, information about the painter, or details about the execution and medium.</div><div>Without a significant body of visual content annotation, machines cannot integrate all this data to allow further analysis, query and inference, and cultural institutions cannot offer advanced functionality to their users and visitors. Given how time-consuming manual annotation is, and to enable the development of new technology and applications for cultural heritage, we have provided through DEArt the most extensive art dataset for object detection and pose classification to date. The current paper extends this work in several ways: (1) we introduce an approach for generating refined object and relationship labels without the need for manual annotations, (2) we compare the performance of our models with the most relevant state-of-the-art in both computer vision and cultural heritage, (3) we evaluate the annotations generated by our object detection model from a user viewpoint, for both correctness and relevance, and (4) we briefly discuss the fairness of our dataset.</div></div>","PeriodicalId":15480,"journal":{"name":"Journal of Cultural Heritage","volume":"75 ","pages":"Pages 258-266"},"PeriodicalIF":3.3000,"publicationDate":"2025-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DEArt: Building and evaluating a dataset for object detection and pose classification for European art\",\"authors\":\"Artem Reshetnikov ,&nbsp;Maria-Cristina Marinescu ,&nbsp;Joaquim More Lopez ,&nbsp;Sergio Mendoza ,&nbsp;Nuno Freire ,&nbsp;Monica Marrero ,&nbsp;Eleftheria Tsoupra ,&nbsp;Antoine Isaac\",\"doi\":\"10.1016/j.culher.2025.07.022\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Annotation of cultural heritage artefacts allows finding and exploration of items relevant to user needs, supports functionality such as question answering or scene understanding, and in general facilitates the exposure of the society to our history and heritage. But most artefacts lack a description of their visual content due to the assumption that one sees the object; this often means that the annotations effort focuses on the historical and artistic context, information about the painter, or details about the execution and medium.</div><div>Without a significant body of visual content annotation, machines cannot integrate all this data to allow further analysis, query and inference, and cultural institutions cannot offer advanced functionality to their users and visitors. 
Given how time-consuming manual annotation is, and to enable the development of new technology and applications for cultural heritage, we have provided through DEArt the most extensive art dataset for object detection and pose classification to date. The current paper extends this work in several ways: (1) we introduce an approach for generating refined object and relationship labels without the need for manual annotations, (2) we compare the performance of our models with the most relevant state-of-the-art in both computer vision and cultural heritage, (3) we evaluate the annotations generated by our object detection model from a user viewpoint, for both correctness and relevance, and (4) we briefly discuss the fairness of our dataset.</div></div>\",\"PeriodicalId\":15480,\"journal\":{\"name\":\"Journal of Cultural Heritage\",\"volume\":\"75 \",\"pages\":\"Pages 258-266\"},\"PeriodicalIF\":3.3000,\"publicationDate\":\"2025-08-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Cultural Heritage\",\"FirstCategoryId\":\"103\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1296207425001542\",\"RegionNum\":2,\"RegionCategory\":\"综合性期刊\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"ARCHAEOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Cultural Heritage","FirstCategoryId":"103","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1296207425001542","RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"ARCHAEOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Annotation of cultural heritage artefacts allows finding and exploring items relevant to user needs, supports functionality such as question answering or scene understanding, and in general facilitates society's exposure to our history and heritage. But most artefacts lack a description of their visual content due to the assumption that one sees the object; this often means that the annotation effort focuses on the historical and artistic context, information about the painter, or details about the execution and medium.
Without a significant body of visual content annotation, machines cannot integrate all this data to allow further analysis, querying and inference, and cultural institutions cannot offer advanced functionality to their users and visitors. Given how time-consuming manual annotation is, and to enable the development of new technology and applications for cultural heritage, we have provided through DEArt the most extensive art dataset for object detection and pose classification to date. The current paper extends this work in several ways: (1) we introduce an approach for generating refined object and relationship labels without the need for manual annotations, (2) we compare the performance of our models with the most relevant state of the art in both computer vision and cultural heritage, (3) we evaluate the annotations generated by our object detection model from a user viewpoint, for both correctness and relevance, and (4) we briefly discuss the fairness of our dataset.
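To make the abstract's notion of object detection on paintings concrete, the sketch below runs an off-the-shelf detector over an artwork image and prints confident detections. This is a generic illustration only, not the authors' DEArt model: the file name painting.jpg is a hypothetical placeholder, and the stock COCO label set used here lacks the culture-specific classes DEArt was built to cover.

```python
# Generic illustration only: a stock COCO-pretrained detector, NOT the
# DEArt model from the paper; "painting.jpg" is a hypothetical local file.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

img = read_image("painting.jpg")          # uint8 tensor, shape CxHxW
batch = [weights.transforms()(img)]       # preprocessing bundled with the weights

with torch.no_grad():
    pred = model(batch)[0]                # dict with "boxes", "labels", "scores"

names = weights.meta["categories"]        # COCO class names, not DEArt's classes
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score >= 0.5:                      # keep only confident detections
        print(f"{names[label]}: {score:.2f} at {box.tolist()}")
```

Fine-tuning a detector of this kind on DEArt's annotations, rather than on COCO, is what would surface the art-specific classes the paper targets.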
Source journal
Journal of Cultural Heritage (Multidisciplinary | Materials Science: Multidisciplinary)
CiteScore: 6.80
Self-citation rate: 9.70%
Articles per year: 166
Review time: 52 days
About the journal: The Journal of Cultural Heritage publishes original papers which comprise previously unpublished data and present innovative methods concerning all aspects of science and technology of cultural heritage, as well as interpretation and theoretical issues related to preservation.