Sign detection for cuneiform tablets

Yunus Cobanoglu, Luis Sáenz, Ilya Khait, Enrique Jiménez
{"title":"Sign detection for cuneiform tablets","authors":"Yunus Cobanoglu, Luis Sáenz, Ilya Khait, Enrique Jiménez","doi":"10.1515/itit-2024-0028","DOIUrl":null,"url":null,"abstract":"\n Among the many excavated cuneiform tablets, only a small portion has been analyzed by Assyriologists. Learning how to read cuneiform is a lengthy and challenging process that can take years to complete. This work aims to improve the automatic detection of cuneiform signs from 2D images of cuneiform tablets. The results can later be used for NLP tasks such as semantic annotation, word alignment and machine translation to assist Assyriologists in their research. We introduce the largest publicly available annotated dataset of cuneiform signs to date. It comprises of 52,102 signs from 315 fully annotated tablets, equating to 512 distinct images. In addition, we have preprocessed and refined four existing datasets, resulting in a comprehensive collection of 88,536 signs. Since some signs are not localized on fully annotated tablets, the total dataset encompasses 593 fully annotated cuneiform tablets, resulting in 654 images. Our efforts to expand this dataset are ongoing. Furthermore, we evaluate two state-of-the-art methods to establish benchmarks in the field. The first is a two-stage supervised sign detection approach that involves: (1) the identification of bounding boxes, and (2) the classification of each sign within these boxes. The second method employs an object detection model. Given the numerous classes and their varied distribution, the task of cuneiform sign detection poses a significant challenge in machine learning. 
This paper aims to lay a groundwork for future research, offering both a substantial dataset and initial methodologies for sign detection on cuneiform tablets.","PeriodicalId":512610,"journal":{"name":"it - Information Technology","volume":"19 9","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"it - Information Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1515/itit-2024-0028","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Among the many excavated cuneiform tablets, only a small portion has been analyzed by Assyriologists. Learning how to read cuneiform is a lengthy and challenging process that can take years to complete. This work aims to improve the automatic detection of cuneiform signs from 2D images of cuneiform tablets. The results can later be used for NLP tasks such as semantic annotation, word alignment, and machine translation to assist Assyriologists in their research. We introduce the largest publicly available annotated dataset of cuneiform signs to date. It comprises 52,102 signs from 315 fully annotated tablets, equating to 512 distinct images. In addition, we have preprocessed and refined four existing datasets, resulting in a comprehensive collection of 88,536 signs. Since some signs are not localized on fully annotated tablets, the total dataset encompasses 593 fully annotated cuneiform tablets, resulting in 654 images. Our efforts to expand this dataset are ongoing. Furthermore, we evaluate two state-of-the-art methods to establish benchmarks in the field. The first is a two-stage supervised sign detection approach that involves: (1) the identification of bounding boxes, and (2) the classification of each sign within these boxes. The second method employs an object detection model. Given the numerous classes and their varied distribution, the task of cuneiform sign detection poses a significant challenge in machine learning. This paper aims to lay a groundwork for future research, offering both a substantial dataset and initial methodologies for sign detection on cuneiform tablets.
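The two-stage approach described in the abstract — first localize bounding boxes, then classify the sign inside each box — can be sketched as a simple pipeline. This is a minimal illustrative sketch, not the authors' implementation: the proposer and classifier here are hypothetical stand-ins for trained models (in practice, e.g., a region-proposal detector and a CNN classifier), and the sign label used in the example is invented for demonstration.

```python
# Sketch of a two-stage sign-detection pipeline: stage 1 proposes bounding
# boxes on the tablet image, stage 2 classifies the crop inside each box.
# Both stages are passed in as callables so that trained models can be
# plugged in; the stubs below are placeholders, not real detectors.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) in pixels


@dataclass
class Detection:
    box: Box
    label: str   # cuneiform sign name
    score: float # classifier confidence


def detect_signs(
    image: object,
    propose_boxes: Callable[[object], List[Box]],
    classify_crop: Callable[[object, Box], Tuple[str, float]],
) -> List[Detection]:
    """Run stage 1 (localization) then stage 2 (classification) per box."""
    detections = []
    for box in propose_boxes(image):
        label, score = classify_crop(image, box)
        detections.append(Detection(box, label, score))
    return detections


if __name__ == "__main__":
    # Toy usage with stub models standing in for trained networks.
    fake_image = [[0] * 100 for _ in range(100)]
    stub_proposer = lambda img: [(10, 10, 20, 20), (40, 15, 18, 22)]
    stub_classifier = lambda img, box: ("AN", 0.91)  # hypothetical label
    for d in detect_signs(fake_image, stub_proposer, stub_classifier):
        print(d.label, d.box, d.score)
```

The second benchmarked method (a single object detection model) would collapse both stages into one network that emits boxes and class labels jointly; the dataset's many classes and skewed class distribution affect the classifier stage in either design.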