Hendrik Hameeuw, Katrien De Graef, Gustav Ryberg Smidt, Anne Goddeeris, Timo Homburg, Krishna Kumar Thirukokaranam Chandrasekar
it - Information Technology, published 2024-01-02. DOI: 10.1515/itit-2023-0063
Preparing multi-layered visualisations of Old Babylonian cuneiform tablets for a machine learning OCR training model towards automated sign recognition
Abstract: In the framework of the CUNE-IIIF-ORM project, the aim is to train an Artificial Intelligence Optical Character Recognition (AI-OCR) model that can automatically locate and identify cuneiform signs on photorealistic representations of Old Babylonian texts (c. 2000–1600 B.C.E.). To train the model, c. 200 documentary clay tablets have been selected. They are manually annotated by specialist cuneiformists on a set of 12 still raster images generated from interactive Multi-Light Reflectance images. This image set includes visualisations with varying light angles as well as simplifications based on the depth information of the signs impressed in the surface. In the Cuneur Cuneiform Annotator, a GitLab-based web application, the identified cuneiform signs are annotated with polygons and enriched with metadata. This methodology builds a high-quality annotated training corpus of approximately 20,000 cropped signs (i.e. 240,000 visualisations), all with their Unicode codepoint and conventional sign name. It will act as a multi-layered core dataset for the further development and fine-tuning of a machine learning OCR model for the Old Babylonian cuneiform script. This paper discusses how the physical nature of hand-inscribed Old Babylonian documentary clay tablets challenges the annotation and metadata-enrichment tasks, and how these challenges have been addressed within the CUNE-IIIF-ORM project to build an effective corpus for training a machine learning OCR model.

ACM CCS: Applied computing → Document management and text processing → Document capture → Optical character recognition; Applied computing → Arts and humanities → Language translation.
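The abstract describes polygon annotations enriched with a Unicode codepoint and a conventional sign name, plus the corpus arithmetic (each of ~20,000 signs appears on 12 visualisations). A minimal sketch of what one such annotation record might look like is given below; all field names and the example values are illustrative assumptions, not the project's actual schema.

```python
# Hypothetical sketch of one annotation record, as described in the abstract:
# a cuneiform sign outlined with a polygon on one of the 12 still raster
# visualisations, enriched with its Unicode codepoint and conventional name.
# Field names are illustrative assumptions, not the CUNE-IIIF-ORM schema.
annotation = {
    "tablet_id": "OB_tablet_001",         # one of the c. 200 selected tablets
    "visualisation": "light_angle_045",   # one of the 12 raster images
    "polygon": [(120, 88), (165, 90), (168, 130), (118, 127)],  # pixel coords
    "sign_name": "AN",                    # conventional sign name
    "codepoint": "U+1202D",               # Unicode codepoint (CUNEIFORM SIGN AN)
}

# Corpus arithmetic from the abstract: ~20,000 cropped signs, each rendered
# on 12 visualisations, yields ~240,000 annotated sign images in total.
signs = 20_000
visualisations_per_sign = 12
total_images = signs * visualisations_per_sign
print(total_images)  # 240000
```

Keeping the polygon in image-pixel coordinates per visualisation lets the same sign identity (codepoint, name) be shared across all 12 renderings of a tablet, which is what makes the dataset "multi-layered".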