End-to-end entity extraction from OCRed texts using summarization models
Pedro A. Villa-García, Raúl Alonso-Calvo, Miguel García-Remesal
Neural Computing and Applications, published 2024-09-19
DOI: 10.1007/s00521-024-10422-9
Citations: 0
Abstract
A novel methodology is introduced for extracting entities from noisy scanned documents by using end-to-end data and reformulating the entity extraction task as a text summarization problem. This approach offers two significant advantages over traditional entity extraction methods while maintaining comparable performance. First, it utilizes preexisting data to construct datasets, thereby eliminating the need for labor-intensive annotation procedures. Second, it employs multitask learning, enabling the training of a model via a single dataset. To evaluate our approach against state-of-the-art methods, we adapted three commonly used datasets, namely, Conference on Natural Language Learning (CoNLL++), few-shot named entity recognition (Few-NERD), and WikiNEuRal domain adaptation (WikiNEuRal + DA), to the format required by our methodology. We subsequently fine-tuned four sequence-to-sequence models: text-to-text transfer transformer (T5), fine-tuned language net T5 (FLAN-T5), bidirectional autoregressive transformer (BART), and pretraining with extracted gap sentences for abstractive summarization sequence-to-sequence models (PEGASUS). The results indicate that, in the absence of optical character recognition (OCR) noise, the BART model performs comparably to state-of-the-art methods. Furthermore, the performance degradation was limited to 3.49–5.23% when 39–62% of the sentences contained OCR noise. This performance is significantly superior to that of previous studies, which reported a 10–20% decrease in the F1 score with texts that had a 20% OCR error rate. Our experimental results demonstrate that a single model trained via our methodology can reliably extract entities from noisy OCRed texts, unlike existing state-of-the-art approaches, which require separate models for correcting OCR errors and extracting entities.
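The abstract describes adapting token-labeled NER datasets (CoNLL++, Few-NERD, WikiNEuRal + DA) into the input/output text pairs that a sequence-to-sequence summarization model expects. A minimal sketch of that conversion step is shown below; the exact target-string layout used in the paper is not specified here, so the "TYPE: entity" format and the helper name `bio_to_seq2seq` are illustrative assumptions.

```python
# Hypothetical sketch: convert one BIO-tagged NER example into a
# (source, target) text pair for seq2seq fine-tuning. The target
# format ("TYPE: entity; TYPE: entity") is an assumption, not the
# paper's actual scheme.

def bio_to_seq2seq(tokens, tags):
    """Turn parallel token/BIO-tag lists into (source, target) strings."""
    entities = []  # collected (entity type, surface form) pairs
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # a new entity starts; flush any entity in progress
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            # continuation of the current entity
            current_tokens.append(token)
        else:
            # O tag (or inconsistent I- tag) ends the current entity
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type:  # flush a sentence-final entity
        entities.append((current_type, " ".join(current_tokens)))

    source = " ".join(tokens)
    target = "; ".join(f"{etype}: {text}" for etype, text in entities)
    return source, target

src, tgt = bio_to_seq2seq(
    ["Ada", "Lovelace", "worked", "in", "London", "."],
    ["B-PER", "I-PER", "O", "O", "B-LOC", "O"],
)
# src → "Ada Lovelace worked in London ."
# tgt → "PER: Ada Lovelace; LOC: London"
```

Pairs produced this way can be fed to any encoder-decoder model (T5, FLAN-T5, BART, PEGASUS) as a text-to-text task, which is what lets a single summarization-style model replace a dedicated token-classification head.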