Deriving Symbol Dependent Edit Weights for Text Correction: The Use of Error Dictionaries

Christoph Ringlstetter, Ulrich Reffle, Annette Gotscharek, K. Schulz
{"title":"为文本纠错派生与符号相关的编辑权重——错误字典的使用","authors":"Christoph Ringlstetter, Ulrich Reffle, Annette Gotscharek, K. Schulz","doi":"10.1109/ICDAR.2007.99","DOIUrl":null,"url":null,"abstract":"Most systems for correcting errors in texts make use of specific word distance measures such as the Levenshtein distance. In many experiments it has been shown that correction accuracy is improved when using edit weights that depend on the particular symbols of the edit operation. However, most proposed approaches so far rely on high amounts of training data where errors and their corrections are collected. In practice, the preparation of suitable ground truth data is often too costly, which means that uniform edit costs are used. In this paper we evaluate approaches for deriving symbol dependent edit weights that do not need any ground truth training data, comparing them with methods based on ground truth training. We suggest a new approach where special error dictionaries are used to estimate weights. The method is simple and very efficient, needing one pass of the document to be corrected. Our experiments with different OCR systems and textual data show that the method consistently improves correction accuracy in a significant way, often leading to results comparable to those achieved with ground truth training.","PeriodicalId":279268,"journal":{"name":"Ninth International Conference on Document Analysis and Recognition (ICDAR 2007)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Deriving Symbol Dependent Edit Weights for Text Correction_The Use of Error Dictionaries\",\"authors\":\"Christoph Ringlstetter, Ulrich Reffle, Annette Gotscharek, K. Schulz\",\"doi\":\"10.1109/ICDAR.2007.99\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most systems for correcting errors in texts make use of specific word distance measures such as the Levenshtein distance. In many experiments it has been shown that correction accuracy is improved when using edit weights that depend on the particular symbols of the edit operation. However, most proposed approaches so far rely on high amounts of training data where errors and their corrections are collected. In practice, the preparation of suitable ground truth data is often too costly, which means that uniform edit costs are used. In this paper we evaluate approaches for deriving symbol dependent edit weights that do not need any ground truth training data, comparing them with methods based on ground truth training. We suggest a new approach where special error dictionaries are used to estimate weights. The method is simple and very efficient, needing one pass of the document to be corrected. 
Our experiments with different OCR systems and textual data show that the method consistently improves correction accuracy in a significant way, often leading to results comparable to those achieved with ground truth training.\",\"PeriodicalId\":279268,\"journal\":{\"name\":\"Ninth International Conference on Document Analysis and Recognition (ICDAR 2007)\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-09-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ninth International Conference on Document Analysis and Recognition (ICDAR 2007)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDAR.2007.99\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ninth International Conference on Document Analysis and Recognition (ICDAR 2007)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDAR.2007.99","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Most systems for correcting errors in texts make use of word distance measures such as the Levenshtein distance. Many experiments have shown that correction accuracy improves when the edit weights depend on the particular symbols involved in each edit operation. However, most approaches proposed so far rely on large amounts of training data in which errors and their corrections are collected. In practice, preparing suitable ground truth data is often too costly, so uniform edit costs are used instead. In this paper we evaluate approaches for deriving symbol dependent edit weights that do not need any ground truth training data, comparing them with methods based on ground truth training. We suggest a new approach in which special error dictionaries are used to estimate the weights. The method is simple and very efficient, requiring only a single pass over the document to be corrected. Our experiments with different OCR systems and textual data show that the method consistently and significantly improves correction accuracy, often leading to results comparable to those achieved with ground truth training.
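
The abstract describes two ingredients that are easy to illustrate: a Levenshtein distance whose edit operations carry symbol-dependent costs, and weights estimated from confusion frequencies (in the paper, gathered in one pass by matching the document against an error dictionary). The sketch below is a hypothetical Python rendering under those assumptions; the function names, the (observed, intended) key convention, and the negative-log cost scheme are illustrative choices, not taken from the paper.

```python
from math import log

def weighted_levenshtein(s, t, weights, default=1.0):
    """Cost of transforming OCR output s into candidate word t.

    `weights` maps edit operations, written as (observed, intended)
    pairs -- (a, b) substitution, (a, "") deletion, ("", b) insertion --
    to costs; operations not listed fall back to a uniform default.
    """
    m, n = len(s), len(t)
    # d[i][j] = cheapest way to turn s[:i] into t[:j]
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):  # first column: delete all of s
        d[i][0] = d[i - 1][0] + weights.get((s[i - 1], ""), default)
    for j in range(1, n + 1):  # first row: insert all of t
        d[0][j] = d[0][j - 1] + weights.get(("", t[j - 1]), default)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if s[i - 1] == t[j - 1] else \
                weights.get((s[i - 1], t[j - 1]), default)
            d[i][j] = min(
                d[i - 1][j] + weights.get((s[i - 1], ""), default),  # deletion
                d[i][j - 1] + weights.get(("", t[j - 1]), default),  # insertion
                d[i - 1][j - 1] + sub,                               # match/substitution
            )
    return d[m][n]

def weights_from_counts(counts, default_count=0.5):
    """Turn confusion frequencies (e.g. harvested by matching document
    tokens against an error dictionary) into edit costs in (0, 1].
    Frequent confusions become cheap; the -log scheme is illustrative.
    """
    total = sum(counts.values()) + default_count
    default = -log(default_count / total)  # cost assigned to unseen operations
    return {pair: -log(c / total) / default for pair, c in counts.items()}

# Hypothetical confusion counts for common OCR errors.
counts = {("1", "l"): 40, ("0", "O"): 40, ("c", "e"): 20}
w = weights_from_counts(counts)
print(weighted_levenshtein("c1ean", "clean", w))  # ~0.17: a frequent confusion
print(weighted_levenshtein("brown", "frown", w))  # 1.0: unseen substitution
```

With weights like these, a correction candidate that differs from the OCR token only by a frequent confusion ranks well ahead of candidates at the same unweighted edit distance, which is the effect the symbol-dependent weights are meant to produce.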