Yongzhi Li, Pengle Zhang, Meng Sun, Jin Huang, Ruhan He
Virtual Reality Intelligent Hardware, Volume 6, Issue 5 (October 2024), Pages 408-420. DOI: 10.1016/j.vrih.2024.06.003
Pre-training transformer with dual-branch context content module for table detection in document images
Background
Document images such as statistical reports and scientific journals are widely used in information technology. Accurate detection of table areas in document images is an essential prerequisite for tasks such as information extraction. However, because of the diversity in the shapes and sizes of tables, existing table detection methods adapted from general object detection algorithms have not yet achieved satisfactory results. Incorrect detection results might lead to the loss of critical information.
Methods
Therefore, we propose a novel end-to-end trainable deep network combined with a self-supervised pretraining transformer for feature extraction to minimize incorrect detections. To better deal with table areas of different shapes and sizes, we added a dual-branch context content attention module (DCCAM) to high-dimensional features to extract context content information, thereby enhancing the network's ability to learn shape features. For feature fusion at different scales, we replaced the original 3×3 convolution with a multilayer residual module, which contains enhanced gradient flow information to improve the feature representation and extraction capability.
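The two components described above can be illustrated with a minimal PyTorch sketch. This is a hypothetical reconstruction, not the authors' released code: the exact branch designs of the DCCAM and the layer counts of the multilayer residual module are assumptions. The sketch pairs a context branch (global pooling producing channel weights) with a content branch (a spatial attention map), and replaces a plain 3×3 convolution with a residual stack whose identity shortcut preserves gradient flow.

```python
import torch
import torch.nn as nn


class DualBranchContextContentAttention(nn.Module):
    """Hypothetical DCCAM-style sketch: a context branch that learns
    per-channel weights from globally pooled features, and a content
    branch that learns a per-pixel spatial attention map."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Context branch: squeeze spatial dims, then re-weight channels.
        self.context = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Content branch: a single-channel spatial attention map.
        self.content = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Apply channel (context) and spatial (content) attention jointly.
        return x * self.context(x) * self.content(x)


class MultilayerResidualBlock(nn.Module):
    """Sketch of a multilayer residual module standing in for a single
    3x3 convolution during multi-scale feature fusion; the identity
    shortcut keeps gradient flow strong."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.body(x))
```

Both modules are shape-preserving, so they can be dropped into an FPN-style fusion path without changing the surrounding tensor dimensions.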
Results
We evaluated our method on public document datasets and compared it with previous approaches; it achieved state-of-the-art results on evaluation metrics such as recall and F1-score. Code: https://github.com/YongZ-Lee/TD-DCCAM