Latest publications from the 2019 International Conference on Document Analysis and Recognition (ICDAR)

ICDAR 2019 Robust Reading Challenge on Reading Chinese Text on Signboard
2019 International Conference on Document Analysis and Recognition (ICDAR) Pub Date : 2019-09-01 DOI: 10.1109/ICDAR.2019.00253
Xi Liu, Rui Zhang, Yongsheng Zhou, Qianyi Jiang, Qi Song, Nan Li, Kai Zhou, Lei Wang, Dong Wang, Minghui Liao, Mingkun Yang, X. Bai, Baoguang Shi, Dimosthenis Karatzas, Shijian Lu, C. V. Jawahar
Abstract: Chinese scene text reading is one of the most challenging problems in computer vision and has attracted great interest. Unlike English text, Chinese has more than 6,000 commonly used characters, which can be arranged in various layouts and rendered in numerous fonts. Chinese signboards in street view are a good source of Chinese scene text images, since they exhibit diverse backgrounds, fonts, and layouts. We organized a competition called ICDAR2019-ReCTS, which focuses on reading Chinese text on signboards. This report presents the final results of the competition. A large-scale dataset of 25,000 annotated signboard images, in which all text lines and characters are annotated with locations and transcriptions, was released. Four tasks were set up: character recognition, text line recognition, text line detection, and end-to-end recognition. In addition, to address the ambiguity of Chinese text, we proposed a multi-ground-truth (multi-GT) evaluation method to make evaluation fairer. The competition ran from March 1, 2019 to April 30, 2019, and 262 submissions from 46 teams were received. Most participants came from universities, research institutes, and tech companies in China, with others from the United States, Australia, Singapore, and Korea. 21 teams submitted results for Task 1, 23 for Task 2, 24 for Task 3, and 13 for Task 4. The official website for the competition is http://rrc.cvc.uab.es/?ch=12.
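The multi-GT idea can be sketched in a few lines: a predicted transcription counts as correct if it matches any of the acceptable ground truths for that region. The exact matching rule used by the competition is not specified above, so exact string equality here is an illustrative assumption:

```python
def multi_gt_correct(prediction, ground_truths):
    """A prediction is counted correct if it matches ANY of the acceptable
    ground-truth transcriptions for the region (exact match assumed)."""
    return any(prediction == gt for gt in ground_truths)

def multi_gt_accuracy(predictions, gt_lists):
    """Fraction of predictions that match at least one of their ground truths."""
    correct = sum(multi_gt_correct(p, gts) for p, gts in zip(predictions, gt_lists))
    return correct / len(predictions)
```

With two acceptable transcriptions per region, an answer matching either one is scored as correct, which is what makes the evaluation fairer for ambiguous Chinese text.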
Citations: 82
CRNN Based Jersey-Bib Number/Text Recognition in Sports and Marathon Images
2019 International Conference on Document Analysis and Recognition (ICDAR) Pub Date : 2019-09-01 DOI: 10.1109/ICDAR.2019.00186
Sauradip Nag, Raghavendra Ramachandra, P. Shivakumara, U. Pal, Tong Lu, Mohan S. Kankanhalli
Abstract: The primary challenge in tracing participants in sports and marathon videos or images is to detect and localize the jersey/bib number, which may appear in different regions of an outfit captured under cluttered environmental conditions. In this work, we propose a new framework based on detecting human body parts so that both the jersey/bib number and text are localized reliably. The proposed method first detects and localizes each person in a given image using a Single Shot Multibox Detector (SSD). Next, the body parts that generally contain a bib number or text region, namely the torso, left thigh, and right thigh, are automatically extracted. These detected parts are processed individually to detect the jersey/bib number or text using a deep two-channel CNN with a novel adaptive weighting loss function. Finally, the detected text is cropped out and fed to a CNN-RNN based deep model (CRNN) for recognition. Extensive experiments are carried out on four datasets, including both benchmark datasets and a new dataset. The performance of the proposed method is compared with state-of-the-art methods on all four datasets, showing improved performance throughout.
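The four-stage flow described above (person detection, body-part extraction, text detection, CRNN recognition) can be sketched as a pipeline of pluggable stages. The stage implementations passed in here are hypothetical placeholders, not the paper's models:

```python
def recognize_bib(image, detect_person, extract_parts, detect_text, recognize_text):
    """Hypothetical four-stage bib-recognition pipeline mirroring the paper's
    flow: each person box yields body parts (torso/thighs), each part yields
    candidate text regions, and each region is passed to a recognizer."""
    results = []
    for person_box in detect_person(image):
        for part in extract_parts(image, person_box):
            for text_region in detect_text(part):
                results.append(recognize_text(text_region))
    return results
```

Structuring the method this way makes each stage independently replaceable, e.g. swapping SSD for another detector without touching the recognizer.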
Citations: 15
HoughNet: Neural Network Architecture for Vanishing Points Detection
2019 International Conference on Document Analysis and Recognition (ICDAR) Pub Date : 2019-09-01 DOI: 10.1109/ICDAR.2019.00140
A. Sheshkus, A. Ingacheva, V. Arlazarov, D. Nikolaev
Abstract: In this paper we introduce a novel neural network architecture based on a Fast Hough Transform layer. A layer of this type allows our network to accumulate features along linear regions spanning the entire image instead of local areas. We demonstrate its potential by solving the problem of vanishing-point detection in document images. This problem occurs when dealing with camera shots of documents in uncontrolled conditions, where the document image can suffer several distortions including projective transforms. To train our model, we use the MIDV-500 dataset and report testing results. The strong generalization ability of the suggested method is demonstrated by applying it to the entirely different ICDAR 2011 dewarping contest dataset. Previously published papers on this dataset measured the quality of vanishing-point detection by counting correctly recognized words with the open-source OCR engine Tesseract. We reproduce this experiment and show that our method outperforms the state-of-the-art result.
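A brute-force analogue of the accumulation a Fast Hough Transform layer performs for mostly-vertical lines is shown below. The real FHT reuses partial sums dyadically to reach O(hw log h); this O(hw · shifts) sketch only illustrates what is being accumulated:

```python
import numpy as np

def hough_vertical(img, max_shift):
    """accum[s + max_shift, x0] is the sum of pixel values along the straight
    line from (0, x0) at the top row to (h-1, x0 + s) at the bottom row.
    Peaks in this accumulator correspond to dominant near-vertical lines."""
    h, w = img.shape
    accum = np.zeros((2 * max_shift + 1, w))
    for s in range(-max_shift, max_shift + 1):
        for x0 in range(w):
            for y in range(h):
                x = x0 + round(s * y / (h - 1))
                if 0 <= x < w:
                    accum[s + max_shift, x0] += img[y, x]
    return accum
```

A perfectly vertical stroke produces its maximum at shift 0 in the column of the stroke; intersecting the peaks of such accumulators is what lets a network reason about vanishing points from line evidence.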
Citations: 25
Towards Automated Evaluation of Handwritten Assessments
2019 International Conference on Document Analysis and Recognition (ICDAR) Pub Date : 2019-09-01 DOI: 10.1109/ICDAR.2019.00075
Vijay Rowtula, S. Oota, C. V. Jawahar
Abstract: Automated evaluation of handwritten answers has long been a challenging problem in scaling the education system. Speeding up evaluation remains the major bottleneck in enhancing instructor throughput. This paper describes an effective method for automatically evaluating short descriptive handwritten answers from digitized images. Our goal is to evaluate a student's handwritten answer by assigning a score comparable to human-assigned scores. Existing work in this domain has mainly focused on evaluating handwritten essays with handcrafted, non-semantic features. Our contribution is two-fold: 1) we model this problem as a self-supervised, feature-based classification problem that can fine-tune itself for each question without any explicit supervision; 2) we introduce semantic analysis for auto-evaluation in handwritten text space, combining Information Retrieval and Extraction (IRE) and Natural Language Processing (NLP) methods to derive a set of useful features. We tested our method on three datasets created from various domains, with the help of students of different age groups. Experiments show that our method performs comparably to human evaluators.
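As a toy stand-in for the IRE/NLP features used for scoring, a bag-of-words cosine similarity between a student answer and a reference answer already illustrates the semantic-matching idea; the paper's actual feature set is richer, so treat this as an assumption-laden sketch:

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Cosine similarity between two bag-of-words texts - a toy stand-in for
    the semantic features derived via IRE/NLP in the paper."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def score_answer(student_text, reference_text, max_marks):
    """Scale similarity to a mark out of max_marks (hypothetical scoring rule)."""
    return round(max_marks * cosine_sim(student_text, reference_text), 1)
```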
Citations: 3
Recurrent Neural Network Approach for Table Field Extraction in Business Documents
2019 International Conference on Document Analysis and Recognition (ICDAR) Pub Date : 2019-09-01 DOI: 10.1109/ICDAR.2019.00211
Clément Sage, A. Aussem, H. Elghazel, V. Eglin, Jérémy Espinas
Abstract: Efficiently extracting information from documents issued by their partners is crucial for companies that face huge daily document flows. Tables, in particular, contain the most valuable information in business documents, yet their contents are challenging to parse automatically, since tables from industrial contexts may have complex and ambiguous physical structure. Bypassing structure recognition, we propose a generic method for end-to-end table field extraction that starts from the sequence of document tokens segmented by an OCR engine and directly tags each token with one of the possible field types. Like state-of-the-art methods for non-tabular field extraction, our approach uses a token-level recurrent neural network combining spatial and textual features. We empirically assess the effectiveness of recurrent connections for this task by comparing our method with a baseline feedforward network that has local context knowledge added to its inputs. We train and evaluate both approaches on a dataset of 28,570 purchase orders, retrieving the ID numbers and quantities of the ordered products. Our method outperforms the baseline, with a micro F1 score of 0.821 on unknown document layouts compared to 0.764.
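The reported comparison uses token-level micro F1. Assuming one tag per token with an "O" tag for non-field tokens (the exact tagging scheme is not given above), the metric can be computed as:

```python
def micro_f1(gold, pred, outside="O"):
    """Micro-averaged F1 over field tokens, ignoring the 'outside' tag.
    tp: token tagged with the correct field type; fp: token given a field
    tag it should not have; fn: field token missed or mistagged."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p != outside)
    fp = sum(1 for g, p in zip(gold, pred) if p != outside and g != p)
    fn = sum(1 for g, p in zip(gold, pred) if g != outside and g != p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Micro averaging pools all field tokens before computing precision and recall, so frequent field types (here, product IDs and quantities) dominate the score.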
Citations: 18
Indiscapes: Instance Segmentation Networks for Layout Parsing of Historical Indic Manuscripts
2019 International Conference on Document Analysis and Recognition (ICDAR) Pub Date : 2019-09-01 DOI: 10.1109/ICDAR.2019.00164
A. Prusty, Sowmya Aitha, Abhishek Trivedi, Ravi Kiran Sarvadevabhatla
Abstract: Historical palm-leaf manuscripts and early paper documents from the Indian subcontinent form an important part of the world's literary and cultural heritage. Despite their importance, large-scale annotated Indic manuscript image datasets do not exist. To address this deficiency, we introduce Indiscapes, the first dataset with multi-regional layout annotations for historical Indic manuscripts. To address the challenges of large script diversity and dense, irregular layout elements (e.g. text lines, pictures, multiple documents per image), we adapt a fully convolutional deep neural network architecture for fully automatic, instance-level spatial layout parsing of manuscript images. We demonstrate the effectiveness of the proposed architecture on images from the Indiscapes dataset. For annotation flexibility, and keeping the non-technical background of domain experts in mind, we also contribute a custom web-based GUI annotation tool and a dashboard-style analytics portal. Overall, our contributions set the stage for downstream applications such as OCR and word spotting in historical Indic manuscripts at scale.
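Instance-level layout parsing is typically scored by intersection-over-union between predicted and ground-truth regions. A minimal sketch with axis-aligned boxes (real evaluations use polygon or pixel masks, which boxes only approximate) looks like:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) - the building
    block for matching predicted layout instances to ground truth."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

A predicted text-line instance is usually counted as a true positive when its IoU with a ground-truth instance exceeds a fixed threshold such as 0.5.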
Citations: 18
Training-Free and Segmentation-Free Word Spotting using Feature Matching and Query Expansion
2019 International Conference on Document Analysis and Recognition (ICDAR) Pub Date : 2019-09-01 DOI: 10.1109/ICDAR.2019.00209
Ekta Vats, A. Hast, A. Fornés
Abstract: Historical handwritten text recognition is an interesting yet challenging problem. In recent times, deep learning based methods have achieved significant performance in handwritten text recognition. However, handwriting recognition using deep learning needs training data, and often the text must first be segmented into lines (or even words). These limitations constrain the application of HTR techniques to document collections where training data or segmented words are not always available. This paper therefore proposes a training-free and segmentation-free word spotting approach that can be applied in unconstrained scenarios. The proposed framework is based on document query word expansion and a relaxed feature matching algorithm, which can easily be parallelised. Since handwritten words possess distinct shapes and characteristics, this work uses a combination of different keypoint detectors and Fourier-based descriptors to obtain a sufficient degree of relaxed matching. The effectiveness of the proposed method is empirically evaluated on well-known benchmark datasets using standard evaluation measures. The use of informative features along with query expansion contributes significantly to the performance of the proposed method.
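One common way to realise relaxed keypoint matching is nearest-neighbour descriptor matching with Lowe's ratio test; the authors' exact matcher and Fourier-based descriptors may differ, so treat this as an illustrative sketch:

```python
import numpy as np

def relaxed_match(desc_q, desc_t, ratio=0.8):
    """Match query descriptors to target descriptors: accept a match only if
    the nearest target is clearly closer than the second nearest (ratio test).
    Requires at least two target descriptors."""
    matches = []
    for i, d in enumerate(desc_q):
        dists = np.linalg.norm(desc_t - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

Because each query descriptor is matched independently, the loop parallelises trivially across query keypoints, which is the property the abstract highlights.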
Citations: 12
CNN Based Binarization of MultiSpectral Document Images
2019 International Conference on Document Analysis and Recognition (ICDAR) Pub Date : 2019-09-01 DOI: 10.1109/ICDAR.2019.00091
Fabian Hollaus, Simon Brenner, Robert Sablatnig
Abstract: This work is concerned with the binarization of ancient manuscripts that have been imaged with a MultiSpectral Imaging (MSI) system. We introduce a new dataset for this purpose, composed of 130 multispectral images taken from two medieval manuscripts. We propose an end-to-end Convolutional Neural Network (CNN) for segmenting the historical writing. The CNN-based method outperforms two state-of-the-art methods specifically designed for multispectral document images. On an earlier, smaller database, its performance is slightly worse than the two state-of-the-art techniques.
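For intuition, a non-learned baseline of the kind such a CNN is meant to beat simply averages the spectral bands and thresholds the result, exploiting the fact that ink is darker than parchment. This is not the paper's method, only a reference point:

```python
import numpy as np

def binarize_msi(cube, k=1.0):
    """Baseline binarization of a multispectral cube of shape (bands, H, W):
    average the bands, then mark pixels darker than mean - k*std as ink.
    Returns a uint8 mask with 1 = ink, 0 = background."""
    mean_img = cube.mean(axis=0)
    t = mean_img.mean() - k * mean_img.std()
    return (mean_img < t).astype(np.uint8)
```

A per-pixel rule like this ignores spatial context entirely, which is exactly what an end-to-end CNN adds.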
Citations: 6
Exploration of CNN Features for Online Handwriting Recognition
2019 International Conference on Document Analysis and Recognition (ICDAR) Pub Date : 2019-09-01 DOI: 10.1109/ICDAR.2019.00138
S. Mandal, S. Prasanna, S. Sundaram
Abstract: Recently, the convolutional neural network (CNN) has demonstrated a powerful ability to learn features, particularly from image data. In this work, its capability for feature learning in online handwriting is explored by constructing various CNN architectures. The developed CNNs process online handwriting directly, unlike existing works that convert the handwriting into an image in order to utilize such architectures. The first convolution layer accepts the sequence of (x, y) coordinates along the trace of the character as input and outputs a convolved, filtered signal. Thereafter, via alternating convolution and Rectified Linear Unit layers arranged hierarchically, we obtain a set of deep features that can be employed for classification. We utilize the proposed CNN features to develop a Support Vector Machine (SVM) based character recognition system and an implicit-segmentation based large-vocabulary word recognition system employing a hidden Markov model (HMM) framework. To the best of our knowledge, this is the first work of its kind to apply a CNN directly to the (x, y) coordinates of online handwriting data. Experiments are carried out on two publicly available English online handwriting databases: the UNIPEN character and UNIPEN ICROW-03 word databases. The obtained results are promising compared with reported works employing point-based features.
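The first layer described above is an ordinary 1-D convolution applied directly to the (x, y) sequence, followed by a ReLU. A minimal valid-mode version, using a hypothetical hand-set kernel rather than learned weights:

```python
import numpy as np

def conv1d_xy(coords, kernels):
    """Valid-mode 1-D convolution over a pen trajectory.
    coords: (n, 2) array of (x, y) points; kernels: (m, k, 2) array of m
    kernels of width k mixing both channels.  Returns (n-k+1, m) features
    after a ReLU, mimicking the paper's first conv + ReLU stage."""
    n, _ = coords.shape
    k = kernels.shape[1]
    out = np.empty((n - k + 1, len(kernels)))
    for j, ker in enumerate(kernels):
        for t in range(n - k + 1):
            out[t, j] = np.sum(coords[t:t + k] * ker)
    return np.maximum(out, 0.0)
```

A finite-difference kernel on the x channel, for example, responds to horizontal pen velocity, the kind of local trajectory feature the learned kernels can pick up.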
Citations: 4
Fast Distributional Smoothing for Regularization in CTC Applied to Text Recognition
2019 International Conference on Document Analysis and Recognition (ICDAR) Pub Date : 2019-09-01 DOI: 10.1109/ICDAR.2019.00056
Ryohei Tanaka, Soichiro Ono, Akio Furuhata
Abstract: Many recent text recognition studies have achieved strong performance by applying a sequential-label prediction framework such as connectionist temporal classification (CTC). Meanwhile, regularization is known to be essential to avoid overfitting when training deep neural networks, and regularization techniques that allow semi-supervised learning have a greater impact than those that do not. Among widely researched single-label regularization techniques, virtual adversarial training (VAT) performs well by smoothing posterior distributions around training data points. However, VAT is almost exclusively applied to single-label prediction tasks rather than sequential-label prediction tasks, because the number of candidate label sequences grows exponentially with the sequence length, making it impractical to calculate posterior distributions and the divergence between them. Investigating this problem, we found an easily computable upper bound for this divergence. We propose fast distributional smoothing (FDS), which drastically reduces computational cost by minimizing this upper bound. FDS allows regularization at practical computational cost in both supervised and semi-supervised learning. An experiment under simple settings confirmed that upper-bound minimization decreases divergence. Experiments also show that FDS improves scene text recognition performance and enhances state-of-the-art regularization performance. Furthermore, FDS enables efficient semi-supervised learning in sequential-label prediction tasks and outperforms a conventional semi-supervised method.
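The abstract does not reproduce the paper's bound, but the computational point can be illustrated: for posterior sequences that factorise over frames, the sequence-level KL divergence equals the sum of cheap per-frame KLs, which is why a framewise quantity can serve as a computable surrogate. The paper derives its own bound for CTC; this is only the illustrative special case:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions (1-D arrays)."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def framewise_kl(post_p, post_q):
    """Sum of per-frame KL divergences between two framewise posterior
    sequences of shape (T, C).  Costs O(T*C), versus the exponentially many
    label sequences a naive sequence-level divergence would enumerate."""
    return sum(kl(p, q) for p, q in zip(post_p, post_q))
```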
Citations: 2