{"title":"HCCD:用于在不同退化条件下增强文档的手写相机捕获数据集","authors":"K.S. Koushik , Bipin Nair B J , N. Shobha Rani","doi":"10.1016/j.dib.2025.111849","DOIUrl":null,"url":null,"abstract":"<div><div>Enhancing degraded handwritten documents captured with smartphone cameras remains a significant challenge in document analysis. Although deep learning-based enhancement techniques have shown promise, the performance of deep learning models largely relies on the availability of meticulously labeled ground truth datasets. To address this gap, in this study, the Handwritten Camera-Captured Dataset (HCCD) is introduced to support document enhancement and recognition tasks specific to real-world scenarios. Unlike existing datasets, which are captured in controlled environments with scanners or smartphone cameras, HCCD features real-time, camera-captured handwritten documents exhibiting a range of natural degradations. The degradation issues encompass motion blur, shadow artifacts, and uneven lighting, which reflect challenges incurred in the real-life document digitization process.</div><div>In the proposed dataset, each handwritten document is paired with a high-quality enhanced image created through a combination of computer vision-based imaging techniques. The documents are in Roman script and were contributed by multiple individuals with varying handwriting styles. The dataset is valuable for machine learning/ deep learning-based training for image restoration, denoising, and OCR applications. Each sample is annotated with rich metadata for further targeted research, including degradation type, severity level, and writer-specific demographics.</div></div>","PeriodicalId":10973,"journal":{"name":"Data in Brief","volume":"61 ","pages":"Article 111849"},"PeriodicalIF":1.0000,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HCCD: A handwritten camera-captured dataset for document enhancement under varied degradation conditions\",\"authors\":\"K.S. Koushik , Bipin Nair B J , N. Shobha Rani\",\"doi\":\"10.1016/j.dib.2025.111849\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Enhancing degraded handwritten documents captured with smartphone cameras remains a significant challenge in document analysis. Although deep learning-based enhancement techniques have shown promise, the performance of deep learning models largely relies on the availability of meticulously labeled ground truth datasets. To address this gap, in this study, the Handwritten Camera-Captured Dataset (HCCD) is introduced to support document enhancement and recognition tasks specific to real-world scenarios. Unlike existing datasets, which are captured in controlled environments with scanners or smartphone cameras, HCCD features real-time, camera-captured handwritten documents exhibiting a range of natural degradations. The degradation issues encompass motion blur, shadow artifacts, and uneven lighting, which reflect challenges incurred in the real-life document digitization process.</div><div>In the proposed dataset, each handwritten document is paired with a high-quality enhanced image created through a combination of computer vision-based imaging techniques. The documents are in Roman script and were contributed by multiple individuals with varying handwriting styles. The dataset is valuable for machine learning/ deep learning-based training for image restoration, denoising, and OCR applications. 
Each sample is annotated with rich metadata for further targeted research, including degradation type, severity level, and writer-specific demographics.</div></div>\",\"PeriodicalId\":10973,\"journal\":{\"name\":\"Data in Brief\",\"volume\":\"61 \",\"pages\":\"Article 111849\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2025-07-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Data in Brief\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2352340925005761\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data in Brief","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2352340925005761","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
HCCD: A handwritten camera-captured dataset for document enhancement under varied degradation conditions
Enhancing degraded handwritten documents captured with smartphone cameras remains a significant challenge in document analysis. Although deep learning-based enhancement techniques have shown promise, their performance largely depends on the availability of meticulously labeled ground-truth datasets. To address this gap, this study introduces the Handwritten Camera-Captured Dataset (HCCD) to support document enhancement and recognition tasks in real-world scenarios. Unlike existing datasets, which are captured in controlled environments with scanners or smartphone cameras, HCCD comprises handwritten documents captured in real time with a camera and exhibiting a range of natural degradations. These degradations include motion blur, shadow artifacts, and uneven lighting, reflecting the challenges encountered in real-world document digitization.
In the proposed dataset, each handwritten document is paired with a high-quality enhanced image created through a combination of computer vision-based imaging techniques. The documents are written in Roman script and were contributed by multiple individuals with varying handwriting styles. The dataset is valuable for training machine learning and deep learning models for image restoration, denoising, and OCR applications. Each sample is annotated with rich metadata, including degradation type, severity level, and writer-specific demographics, to support further targeted research.
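To make the paired structure concrete, the sketch below shows one way such a dataset could be wrapped for restoration or denoising training in PyTorch. The directory layout (degraded/, enhanced/, metadata/), file extensions, and metadata field names used here are illustrative assumptions, not the published organization of HCCD.

```python
import json
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class HCCDPairs(Dataset):
    """Sketch of a loader for degraded/enhanced HCCD image pairs.

    The folder layout and metadata field names below are assumptions
    made for illustration; consult the dataset documentation for the
    actual structure.
    """

    def __init__(self, root, degradation_type=None, transform=None):
        self.root = Path(root)
        self.transform = transform
        self.items = []
        # Assumed layout: metadata/<id>.json, degraded/<id>.jpg, enhanced/<id>.jpg
        for meta_path in sorted((self.root / "metadata").glob("*.json")):
            meta = json.loads(meta_path.read_text())
            # Optionally keep only one degradation class, e.g. "motion_blur".
            if degradation_type and meta.get("degradation_type") != degradation_type:
                continue
            self.items.append((meta_path.stem, meta))

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        stem, meta = self.items[idx]
        degraded = Image.open(self.root / "degraded" / f"{stem}.jpg").convert("RGB")
        enhanced = Image.open(self.root / "enhanced" / f"{stem}.jpg").convert("RGB")
        if self.transform is not None:
            degraded = self.transform(degraded)
            enhanced = self.transform(enhanced)
        # Metadata (degradation type, severity, writer info) is returned so that
        # experiments can be targeted per degradation class or per writer.
        return degraded, enhanced, meta
```

Filtering by the assumed degradation_type field illustrates how the per-sample metadata could support targeted experiments, for example evaluating an enhancement model only on motion-blurred pages.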
Journal description:
Data in Brief provides a way for researchers to easily share and reuse each other's datasets by publishing data articles that:
- Thoroughly describe your data, facilitating reproducibility.
- Make your data, which is often buried in supplementary material, easier to find.
- Increase traffic towards associated research articles and data, leading to more citations.
- Open up doors for new collaborations.
Because you never know what data will be useful to someone else, Data in Brief welcomes submissions that describe data from all research areas.