Axial Super-Resolution in Optical Coherence Tomography Images via Spectrum-Based Self-Supervised Training

IF 4.2 | CAS Zone 2 (Computer Science) | JCR Q2: ENGINEERING, ELECTRICAL & ELECTRONIC
Zhengyang Xu;Yuting Gao;Xi Chen;Kan Lin;Linbo Liu;Yu-Cheng Chen
{"title":"基于光谱自监督训练的光学相干层析成像轴向超分辨率研究","authors":"Zhengyang Xu;Yuting Gao;Xi Chen;Kan Lin;Linbo Liu;Yu-Cheng Chen","doi":"10.1109/TCI.2025.3555134","DOIUrl":null,"url":null,"abstract":"High axial resolution in Optical Coherence Tomography (OCT) images is essential for accurately diagnosing skin conditions like psoriasis and keratoderma, where clear boundary delineation can reveal early disease markers. Existing deep learning super-resolution methods typically rely on intensity-based training, which only utilizes magnitude data from the OCT spectrum after Fourier transformation, limiting the reconstruction of fine boundary details. This study introduces a spectrum-based, self-supervised deep learning framework that leverages OCT spectral (fringe) data to improve axial resolution beyond system limits. By training the model directly on fringe data in a self-supervised manner, we achieve finer structural detail recovery. Evaluation metrics included Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and axial resolution estimation. Our framework yielded a 50% improvement in axial resolution, achieving 4.28 μm from 7.19 μm, along with PSNR gains of up to 3.37 dB and SSIM increases by 0.157, significantly enhancing boundary continuity and fine detail reconstruction. Our method surpasses intensity-based approaches in enhancing axial resolution and presents potential for iterative application to achieve even greater improvements. Significance: This framework advances OCT imaging, offering a promising, non-invasive tool for dermatological diagnostics.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"497-505"},"PeriodicalIF":4.2000,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Axial Super-Resolution in Optical Coherence Tomography Images via Spectrum-Based Self-Supervised Training\",\"authors\":\"Zhengyang Xu;Yuting Gao;Xi Chen;Kan Lin;Linbo Liu;Yu-Cheng Chen\",\"doi\":\"10.1109/TCI.2025.3555134\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"High axial resolution in Optical Coherence Tomography (OCT) images is essential for accurately diagnosing skin conditions like psoriasis and keratoderma, where clear boundary delineation can reveal early disease markers. Existing deep learning super-resolution methods typically rely on intensity-based training, which only utilizes magnitude data from the OCT spectrum after Fourier transformation, limiting the reconstruction of fine boundary details. This study introduces a spectrum-based, self-supervised deep learning framework that leverages OCT spectral (fringe) data to improve axial resolution beyond system limits. By training the model directly on fringe data in a self-supervised manner, we achieve finer structural detail recovery. Evaluation metrics included Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and axial resolution estimation. Our framework yielded a 50% improvement in axial resolution, achieving 4.28 μm from 7.19 μm, along with PSNR gains of up to 3.37 dB and SSIM increases by 0.157, significantly enhancing boundary continuity and fine detail reconstruction. Our method surpasses intensity-based approaches in enhancing axial resolution and presents potential for iterative application to achieve even greater improvements. 
Significance: This framework advances OCT imaging, offering a promising, non-invasive tool for dermatological diagnostics.\",\"PeriodicalId\":56022,\"journal\":{\"name\":\"IEEE Transactions on Computational Imaging\",\"volume\":\"11 \",\"pages\":\"497-505\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2025-03-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Computational Imaging\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10945416/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computational Imaging","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10945416/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0

Abstract

High axial resolution in Optical Coherence Tomography (OCT) images is essential for accurately diagnosing skin conditions such as psoriasis and keratoderma, where clear boundary delineation can reveal early disease markers. Existing deep learning super-resolution methods typically rely on intensity-based training, which uses only the magnitude data obtained from the OCT spectrum after Fourier transformation, limiting the reconstruction of fine boundary details. This study introduces a spectrum-based, self-supervised deep learning framework that leverages OCT spectral (fringe) data to improve axial resolution beyond the system limit. By training the model directly on fringe data in a self-supervised manner, we achieve finer structural detail recovery. Evaluation metrics included Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and axial resolution estimation. Our framework yielded a 50% improvement in axial resolution, from 7.19 μm to 4.28 μm, along with PSNR gains of up to 3.37 dB and SSIM increases of 0.157, significantly enhancing boundary continuity and fine-detail reconstruction. Our method surpasses intensity-based approaches in enhancing axial resolution and shows potential for iterative application to achieve even greater improvements. Significance: This framework advances OCT imaging, offering a promising, non-invasive tool for dermatological diagnostics.
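As background for the abstract above, the following minimal Python sketch (not the authors' implementation; the reflector depths, noise level, and all other signal parameters are invented for illustration) shows how a raw OCT spectral interferogram, i.e. the fringe data the proposed framework trains on, relates to the magnitude A-scan that intensity-based methods use after Fourier transformation, and how the reported PSNR and SSIM metrics can be computed with scikit-image.

```python
# Minimal illustrative sketch, not the paper's method: toy OCT fringe -> FFT ->
# magnitude A-scan, plus the PSNR/SSIM metrics mentioned in the abstract.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)

# Simulated spectral interferogram (fringe): a few reflectors at different
# depths modulate a Gaussian source spectrum with cosine fringes.
n_samples = 2048
k = np.linspace(-1.0, 1.0, n_samples)                 # normalized wavenumber axis
source = np.exp(-(k / 0.5) ** 2)                      # source spectral envelope
depths = [200, 210, 600]                              # hypothetical reflector depths (FFT bins)
fringe = source * sum(np.cos(2 * np.pi * d * (k + 1) / 2) for d in depths)
fringe += 0.01 * rng.standard_normal(n_samples)       # detector noise

# Intensity-based pipelines train on this magnitude A-scan; the complex phase
# of the FFT (part of the fringe information) is discarded at this step.
a_scan = np.abs(np.fft.fft(fringe))[: n_samples // 2]

# PSNR/SSIM as reported in the paper, applied here to toy intensity images.
reference = np.tile(a_scan, (64, 1))                  # pretend B-scan (64 A-lines)
degraded = reference + 5.0 * rng.standard_normal(reference.shape)
data_range = reference.max() - reference.min()
print("PSNR [dB]:", peak_signal_noise_ratio(reference, degraded, data_range=data_range))
print("SSIM     :", structural_similarity(reference, degraded, data_range=data_range))
```

The only point of the toy fringe is that taking the FFT magnitude discards part of the spectral information, which is the limitation the abstract attributes to intensity-based training and the motivation for training directly on fringe data.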
Source journal: IEEE Transactions on Computational Imaging (Mathematics, Computational Mathematics)
CiteScore: 8.20
Self-citation rate: 7.40%
Articles published: 59
Journal description: The IEEE Transactions on Computational Imaging publishes articles where computation plays an integral role in the image formation process. Papers cover all areas of computational imaging, ranging from fundamental theoretical methods to the latest innovative computational imaging system designs. Topics of interest include advanced algorithms and mathematical techniques, model-based data inversion, methods for image and signal recovery from sparse and incomplete data, techniques for non-traditional sensing of image data, methods for dynamic information acquisition and extraction from imaging sensors, software and hardware for efficient computation in imaging systems, and highly novel imaging system design.