Optimizing µCT resolution in tarsal bones: A comparative study of super-resolution models for trabecular bone analysis

Sascha Senck, Patrick Weinberger, Lukas Nepelius, Andreas Haghofer, Birgit Woegerer, Jonathan Glinz, Miroslav Yosifov, Lukas Behammer, Johann Kastner, Klemens Trieb, Elena Kranioti, Stephan Winkler
Tomography of Materials and Structures, Volume 8, Article 100063. Published 2025-04-04. DOI: 10.1016/j.tmater.2025.100063. Available at: https://www.sciencedirect.com/science/article/pii/S2949673X25000166

Abstract

Microcomputed tomography (µCT) is an essential tool for analyzing trabecular bone microarchitecture, yet its resolution is constrained by object size and acquisition time. To overcome these limitations, we implement a deep-learning-based super-resolution (SR) approach that enhances µCT image resolution while significantly reducing scan durations. Dry isolated tarsal bones (intermediate cuneiform) from 20 specimens were scanned using µCT at two resolutions: 80 µm voxel size (low resolution, LowRes) and 20 µm voxel size (high resolution, HiRes). Aligned LowRes and HiRes µCT data served as training data for SR reconstruction. In this study, we compare five SR models: 2D U-Net++, 3D SRCNN, 3D FSRCNN, 3D U-Net, and a modified 3D U-Net trained with a combined learned perceptual image patch similarity (LPIPS) and structural similarity (SSIM) loss function. The focus of this contribution is the application of these models to real µCT data rather than synthetically degraded images. Models were trained to learn volumetric representations for accurate restoration of trabecular bone microstructure. To assess SR image quality, we computed three image quality metrics (peak signal-to-noise ratio, SSIM, and LPIPS) and evaluated bone morphometric parameters, i.e. average trabecular thickness (Tb.Th.) and bone volume fraction (BV/TV), across 95 regions of interest (ROIs). RMSE was calculated for the LowRes data and each SR model relative to the HiRes data to quantify prediction accuracy. The results demonstrate that the 3D U-Net (LPIPS & SSIM) model achieves the highest reconstruction accuracy, yielding the lowest RMSE values (12.93 µm for Tb.Th. and 1.3 % for BV/TV) and outperforming all other SR models in our evaluation. Compared to standard low-resolution µCT, our approach reduces scan time from 58 min to 7 min per sample while preserving trabecular morphology with high fidelity.
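As an illustration only (this is not the authors' code), the error metrics reported above can be sketched in a few lines of NumPy; the function names and the toy ROI values below are hypothetical:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio between a HiRes reference and an SR volume."""
    mse = np.mean((np.asarray(ref, dtype=np.float64) - np.asarray(test, dtype=np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def rmse(ref_values, pred_values):
    """RMSE of a morphometric parameter (e.g. Tb.Th. or BV/TV) across ROIs."""
    ref = np.asarray(ref_values, dtype=np.float64)
    pred = np.asarray(pred_values, dtype=np.float64)
    return float(np.sqrt(np.mean((ref - pred) ** 2)))

# Toy example: Tb.Th. (µm) per ROI from HiRes ground truth vs. one SR model
tbth_hires = [180.0, 200.0, 190.0, 210.0, 175.0]
tbth_sr = [185.0, 195.0, 200.0, 205.0, 180.0]
print(rmse(tbth_hires, tbth_sr))  # ≈ 6.32 µm
```

In the study, such per-ROI RMSE values are what allow the 3D U-Net (LPIPS & SSIM) model to be ranked against the other SR models and against the raw LowRes data.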
These results demonstrate the effectiveness of perceptual-loss-based SR applied to real µCT data for morphological analysis, ensuring accurate trabecular reconstruction and mitigating the overestimation artifacts caused by LowRes imaging and partial volume effects. Integrating SR with real µCT scans offers a promising strategy to reduce scan time and improve throughput in bone imaging workflows. Future work will expand the training dataset to enhance model generalization across diverse bone structures and imaging conditions.
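The combined LPIPS-and-SSIM objective of the modified 3D U-Net can be read as a weighted sum of a perceptual term and a structural term. The sketch below is a minimal illustration, not the authors' implementation: it uses a simplified global SSIM (no sliding window), a placeholder callable in place of the real learned LPIPS network (e.g. the `lpips` PyTorch package), and an assumed weight `alpha`:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified global SSIM (no sliding window) for two volumes in [0, 1]."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def combined_loss(pred, target, lpips_fn, alpha=0.5):
    """Hypothetical weighted objective: alpha * LPIPS + (1 - alpha) * (1 - SSIM).

    `lpips_fn` stands in for a learned perceptual distance; `alpha` is an
    assumed weighting, not a value reported in the paper.
    """
    return alpha * lpips_fn(pred, target) + (1.0 - alpha) * (1.0 - ssim_global(pred, target))

# Usage with a dummy perceptual distance (placeholder, NOT real LPIPS)
dummy_lpips = lambda a, b: float(np.mean(np.abs(a - b)))
vol = np.random.default_rng(0).random((8, 8, 8))
print(combined_loss(vol, vol, dummy_lpips))  # → 0.0 for identical volumes
```

Minimizing `1 - SSIM` pushes the network toward structurally faithful trabeculae, while the perceptual term penalizes texture-level blur that plain voxelwise losses tend to leave behind.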