A Novel Attention-based Network for Geometry Reconstruction with Error Estimation from Medical Images.

Linchen Qian, Jiasong Chen, Linhai Ma, Timur Urakov, Weiyong Gu, Liang Liang
DOI: 10.1117/12.3038529
Journal: Proceedings of SPIE--the International Society for Optical Engineering, Vol. 13406
Published: 2025-02-01 (Epub 2025-04-11)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12439167/pdf/
Citations: 0

Abstract


Instance segmentation of anatomical structures from medical images can help enhance clinical outcomes such as disease diagnosis, surgical planning accuracy, and treatment efficacy. However, since segmentation masks often lack point-to-point correspondence between patients, instance segmentation masks may not be directly applicable for clinical studies that require measurement of medical parameters defined on an anatomical shape atlas. For such applications, meshes with correspondence between patients are preferred representations of object geometries. The conversion from segmentation masks to meshes can be error-prone due to segmentation artifacts, and therefore, it is desirable to directly obtain mesh representations of object geometries from medical image data, bypassing segmentation masks. In this work, we propose novel attention-based neural networks for geometry reconstruction and error estimation, which offer a direct pathway from medical images to high-quality mesh representations. We introduce an innovative attention-based feature extraction network and incorporate image self-attention and shape self-attention with cross-attention between them to capture consistent features. Based on the extracted features, we develop a geometry reconstruction network that deforms a mesh template to reconstruct the geometry of the object, which automatically ensures mesh correspondence. In addition, we design a shape error estimation network to evaluate the reliability of the output from geometry reconstruction, that is, to estimate the point-to-point error of a reconstructed geometry. We demonstrate our approach on the application of lumbar spine geometry reconstruction and compare our geometry reconstruction network with UNet++, UTNet, Swin UnetTR, SLT-Net, and nnUNet, using the Dice metric.
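The cross-attention between image and shape features mentioned in the abstract can be illustrated with a minimal single-head scaled dot-product attention sketch in NumPy. This is an illustrative sketch only, not the authors' architecture: the token shapes, the single-head formulation, and the omission of learned query/key/value projections are all simplifying assumptions.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(shape_tokens: np.ndarray, image_tokens: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product cross-attention (sketch).

    shape_tokens: (Ns, d) queries -- e.g. one feature vector per template
                  vertex (assumed layout, for illustration).
    image_tokens: (Ni, d) keys/values -- e.g. flattened image patch
                  features (assumed layout, for illustration).
    Returns (Ns, d): each shape token becomes an attention-weighted
    mixture of image features.
    """
    d = shape_tokens.shape[-1]
    scores = shape_tokens @ image_tokens.T / np.sqrt(d)  # (Ns, Ni)
    weights = softmax(scores, axis=-1)                   # each row sums to 1
    return weights @ image_tokens                        # (Ns, d)
```

In a trained model the queries, keys, and values would additionally pass through learned linear projections (and typically multiple heads); those are omitted here to keep the mechanism itself visible.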
In this application, our geometry reconstruction network achieves much higher accuracy and artifact-free segmentation results, and our shape error estimation network facilitates quality control for clinical use. The source code is available at https://github.com/linchenq/SPIE2025-GoemReconstruction-with-ShapeErrorNet.
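The two evaluation quantities the abstract refers to, the Dice metric and the point-to-point error of a reconstructed geometry, both reduce to a few lines. The mask and vertex-array formats below are illustrative assumptions; the definitions themselves are the standard ones.

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

def point_to_point_error(recon: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Per-vertex Euclidean distance between corresponding mesh vertices.

    This is well defined only because deforming a fixed template preserves
    vertex ordering, so vertex i of `recon` corresponds to vertex i of `gt`.
    recon, gt: (N, 3) vertex coordinate arrays (assumed format).
    """
    return np.linalg.norm(recon - gt, axis=-1)
```

Note that the point-to-point error requires mesh correspondence; for segmentation masks without correspondence, only overlap measures such as Dice are directly available, which is part of the abstract's motivation for mesh representations.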
