3. Automated Lenke classification for preoperative spine surgery by extracting anatomical landmarks from X-ray images using a deep learning approach

IF 2.5 Q3 Medicine
AliAsghar Mohammadi Nasrabadi PhD, Gemah Moammer FRCSC, John McPhee PhD
{"title":"3. 利用深度学习方法从x射线图像中提取解剖标志,实现脊柱术前手术的自动Lenke分类","authors":"AliAsghar Mohammadi Nasrabadi PhD ,&nbsp;Gemah Moammer FRCSC ,&nbsp;John McPhee PhD","doi":"10.1016/j.xnsj.2025.100697","DOIUrl":null,"url":null,"abstract":"<div><h3>BACKGROUND CONTEXT</h3><div>Spinopelvic assessment (eg, SS, PT, PI, LL, TK, CL, SVA, and Cobb angle) is vital for preoperative spinal surgery planning but is often measured manually, leading to variability. Recent AI and deep learning methods improve automation and accuracy. While promising, these techniques face challenges including computational complexity, small test datasets, lack of surgeon validation, and limited robustness to varied image conditions.</div></div><div><h3>PURPOSE</h3><div>To increase accuracy, reduce complexity, and provide robust preoperative X-ray analysis, we propose a novel, physics-informed deep learning method based on mathematical spinal relations. This approach aims to automatically calculate lateral and AP spinal parameters and promptly perform Lenke classification for each patient.</div></div><div><h3>STUDY DESIGN/SETTING</h3><div>N/A</div></div><div><h3>PATIENT SAMPLE</h3><div>We collected 3500 lateral and AP spine X-rays from Grand River Hospital (GRH) in Kitchener, ON, Canada, between 2016 and 2024, encompassing hip/spine implants, varied postures, and poor-contrast or partially visible spines. Image processing filters enhanced annotation accuracy, allowing landmark detection even in incomplete images. The dataset includes conventional and EOS systems, enabling thorough performance evaluation and robust landmark detection. Data was split into 80% training, 10% validation, and 10% testing.</div></div><div><h3>OUTCOME MEASURES</h3><div>This study focuses on the automatic extraction of spinopelvic parameters and anatomical landmarks from lateral and AP X-ray images, including SS, PT, PI, LL, SVA, femur center, sacrum end plate, iliac crest, L1–L5, T12–T1, C7–C2, apex, Cobb angle, LSRS, TSM, and CSRS. These measurements enable Lenke classification, identifying curve types (1–6), lumbar modifiers (A, B, C), and thoracic modifiers (–, N, +). To evaluate performance, we use relative root mean square error (RRMSE) to compare predicted values (PR) with manual annotations (MA), while intraclass correlation coefficient (ICC) measures reliability among surgeons, MA, and PR.</div></div><div><h3>METHODS</h3><div>Using our developed physics-informed deep learning method, spinopelvic parameters were extracted from X-ray images and validated against manual annotations. Landmarks were detected as objects with geometric constraints derived from mathematical spinal relations. Performance, compared to three senior spine surgeons, demonstrated excellent correlation, with intraclass correlation coefficients exceeding 0.9, surpassing previously reported literature values. Additionally, we developed an algorithm leveraging these parameters to automate Lenke classification, identifying curve type (1–6), lumbar modifier (A,B,C), and thoracic modifier (–,N,+), significantly aiding triage and preoperative planning.</div></div><div><h3>RESULTS</h3><div>We evaluated our model on the dataset, achieving final accuracies of 93.1% (SS), 94.6% (PT), 93.4% (Cobb angle), 91.2% (LL), and 94.5% (SVA). Patient classification attained 98.5% accuracy via our automated Lenke-based algorithm. Overall, the model surpasses literature-reported accuracy, demonstrating robust performance and reliability. 
To compare against surgeons, we used the intraclass correlation coefficient (ICC) with three surgeons’ annotations, revealing stronger consistency than previously reported.</div></div><div><h3>CONCLUSIONS</h3><div>Our physics-informed deep learning method reliably automates spinopelvic parameter extraction and classification, achieving high accuracy and robust surgeon-level consistency, thus advancing preoperative spinal planning and guiding AI innovations.</div></div><div><h3>FDA Device/Drug Status</h3><div>This abstract does not discuss or include any applicable devices or drugs.</div></div>","PeriodicalId":34622,"journal":{"name":"North American Spine Society Journal","volume":"22 ","pages":"Article 100697"},"PeriodicalIF":2.5000,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"3. Automated Lenke classification for preoperative spine surgery by extracting anatomical landmarks from X-ray images using a deep learning approach\",\"authors\":\"AliAsghar Mohammadi Nasrabadi PhD ,&nbsp;Gemah Moammer FRCSC ,&nbsp;John McPhee PhD\",\"doi\":\"10.1016/j.xnsj.2025.100697\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>BACKGROUND CONTEXT</h3><div>Spinopelvic assessment (eg, SS, PT, PI, LL, TK, CL, SVA, and Cobb angle) is vital for preoperative spinal surgery planning but is often measured manually, leading to variability. Recent AI and deep learning methods improve automation and accuracy. While promising, these techniques face challenges including computational complexity, small test datasets, lack of surgeon validation, and limited robustness to varied image conditions.</div></div><div><h3>PURPOSE</h3><div>To increase accuracy, reduce complexity, and provide robust preoperative X-ray analysis, we propose a novel, physics-informed deep learning method based on mathematical spinal relations. This approach aims to automatically calculate lateral and AP spinal parameters and promptly perform Lenke classification for each patient.</div></div><div><h3>STUDY DESIGN/SETTING</h3><div>N/A</div></div><div><h3>PATIENT SAMPLE</h3><div>We collected 3500 lateral and AP spine X-rays from Grand River Hospital (GRH) in Kitchener, ON, Canada, between 2016 and 2024, encompassing hip/spine implants, varied postures, and poor-contrast or partially visible spines. Image processing filters enhanced annotation accuracy, allowing landmark detection even in incomplete images. The dataset includes conventional and EOS systems, enabling thorough performance evaluation and robust landmark detection. Data was split into 80% training, 10% validation, and 10% testing.</div></div><div><h3>OUTCOME MEASURES</h3><div>This study focuses on the automatic extraction of spinopelvic parameters and anatomical landmarks from lateral and AP X-ray images, including SS, PT, PI, LL, SVA, femur center, sacrum end plate, iliac crest, L1–L5, T12–T1, C7–C2, apex, Cobb angle, LSRS, TSM, and CSRS. These measurements enable Lenke classification, identifying curve types (1–6), lumbar modifiers (A, B, C), and thoracic modifiers (–, N, +). 
To evaluate performance, we use relative root mean square error (RRMSE) to compare predicted values (PR) with manual annotations (MA), while intraclass correlation coefficient (ICC) measures reliability among surgeons, MA, and PR.</div></div><div><h3>METHODS</h3><div>Using our developed physics-informed deep learning method, spinopelvic parameters were extracted from X-ray images and validated against manual annotations. Landmarks were detected as objects with geometric constraints derived from mathematical spinal relations. Performance, compared to three senior spine surgeons, demonstrated excellent correlation, with intraclass correlation coefficients exceeding 0.9, surpassing previously reported literature values. Additionally, we developed an algorithm leveraging these parameters to automate Lenke classification, identifying curve type (1–6), lumbar modifier (A,B,C), and thoracic modifier (–,N,+), significantly aiding triage and preoperative planning.</div></div><div><h3>RESULTS</h3><div>We evaluated our model on the dataset, achieving final accuracies of 93.1% (SS), 94.6% (PT), 93.4% (Cobb angle), 91.2% (LL), and 94.5% (SVA). Patient classification attained 98.5% accuracy via our automated Lenke-based algorithm. Overall, the model surpasses literature-reported accuracy, demonstrating robust performance and reliability. To compare against surgeons, we used the intraclass correlation coefficient (ICC) with three surgeons’ annotations, revealing stronger consistency than previously reported.</div></div><div><h3>CONCLUSIONS</h3><div>Our physics-informed deep learning method reliably automates spinopelvic parameter extraction and classification, achieving high accuracy and robust surgeon-level consistency, thus advancing preoperative spinal planning and guiding AI innovations.</div></div><div><h3>FDA Device/Drug Status</h3><div>This abstract does not discuss or include any applicable devices or drugs.</div></div>\",\"PeriodicalId\":34622,\"journal\":{\"name\":\"North American Spine Society Journal\",\"volume\":\"22 \",\"pages\":\"Article 100697\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2025-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"North American Spine Society Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666548425001179\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Medicine\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"North American Spine Society Journal","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666548425001179","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Medicine","Score":null,"Total":0}

BACKGROUND CONTEXT

Spinopelvic assessment (eg, SS, PT, PI, LL, TK, CL, SVA, and Cobb angle) is vital for preoperative spinal surgery planning but is often measured manually, leading to variability. Recent AI and deep learning methods improve automation and accuracy. While promising, these techniques face challenges including computational complexity, small test datasets, lack of surgeon validation, and limited robustness to varied image conditions.

PURPOSE

To increase accuracy, reduce complexity, and provide robust preoperative X-ray analysis, we propose a novel, physics-informed deep learning method based on mathematical spinal relations. This approach aims to automatically calculate lateral and AP spinal parameters and promptly perform Lenke classification for each patient.
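One concrete example of the "mathematical spinal relations" such a physics-informed model can exploit is the geometric identity PI = PT + SS, which links the three pelvic parameters so that any two determine the third. The sketch below shows how these angles might be derived from detected 2D lateral-view landmarks; the landmark names, coordinate conventions, and sign handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pelvic_parameters(s1_anterior, s1_posterior, femoral_head_center):
    """Illustrative SS, PT, and PI (degrees) from 2D lateral-view landmarks.

    s1_anterior / s1_posterior : corners of the S1 superior endplate
    femoral_head_center        : midpoint of the two femoral head centers
    Coordinates are (x, y) in image space; signs are ignored for brevity,
    so retroverted pelvises are not handled.
    """
    s1_a = np.asarray(s1_anterior, dtype=float)
    s1_p = np.asarray(s1_posterior, dtype=float)
    hip = np.asarray(femoral_head_center, dtype=float)
    s1_mid = 0.5 * (s1_a + s1_p)

    # Sacral slope: inclination of the S1 endplate relative to the horizontal.
    dx, dy = s1_a - s1_p
    ss = np.degrees(np.arctan2(abs(dy), abs(dx)))

    # Pelvic tilt: angle between the vertical and the line joining the
    # femoral head center to the S1 endplate midpoint.
    hx, hy = s1_mid - hip
    pt = np.degrees(np.arctan2(abs(hx), abs(hy)))

    # Pelvic incidence follows from the identity PI = PT + SS, which can
    # also serve as a consistency check on detected landmarks.
    pi = pt + ss
    return ss, pt, pi

# Toy landmark coordinates (pixels) for illustration only.
print(pelvic_parameters((310, 420), (250, 395), (270, 520)))
```

Because the identity must hold for any valid landmark configuration, it can be used either to reduce the number of quantities the network must regress or to flag inconsistent detections.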

STUDY DESIGN/SETTING

N/A

PATIENT SAMPLE

We collected 3500 lateral and AP spine X-rays from Grand River Hospital (GRH) in Kitchener, ON, Canada, between 2016 and 2024, encompassing hip/spine implants, varied postures, and poor-contrast or partially visible spines. Image processing filters enhanced annotation accuracy, allowing landmark detection even in incomplete images. The dataset includes conventional and EOS systems, enabling thorough performance evaluation and robust landmark detection. Data were split into 80% training, 10% validation, and 10% testing.
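A minimal sketch of the reported 80/10/10 split, assuming hypothetical per-image identifiers and scikit-learn; the abstract does not state whether splitting was stratified or grouped by patient.

```python
from sklearn.model_selection import train_test_split

# Hypothetical image identifiers; the GRH dataset itself is not public.
image_ids = [f"xray_{i:04d}" for i in range(3500)]

# 80% training, then the remaining 20% divided evenly into validation and test.
train_ids, holdout_ids = train_test_split(image_ids, test_size=0.20, random_state=42)
val_ids, test_ids = train_test_split(holdout_ids, test_size=0.50, random_state=42)

print(len(train_ids), len(val_ids), len(test_ids))  # 2800 350 350
```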

OUTCOME MEASURES

This study focuses on the automatic extraction of spinopelvic parameters and anatomical landmarks from lateral and AP X-ray images, including SS, PT, PI, LL, SVA, femur center, sacrum end plate, iliac crest, L1–L5, T12–T1, C7–C2, apex, Cobb angle, LSRS, TSM, and CSRS. These measurements enable Lenke classification, identifying curve types (1–6), lumbar modifiers (A, B, C), and thoracic modifiers (–, N, +). To evaluate performance, we use relative root mean square error (RRMSE) to compare predicted values (PR) with manual annotations (MA), while intraclass correlation coefficient (ICC) measures reliability among surgeons, MA, and PR.
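A hedged sketch of the RRMSE metric follows; the abstract does not specify the normalization, so this version divides the RMSE by the mean absolute manual-annotation value. The ICC between surgeons, MA, and PR can be computed with standard statistics packages and is not shown here.

```python
import numpy as np

def rrmse(pred, ref):
    """Relative RMSE between predicted (PR) and manually annotated (MA)
    parameter values. Normalization by the mean absolute reference value
    is an assumption made for this illustration."""
    pred, ref = np.asarray(pred, dtype=float), np.asarray(ref, dtype=float)
    rmse = np.sqrt(np.mean((pred - ref) ** 2))
    return rmse / np.mean(np.abs(ref))

# Toy example with hypothetical sacral slope measurements (degrees).
ma = np.array([38.0, 41.5, 35.2, 44.8])
pr = np.array([37.1, 42.0, 36.0, 43.9])
print(f"RRMSE: {rrmse(pr, ma):.3f}")
```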

METHODS

Using our developed physics-informed deep learning method, spinopelvic parameters were extracted from X-ray images and validated against manual annotations. Landmarks were detected as objects with geometric constraints derived from mathematical spinal relations. Compared with annotations from three senior spine surgeons, the model demonstrated excellent agreement, with intraclass correlation coefficients exceeding 0.9, surpassing previously reported literature values. Additionally, we developed an algorithm leveraging these parameters to automate Lenke classification, identifying curve type (1–6), lumbar modifier (A, B, C), and thoracic modifier (–, N, +), significantly aiding triage and preoperative planning.
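The abstract does not detail the classification algorithm itself, so the following is a deliberately simplified illustration of Lenke curve-type logic driven only by the three regional Cobb angles, using the conventional 25° structural cutoff. The real Lenke system also uses side-bending flexibility and sagittal criteria (T2–T5 and T10–L2 kyphosis), which this sketch omits.

```python
def lenke_curve_type(pt_cobb, mt_cobb, tl_cobb, structural_threshold=25.0):
    """Simplified Lenke curve type (1-6) from proximal thoracic (PT),
    main thoracic (MT), and thoracolumbar/lumbar (TL/L) Cobb angles.

    Illustration only: any curve at or above the threshold is treated as
    structural, and the larger of MT and TL/L is taken as the major curve.
    Side-bending flexibility and sagittal criteria are ignored.
    """
    pt_structural = pt_cobb >= structural_threshold
    mt_structural = mt_cobb >= structural_threshold
    tl_structural = tl_cobb >= structural_threshold

    if mt_cobb >= tl_cobb:                      # main thoracic is the major curve
        if pt_structural and tl_structural:
            return 4                            # Triple major
        if pt_structural:
            return 2                            # Double thoracic
        if tl_structural:
            return 3                            # Double major
        return 1                                # Main thoracic
    # Thoracolumbar/lumbar is the major curve
    return 6 if mt_structural else 5            # 6: TL/L-MT, 5: TL/L


# Example: a 55 deg main thoracic curve with a 30 deg structural lumbar curve.
print(lenke_curve_type(pt_cobb=20.0, mt_cobb=55.0, tl_cobb=30.0))  # 3
```

In the standard Lenke system, the lumbar modifier (A, B, C) is then assigned from the position of the center sacral vertical line relative to the lumbar apex, and the thoracic sagittal modifier (–, N, +) from the T5–T12 kyphosis (roughly &lt;10°, 10°–40°, &gt;40°); those rules operate on the same automatically extracted landmarks and Cobb measurements.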

RESULTS

We evaluated our model on the dataset, achieving final accuracies of 93.1% (SS), 94.6% (PT), 93.4% (Cobb angle), 91.2% (LL), and 94.5% (SVA). Patient classification attained 98.5% accuracy via our automated Lenke-based algorithm. Overall, the model surpasses literature-reported accuracy, demonstrating robust performance and reliability. To compare against surgeons, we used the intraclass correlation coefficient (ICC) with three surgeons’ annotations, revealing stronger consistency than previously reported.

CONCLUSIONS

Our physics-informed deep learning method reliably automates spinopelvic parameter extraction and classification, achieving high accuracy and robust surgeon-level consistency, thus advancing preoperative spinal planning and guiding AI innovations.

FDA Device/Drug Status

This abstract does not discuss or include any applicable devices or drugs.