AI automated radiographic scoring in rheumatoid arthritis: Shedding light on barriers to implementation through comprehensive evaluation
Alix Bird, Lauren Oakden-Rayner, Katrina Chakradeo, Ranjeny Thomas, Drishti Gupta, Suyash Jain, Rohan Jacob, Shonket Ray, Mihir D Wechalekar, Susanna Proudman, Lyle J. Palmer
Seminars in Arthritis and Rheumatism, Volume 74, Article 152761 (published 2025-06-05). DOI: 10.1016/j.semarthrit.2025.152761
Abstract
Objectives
Artificial intelligence (AI) has demonstrated the potential to improve the efficiency and reliability of radiographic scoring in rheumatoid arthritis, but evidence sufficient to justify clinical use is lacking. We developed a deep learning model to automate radiographic scoring and rigorously validated it against two external test sets, drawing on state-of-the-art reporting guidelines to clarify current barriers to implementation.
Methods
AI algorithms were trained to predict the Sharp/van der Heijde score in hands and feet using a cohort of 157 patients and 1470 radiographs. External replication was undertaken in test datasets from two hospitals (n = 253 patients, 589 radiographs). Alongside standard performance metrics for error and agreement, we reported subgroup performance, conducted an exploratory analysis of errors, and demonstrated relationships with functional outcomes.
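As background for the prediction target (and not a description of the authors' pipeline), the sketch below shows how a Sharp/van der Heijde total score is assembled from per-joint erosion and joint space narrowing subscores; the per-joint caps reflect the published method, while the function and toy inputs are purely illustrative.

```python
# Illustrative composition of a Sharp/van der Heijde (SvdH) total score from
# per-joint subscores. The per-joint caps (hand erosions 0-5, foot erosions
# 0-10, joint space narrowing 0-4) follow the published method; the function
# and toy inputs are background only, not the authors' implementation.

def svdh_total(hand_erosions, foot_erosions, hand_jsn, foot_jsn):
    """Sum per-joint subscores after clamping each to its allowed range."""
    def clamped_sum(scores, max_per_joint):
        return sum(min(max(int(s), 0), max_per_joint) for s in scores)

    erosion = clamped_sum(hand_erosions, 5) + clamped_sum(foot_erosions, 10)
    narrowing = clamped_sum(hand_jsn, 4) + clamped_sum(foot_jsn, 4)
    return erosion + narrowing   # full method scores 0-448 across all joints


# Toy example: mild erosive change and narrowing in a few hand joints only.
print(svdh_total(hand_erosions=[1, 2, 0, 3], foot_erosions=[],
                 hand_jsn=[1, 1, 0, 2], foot_jsn=[]))  # -> 10
```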
Results
Our AI system underperformed relative to manual scoring, with lower agreement between the AI and the consensus score than between the two manual scorers. The AI system was better at ranking scores than at achieving absolute agreement: intraclass correlation coefficients ranged from 0.03 to 0.27, whereas Spearman's correlation coefficients were consistently higher, ranging from 0.16 to 0.55.
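The gap between intraclass and rank correlation is easiest to see with a toy calculation. The sketch below (synthetic data, not the authors' code or results) computes ICC(2,1) for absolute agreement alongside Spearman's rho for a hypothetical model that preserves the ordering of scores but is systematically miscalibrated: the ICC comes out low while the rank correlation remains high.

```python
# Minimal sketch, on synthetic data, of why absolute-agreement ICC can be low
# while Spearman's rank correlation stays high: a scorer that preserves the
# ordering of cases but is systematically miscalibrated is penalised by ICC
# but not by rho. Illustrative only; not the study's analysis code.
import numpy as np
from scipy.stats import spearmanr


def icc_a1(y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    y is an (n_subjects, k_raters) array of scores.
    """
    n, k = y.shape
    grand = y.mean()
    ms_rows = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-subject MS
    ms_cols = n * ((y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-rater MS
    ss_err = ((y - grand) ** 2).sum() - (n - 1) * ms_rows - (k - 1) * ms_cols
    ms_err = ss_err / ((n - 1) * (k - 1))                           # residual MS
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k / n * (ms_cols - ms_err))


rng = np.random.default_rng(0)
consensus = rng.uniform(0, 100, size=200)             # hypothetical manual consensus scores
model = 2.5 * consensus + 60 + rng.normal(0, 5, 200)  # ranks preserved, scale and offset wrong

rho, _ = spearmanr(consensus, model)
print(f"ICC(2,1): {icc_a1(np.column_stack([consensus, model])):.2f}")  # low: poor absolute agreement
print(f"Spearman: {rho:.2f}")                                          # high: ordering preserved
```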
Conclusions
The performance of the AI systems developed for automating radiographic scoring in rheumatoid arthritis is insufficient to justify their use in research or clinical practice. Large, diverse, and thoroughly described longitudinal datasets will be indispensable for the development and rigorous evaluation of such algorithms. Achieving this is key to the ongoing, precise evaluation of clinical outcomes in rheumatoid arthritis and to further improvements in patient care.
About the journal:
Seminars in Arthritis and Rheumatism provides access to the highest-quality clinical, therapeutic and translational research about arthritis, rheumatology and musculoskeletal disorders that affect the joints and connective tissue. Each bimonthly issue includes articles giving you the latest diagnostic criteria, consensus statements, systematic reviews and meta-analyses as well as clinical and translational research studies. Read this journal for the latest groundbreaking research and to gain insights from scientists and clinicians on the management and treatment of musculoskeletal and autoimmune rheumatologic diseases. The journal is of interest to rheumatologists, orthopedic surgeons, internal medicine physicians, immunologists and specialists in bone and mineral metabolism.