DIPSS and DIPSS Plus risk scoring in myelofibrosis utilizing automated, electronic health record-integrated decision system
A. Mervaala-Muroke, M. Lehto, K. Porkka, O. Brück
ESMO Real World Data and Digital Oncology, Article 100196 (2025). DOI: 10.1016/j.esmorw.2025.100196
Abstract
Background
Automated risk scoring could reduce human errors and enhance consistency. The aim of this study was to investigate whether automating the myelofibrosis Dynamic International Prognostic Scoring System (DIPSS) and DIPSS Plus scores could improve their prognostic accuracy.
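To make the scoring rule concrete, the sketch below derives a DIPSS risk category from the five covariates the score is built on. The point weights and category cut-offs follow the published DIPSS criteria; the data structure, field names, and thresholds shown are illustrative assumptions, not the study's extraction schema.

```python
# Minimal sketch of DIPSS scoring from extracted covariates.
# Point weights follow the published DIPSS criteria; the class and
# field names are hypothetical, not the paper's actual data model.
from dataclasses import dataclass

@dataclass
class Covariates:
    age_years: float
    hemoglobin_g_dl: float
    wbc_10e9_l: float
    blasts_percent: float
    constitutional_symptoms: bool

def dipss_points(c: Covariates) -> int:
    points = 0
    points += 1 if c.age_years > 65 else 0
    points += 2 if c.hemoglobin_g_dl < 10 else 0   # haemoglobin carries 2 points
    points += 1 if c.wbc_10e9_l > 25 else 0
    points += 1 if c.blasts_percent >= 1 else 0
    points += 1 if c.constitutional_symptoms else 0
    return points

def dipss_category(points: int) -> str:
    if points == 0:
        return "low"
    if points <= 2:
        return "intermediate-1"
    if points <= 4:
        return "intermediate-2"
    return "high"

# Example: a 70-year-old with haemoglobin 9.2 g/dl and constitutional symptoms.
example = Covariates(70, 9.2, 12.0, 0.5, True)
print(dipss_category(dipss_points(example)))  # -> "intermediate-2"
```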
Materials and methods
We built an automated, electronic health record (EHR)-integrated decision system that extracts risk score covariates from tabular source databases and from free-text patient records using text mining. Physician-defined scores were obtained through manual chart review (DIPSS 12%, DIPSS Plus 21%) or were calculated manually (DIPSS 88%, DIPSS Plus 79%) from the reported risk score covariates. We compared automated scores with physician-defined scores by their ability to predict overall survival, using Cox regression, the C-index, and time-dependent area under the receiver operating characteristic curve (AUROC) values.
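As an illustration of this type of comparison, the sketch below computes Harrell's C-index and yearly time-dependent AUROC for two competing risk scores with scikit-survival. The toy data and variable names are assumptions for demonstration only, not the study's cohort or code.

```python
# Sketch of comparing two risk scores by C-index and time-dependent AUROC.
import numpy as np
from sksurv.util import Surv
from sksurv.metrics import concordance_index_censored, cumulative_dynamic_auc

# Toy data standing in for a cohort (time in years, event = death observed).
rng = np.random.default_rng(0)
time = rng.uniform(0.5, 12.0, size=200)
event = rng.random(200) < 0.6
auto_score = rng.integers(0, 7, size=200)        # e.g. automated DIPSS points
physician_score = rng.integers(0, 7, size=200)   # e.g. physician-defined points

y = Surv.from_arrays(event=event, time=time)

# Harrell's C-index for each scoring approach (higher score = higher risk).
for name, score in [("automated", auto_score), ("physician", physician_score)]:
    cindex = concordance_index_censored(event, time, score)[0]
    print(f"{name}: C-index = {cindex:.2f}")

# Yearly time-dependent AUROC for 10-year overall survival.
eval_times = np.arange(1, 11)
auc_auto, _ = cumulative_dynamic_auc(y, y, auto_score, eval_times)
auc_phys, _ = cumulative_dynamic_auc(y, y, physician_score, eval_times)
print("automated AUROC by year:", np.round(auc_auto, 2))
print("physician AUROC by year:", np.round(auc_phys, 2))
```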
Results
We included real-world data from patients with myelofibrosis (n = 251) in the Helsinki University Hospital district, Finland, collected at the time of diagnosis. Cox regression analyses demonstrated C-indices of 0.72/0.72 (DIPSS/DIPSS Plus) for automated scoring and 0.69/0.71 for physician-defined scoring. Yearly time-dependent AUROC values for 10-year overall survival ranged 0.75-0.82/0.74-0.84 for automated scoring and 0.71-0.79/0.74-0.82 for physician-defined scoring. We validated the feasibility and performance of the automated model in an external dataset (n = 120 patients; C-indices: 0.68/0.70 for automated scoring versus 0.66/0.67 for physician-defined scoring; AUROC ranges: 0.67-0.76/0.67-0.87 for automated scoring versus 0.65-0.74/0.65-0.79 for physician-defined scoring).
Conclusions
We present the first automated, EHR-integrated decision system for calculating DIPSS and DIPSS Plus scores. The accuracy of the automated scores was comparable to that of the physician-defined scores, while score availability was significantly improved, highlighting the need for machine-assisted scoring.