DIPSS and DIPSS Plus risk scoring in myelofibrosis utilizing automated, electronic health record-integrated decision system

A. Mervaala-Muroke, M. Lehto, K. Porkka, O. Brück

ESMO Real World Data and Digital Oncology, Volume 10, Article 100196 (published 9 October 2025). DOI: 10.1016/j.esmorw.2025.100196

Abstract

Background

Automated risk scoring could reduce human errors and enhance consistency. The aim of the study was to investigate whether automating the myelofibrosis Dynamic International Prognostic Scoring System (DIPSS) and DIPSS Plus scores could improve their prognostic accuracy.

Materials and methods

We built an automated, electronic health record (EHR)-integrated decision system that extracts risk score covariates from tabular source databases and free-text patient records using text mining. Physician-defined scores were obtained through manual chart review (DIPSS 12%, DIPSS Plus 21%) or manual calculation from the reported risk score covariates (DIPSS 88%, DIPSS Plus 79%). We compared automated scores with physician-defined scores by their ability to predict overall survival, using Cox regression, the C-index, and time-dependent area under the receiver operating characteristic curve (AUROC) values.
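For orientation, the rule-based scoring step can be sketched as below, using the commonly cited published point assignments for DIPSS (age >65 years, leukocytes >25 ×10⁹/l, hemoglobin <10 g/dl, circulating blasts ≥1%, constitutional symptoms) and the three additional DIPSS Plus factors (platelets <100 ×10⁹/l, transfusion need, unfavorable karyotype). The data structure, field names, and units are illustrative assumptions and do not reproduce the authors' extraction or scoring code.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class MFCovariates:
    """Risk score covariates (assumed units: hemoglobin g/dl, counts ×10⁹/l)."""
    age: int
    wbc: float                          # leukocyte count
    hemoglobin: float
    blasts_pct: float                   # circulating blasts, %
    constitutional_symptoms: bool
    platelets: Optional[float] = None   # DIPSS Plus only
    transfusion_dependent: Optional[bool] = None
    unfavorable_karyotype: Optional[bool] = None


def dipss_points(c: MFCovariates) -> int:
    """Published DIPSS point assignments (anemia carries 2 points)."""
    return ((c.age > 65)
            + (c.wbc > 25)
            + 2 * (c.hemoglobin < 10)
            + (c.blasts_pct >= 1)
            + c.constitutional_symptoms)


def dipss_category(points: int) -> str:
    if points == 0:
        return "low"
    if points <= 2:
        return "intermediate-1"
    if points <= 4:
        return "intermediate-2"
    return "high"


def dipss_plus_points(c: MFCovariates) -> int:
    """DIPSS Plus: the DIPSS category maps to 0-3 points and each of the
    three additional adverse factors adds 1 point."""
    base = {"low": 0, "intermediate-1": 1, "intermediate-2": 2, "high": 3}
    pts = base[dipss_category(dipss_points(c))]
    pts += int(c.platelets is not None and c.platelets < 100)
    pts += int(bool(c.transfusion_dependent))
    pts += int(bool(c.unfavorable_karyotype))
    return pts


def dipss_plus_category(points: int) -> str:
    if points == 0:
        return "low"
    if points == 1:
        return "intermediate-1"
    if points <= 3:
        return "intermediate-2"
    return "high"


# Example: 70-year-old with hemoglobin 9.2 g/dl, constitutional symptoms,
# and platelets 85 ×10⁹/l.
p = MFCovariates(age=70, wbc=12.0, hemoglobin=9.2, blasts_pct=0.5,
                 constitutional_symptoms=True, platelets=85.0,
                 transfusion_dependent=False, unfavorable_karyotype=False)
print(dipss_category(dipss_points(p)))             # intermediate-2
print(dipss_plus_category(dipss_plus_points(p)))   # intermediate-2
```

In such a setup, the text-mining layer only needs to populate the covariate fields; the scoring itself remains a deterministic rule set.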

Results

We included real-world data from patients with myelofibrosis (n = 251) in the Helsinki University Hospital district, Finland, at the time of their diagnosis. Cox regression analyses demonstrated C-indices of 0.72/0.72 (DIPSS/DIPSS Plus) for automated scoring and 0.69/0.71 for physician-defined scoring. Yearly time-dependent AUROC values for 10-year overall survival ranged 0.75-0.82/0.74-0.84 for automated scoring and 0.71-0.79/0.74-0.82 for physician-defined scoring. We validated the feasibility and performance of the automated model in an external dataset (n = 120 patients): C-indices were 0.68/0.70 for automated scoring versus 0.66/0.67 for physician-defined scoring, and AUROC values ranged 0.67-0.76/0.67-0.87 for automated scoring versus 0.65-0.74/0.65-0.79 for physician-defined scoring.
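As a rough illustration of the evaluation, the sketch below fits a univariable Cox model on a risk score and computes yearly time-dependent AUROC values for overall survival over a 10-year horizon. The column names, the yearly grid, and the use of lifelines and scikit-survival are assumptions for illustration; the abstract does not describe the authors' actual analysis software.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sksurv.metrics import cumulative_dynamic_auc
from sksurv.util import Surv


def evaluate_score(df: pd.DataFrame, score_col: str) -> None:
    """Report the C-index and yearly time-dependent AUROC of a risk score.

    Assumed columns (hypothetical names): `score_col` with the numeric risk
    score, `os_years` with follow-up in years, and `death` with the event
    indicator (1 = died, 0 = censored).
    """
    # Univariable Cox regression of overall survival on the risk score.
    cph = CoxPHFitter()
    cph.fit(df[[score_col, "os_years", "death"]],
            duration_col="os_years", event_col="death")
    print(f"{score_col}: C-index = {cph.concordance_index_:.2f}")

    # Time-dependent AUROC on a yearly grid up to 10 years, limited to
    # time points within the observed follow-up.
    surv = Surv.from_arrays(event=df["death"].astype(bool), time=df["os_years"])
    times = np.arange(1, 11)
    times = times[times < df["os_years"].max()]
    auc, mean_auc = cumulative_dynamic_auc(surv, surv, df[score_col].values, times)
    for t, a in zip(times, auc):
        print(f"  AUROC for overall survival at {t} y: {a:.2f}")
    print(f"  mean AUROC: {mean_auc:.2f}")
```

Calling `evaluate_score` once with the automated score column and once with the physician-defined score column yields the kind of head-to-head comparison reported above.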

Conclusions

We present the first automated, EHR-integrated decision system for calculating DIPSS and DIPSS Plus scores. The accuracy of the automated scores was comparable to that of the physician-defined scores, but score availability was substantially improved, highlighting the need for machine-assisted scoring.