{"title":"验证和更新GRASP:临床预测工具分级和评估的循证框架","authors":"Mohamed Khalifa , Farah Magrabi , Blanca Gallego","doi":"10.1016/j.cmpbup.2024.100161","DOIUrl":null,"url":null,"abstract":"<div><h3>Background</h3><div>When selecting clinical predictive tools, clinicians are challenged with an overwhelming and ever-growing number, most of which have never been implemented or evaluated for effectiveness. The authors developed an evidence-based framework for grading and assessment of predictive tools (GRASP). The objective of this study is to refine, validate GRASP, and assess its reliability for consistent application.</div></div><div><h3>Methods</h3><div>A mixed-methods study was conducted, involving an initial web-based survey for feedback from a wide group of international experts in clinical prediction to refine the GRASP framework, followed by reliability testing with two independent researchers assessing eight predictive tools. The survey involved 81 experts who rated agreement with the framework's criteria on a five-point Likert scale and provided qualitative feedback. The reliability of the GRASP framework was evaluated through interrater reliability testing using Spearman's rank correlation coefficient.</div></div><div><h3>Results</h3><div>The survey yielded strong agreement of the experts with the framework's evaluation criteria, overall average score: 4.35/5, highlighting the importance of predictive performance, usability, potential effect, and post-implementation impact in grading clinical predictive tools. Qualitative feedback led to significant refinements, including detailed categorisation of evidence levels and clearer representation of evaluation criteria. 
Interrater reliability testing showed high agreement between researchers and authors (0.994) and among researchers (0.988), indicating strong consistency in tool grading.</div></div><div><h3>Conclusion</h3><div>The GRASP framework provides a high-level, evidence-based, and comprehensive, yet simple and feasible, approach to evaluate, compare, and select the best clinical predictive tools, with strong expert agreement and high interrater reliability. It assists clinicians in selecting effective tools by grading them on the level of validation of predictive performance before implementation, usability and potential effect during planning for implementation, and post-implementation impact on healthcare processes and clinical outcomes. Future studies should focus on the framework's application in clinical settings and its impact on decision-making and guideline development.</div></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"7 ","pages":"Article 100161"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Validating and updating GRASP: An evidence-based framework for grading and assessment of clinical predictive tools\",\"authors\":\"Mohamed Khalifa , Farah Magrabi , Blanca Gallego\",\"doi\":\"10.1016/j.cmpbup.2024.100161\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Background</h3><div>When selecting clinical predictive tools, clinicians are challenged with an overwhelming and ever-growing number, most of which have never been implemented or evaluated for effectiveness. The authors developed an evidence-based framework for grading and assessment of predictive tools (GRASP). 
The objective of this study is to refine, validate GRASP, and assess its reliability for consistent application.</div></div><div><h3>Methods</h3><div>A mixed-methods study was conducted, involving an initial web-based survey for feedback from a wide group of international experts in clinical prediction to refine the GRASP framework, followed by reliability testing with two independent researchers assessing eight predictive tools. The survey involved 81 experts who rated agreement with the framework's criteria on a five-point Likert scale and provided qualitative feedback. The reliability of the GRASP framework was evaluated through interrater reliability testing using Spearman's rank correlation coefficient.</div></div><div><h3>Results</h3><div>The survey yielded strong agreement of the experts with the framework's evaluation criteria, overall average score: 4.35/5, highlighting the importance of predictive performance, usability, potential effect, and post-implementation impact in grading clinical predictive tools. Qualitative feedback led to significant refinements, including detailed categorisation of evidence levels and clearer representation of evaluation criteria. Interrater reliability testing showed high agreement between researchers and authors (0.994) and among researchers (0.988), indicating strong consistency in tool grading.</div></div><div><h3>Conclusion</h3><div>The GRASP framework provides a high-level, evidence-based, and comprehensive, yet simple and feasible, approach to evaluate, compare, and select the best clinical predictive tools, with strong expert agreement and high interrater reliability. It assists clinicians in selecting effective tools by grading them on the level of validation of predictive performance before implementation, usability and potential effect during planning for implementation, and post-implementation impact on healthcare processes and clinical outcomes. 
Future studies should focus on the framework's application in clinical settings and its impact on decision-making and guideline development.</div></div>\",\"PeriodicalId\":72670,\"journal\":{\"name\":\"Computer methods and programs in biomedicine update\",\"volume\":\"7 \",\"pages\":\"Article 100161\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer methods and programs in biomedicine update\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666990024000284\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer methods and programs in biomedicine update","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666990024000284","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Validating and updating GRASP: An evidence-based framework for grading and assessment of clinical predictive tools
Background
When selecting clinical predictive tools, clinicians face an overwhelming and ever-growing number of candidates, most of which have never been implemented or evaluated for effectiveness. The authors previously developed an evidence-based framework for grading and assessment of predictive tools (GRASP). The objective of this study was to refine and validate GRASP and to assess its reliability for consistent application.
Methods
A mixed-methods study was conducted: an initial web-based survey gathered feedback from a wide group of international experts in clinical prediction to refine the GRASP framework, followed by reliability testing in which two independent researchers assessed eight predictive tools. The survey involved 81 experts, who rated their agreement with the framework's criteria on a five-point Likert scale and provided qualitative feedback. Interrater reliability of the GRASP framework was evaluated using Spearman's rank correlation coefficient.
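The interrater reliability statistic described above can be illustrated with a short script. The grades below are hypothetical rankings of eight tools by two raters, not the study's data; the sketch uses the no-ties formula for Spearman's rank correlation, rho = 1 - 6*Σd²/(n(n²-1)).

```python
def spearman_rho(x, y):
    """Spearman's rank correlation via the no-ties formula.

    Assumes all values within each rater's list are distinct,
    so simple ordinal ranks apply (no tie correction).
    """
    n = len(x)
    # Map each value to its rank (1 = smallest)
    rank = lambda v: {val: i + 1 for i, val in enumerate(sorted(v))}
    rx, ry = rank(x), rank(y)
    # Sum of squared rank differences across paired items
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical grades for eight tools from two independent raters
rater1 = [1, 2, 3, 4, 5, 6, 7, 8]
rater2 = [1, 2, 4, 3, 5, 6, 7, 8]  # two tools swapped
print(round(spearman_rho(rater1, rater2), 3))  # → 0.976
```

A single swapped pair still yields a coefficient near 1, which is why values such as 0.994 and 0.988 indicate near-identical grade orderings between raters.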
Results
The survey showed strong expert agreement with the framework's evaluation criteria (overall average score 4.35/5), highlighting the importance of predictive performance, usability, potential effect, and post-implementation impact in grading clinical predictive tools. Qualitative feedback led to significant refinements, including detailed categorisation of evidence levels and clearer representation of evaluation criteria. Interrater reliability testing showed high agreement between the researchers and the authors (0.994) and between the two researchers (0.988), indicating strong consistency in tool grading.
Conclusion
The GRASP framework provides a high-level, evidence-based, and comprehensive, yet simple and feasible, approach to evaluating, comparing, and selecting the best clinical predictive tools, with strong expert agreement and high interrater reliability. It assists clinicians in selecting effective tools by grading them on the level of validation of predictive performance before implementation, on usability and potential effect during planning for implementation, and on post-implementation impact on healthcare processes and clinical outcomes. Future studies should focus on the framework's application in clinical settings and its impact on decision-making and guideline development.