{"title":"急诊科笔记质量评分工具的开发:改进的德尔菲法。","authors":"Daniel Z Foster, Stuart L Douglas, Akshay Rajaram","doi":"10.1007/s43678-025-00914-5","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>Emergency department (ED) documentation is essential for patient care. Accepted standards are required to teach best practices; however, tools to assess ED note quality are deficient, either lacking validation or performing poorly. We sought to develop a tool for assessing ED note quality for lower acuity visits in patients 16 years of age or older.</p><p><strong>Methods: </strong>We employed a modified Delphi approach with two rounds of electronic surveys. We invited 40 Canadian emergency physicians to serve as experts. In round one, we gathered feedback on dimensions (content elements, attributes, and scoring) relevant to ED note quality. Using these data, we derived a draft tool which was shared with the experts in round two, and then modified based on their feedback. Outcome data included survey response rates, and quantitative and qualitative feedback.</p><p><strong>Results: </strong>Response rates were 44% (n = 17) and 47% (n = 8) for the first and second rounds. Key perspectives from round one emphasized differences between broadly applicable (\"universal\") versus context-specific (\"conditional\") elements, the importance of certain attributes, and a binary scoring system. The authors drew on perspectives to develop a tool with eight universal and 16 conditional elements, four attributes, scored using a binary system. Feedback from the second round recommended minor changes, but demonstrated consensus on the tool's stated function.</p><p><strong>Conclusion: </strong>We developed the Tool for ED Note Quality. Limitations include a small sample size and a focus on physician perspectives. Next steps include generation of evidence for validity and refinement of the scoring system. Once validated, the tool may be used in assessing ED note quality for the purposes of medical education, quality improvement, and digital health research.</p>","PeriodicalId":93937,"journal":{"name":"CJEM","volume":" ","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Development of a scoring tool for emergency department note quality: a modified Delphi approach.\",\"authors\":\"Daniel Z Foster, Stuart L Douglas, Akshay Rajaram\",\"doi\":\"10.1007/s43678-025-00914-5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>Emergency department (ED) documentation is essential for patient care. Accepted standards are required to teach best practices; however, tools to assess ED note quality are deficient, either lacking validation or performing poorly. We sought to develop a tool for assessing ED note quality for lower acuity visits in patients 16 years of age or older.</p><p><strong>Methods: </strong>We employed a modified Delphi approach with two rounds of electronic surveys. We invited 40 Canadian emergency physicians to serve as experts. In round one, we gathered feedback on dimensions (content elements, attributes, and scoring) relevant to ED note quality. Using these data, we derived a draft tool which was shared with the experts in round two, and then modified based on their feedback. 
Outcome data included survey response rates, and quantitative and qualitative feedback.</p><p><strong>Results: </strong>Response rates were 44% (n = 17) and 47% (n = 8) for the first and second rounds. Key perspectives from round one emphasized differences between broadly applicable (\\\"universal\\\") versus context-specific (\\\"conditional\\\") elements, the importance of certain attributes, and a binary scoring system. The authors drew on perspectives to develop a tool with eight universal and 16 conditional elements, four attributes, scored using a binary system. Feedback from the second round recommended minor changes, but demonstrated consensus on the tool's stated function.</p><p><strong>Conclusion: </strong>We developed the Tool for ED Note Quality. Limitations include a small sample size and a focus on physician perspectives. Next steps include generation of evidence for validity and refinement of the scoring system. Once validated, the tool may be used in assessing ED note quality for the purposes of medical education, quality improvement, and digital health research.</p>\",\"PeriodicalId\":93937,\"journal\":{\"name\":\"CJEM\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2025-04-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"CJEM\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s43678-025-00914-5\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"CJEM","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s43678-025-00914-5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Development of a scoring tool for emergency department note quality: a modified Delphi approach.
Objective: Emergency department (ED) documentation is essential for patient care. Accepted standards are required to teach best practices; however, tools to assess ED note quality are deficient, either lacking validation or performing poorly. We sought to develop a tool to assess ED note quality for lower-acuity visits in patients 16 years of age or older.
Methods: We employed a modified Delphi approach with two rounds of electronic surveys. We invited 40 Canadian emergency physicians to serve as experts. In round one, we gathered feedback on dimensions relevant to ED note quality (content elements, attributes, and scoring). Using these data, we derived a draft tool, which we shared with the experts in round two and then modified based on their feedback. Outcome data included survey response rates and quantitative and qualitative feedback.
Results: Response rates were 44% (n = 17) for the first round and 47% (n = 8) for the second. Key perspectives from round one emphasized the distinction between broadly applicable ("universal") and context-specific ("conditional") elements, the importance of certain attributes, and a binary scoring system. The authors drew on these perspectives to develop a tool with eight universal elements, 16 conditional elements, and four attributes, scored using a binary system. Feedback from the second round recommended minor changes but demonstrated consensus on the tool's stated function.
Conclusion: We developed the Tool for ED Note Quality. Limitations include a small sample size and a focus on physician perspectives. Next steps include generating evidence for validity and refining the scoring system. Once validated, the tool may be used to assess ED note quality for the purposes of medical education, quality improvement, and digital health research.
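The abstract describes the tool's structure (eight universal elements, 16 conditional elements, and four attributes, scored binarily) but not how scores are tallied. The sketch below is a minimal illustration, assuming a simple present/absent tally in which conditional elements are counted only when applicable to the visit; all item names are hypothetical placeholders and none of this reflects the authors' actual implementation or item definitions.

```python
# Illustrative sketch only: a binary rubric with universal elements (scored
# for every note), conditional elements (scored only when applicable), and
# attributes. Item names are hypothetical, not taken from the published tool.
from dataclasses import dataclass, field


@dataclass
class NoteAssessment:
    # Universal elements: applicable to every note (the tool defines eight).
    universal: dict[str, bool] = field(default_factory=dict)
    # Conditional elements: included only when relevant to the visit context
    # (the tool defines 16); inapplicable items are simply omitted.
    conditional: dict[str, bool] = field(default_factory=dict)
    # Attributes: overall qualities of the note (the tool defines four).
    attributes: dict[str, bool] = field(default_factory=dict)

    def score(self) -> tuple[int, int]:
        """Return (points earned, points possible) under binary scoring:
        each applicable item is worth 1 if met and 0 otherwise."""
        items = {**self.universal, **self.conditional, **self.attributes}
        return sum(items.values()), len(items)


# Example usage with hypothetical item names.
assessment = NoteAssessment(
    universal={"chief_complaint": True, "disposition": True},
    conditional={"return_precautions": False},  # applicable but not documented
    attributes={"organized_and_legible": True},
)
earned, possible = assessment.score()
print(f"{earned}/{possible} applicable items met")  # -> 3/4 applicable items met
```

A binary (met / not met) tally like this keeps rating simple for educators and reviewers; how the authors ultimately weight or aggregate items is part of the scoring-system refinement described in the Conclusion.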