Natural Language Processing of Learners' Evaluations of Attendings to Identify Professionalism Lapses

Janae K Heath, Caitlin B Clancy, William Pluta, Gary E Weissman, Ursula Anderson, Jennifer R Kogan, C Jessica Dine, Judy A Shea

Evaluation & the Health Professions, 46(3), 225-232. Published 2023-09-01. DOI: 10.1177/01632787231158128
Unprofessional faculty behaviors negatively impact the well-being of trainees yet are infrequently reported through established reporting systems. Manual review of narrative faculty evaluations provides an additional avenue for identifying unprofessional behavior but is time- and resource-intensive, and therefore of limited value for identifying and remediating faculty with professionalism concerns. Natural language processing (NLP) techniques may provide a mechanism for streamlining manual review processes to identify faculty professionalism lapses. In this retrospective cohort study of 15,432 narrative evaluations of medical faculty by medical trainees, we identified professionalism lapses using automated analysis of the text of faculty evaluations. We used multiple NLP approaches to develop and validate several classification models, which were evaluated primarily on positive predictive value (PPV) and secondarily on calibration. An NLP model combining sentiment analysis (quantifying the subjectivity of the text) with key words (using an ensemble technique) had the best overall performance, with a PPV of 49% (CI 38%-59%). These findings highlight how NLP can be used to screen narrative evaluations of faculty to identify unprofessional faculty behaviors. Incorporating NLP into faculty review workflows enables a more focused manual review of comments, providing a supplemental mechanism to identify faculty professionalism lapses.
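The screening approach the abstract describes can be sketched in a few lines: combine a keyword screen with a subjectivity score in an ensemble-style rule, flag comments for manual review, and evaluate the flags by PPV. The sketch below is illustrative only, not the authors' code; the keyword list, subjectivity lexicon, and threshold are hypothetical stand-ins (a real pipeline would use a sentiment library's subjectivity score and a validated keyword set).

```python
# Illustrative sketch of a keyword + subjectivity ensemble screen for
# narrative comments, scored by positive predictive value (PPV).
# All word lists and the threshold below are hypothetical examples.

UNPROFESSIONAL_KEYWORDS = {"unprofessional", "disrespectful", "belittling", "inappropriate"}
SUBJECTIVE_WORDS = {"terrible", "awful", "horrible", "worst", "rude"}

def subjectivity(text: str) -> float:
    """Fraction of tokens drawn from a small subjective-word lexicon
    (a crude stand-in for a sentiment library's subjectivity score)."""
    tokens = [t.strip(".,!").lower() for t in text.split()]
    if not tokens:
        return 0.0
    return sum(t in SUBJECTIVE_WORDS for t in tokens) / len(tokens)

def flag(text: str, threshold: float = 0.15) -> bool:
    """Ensemble-style rule: flag if any keyword appears OR subjectivity is high."""
    tokens = {t.strip(".,!").lower() for t in text.split()}
    return bool(tokens & UNPROFESSIONAL_KEYWORDS) or subjectivity(text) >= threshold

def ppv(predictions, labels) -> float:
    """PPV = true positives / all comments the classifier flagged."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    return tp / (tp + fp) if (tp + fp) else 0.0

comments = [
    "Attending was disrespectful to the team on rounds.",  # true lapse
    "Excellent teacher, very supportive of learners.",     # no lapse
    "Honestly the worst, rude, awful experience ever.",    # true lapse
    "Gave clear feedback and modeled professionalism.",    # no lapse
]
labels = [True, False, True, False]
preds = [flag(c) for c in comments]
print(preds, round(ppv(preds, labels), 2))
```

In the study's framing, a flag does not diagnose a lapse; it only routes the comment to a focused manual review, which is why PPV (the fraction of flagged comments that are true lapses) is the primary metric.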
About the journal:
Evaluation & the Health Professions is a peer-reviewed, quarterly journal that provides health-related professionals with state-of-the-art methodological, measurement, and statistical tools for conceptualizing the etiology of health promotion and problems, and developing, implementing, and evaluating health programs, teaching and training services, and products that pertain to a myriad of health dimensions. This journal is a member of the Committee on Publication Ethics (COPE). Average time from submission to first decision: 31 days