Marta M Maslej, Kayle Donner, Anupam Thakur, Faisal Islam, Kenya A Costa-Dookhan, Sanjeev Sockalingam
{"title":"Deriving Insights From Open-Ended Learner Feedback: An Exploration of Natural Language Processing Approaches.","authors":"Marta M Maslej, Kayle Donner, Anupam Thakur, Faisal Islam, Kenya A Costa-Dookhan, Sanjeev Sockalingam","doi":"10.1097/CEH.0000000000000597","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Open-ended feedback from learners offers valuable insights for adapting continuing health education to their needs; however, this feedback is burdensome to analyze with qualitative methods. Natural language processing offers a potential solution, but it is unclear which methods provide useful insights. We evaluated natural language processing methods for analyzing open-ended feedback from continuing professional development training at a psychiatric hospital.</p><p><strong>Methods: </strong>The data set consisted of survey responses from staff participants, which included two text responses on how participants intended to use the training (\"intent to use\"; n = 480) and other information they wished to share (\"open-ended feedback\"; n = 291). We analyzed \"intent-to-use\" responses with topic modeling, \"open-ended feedback\" with sentiment analysis, and both responses with large language model (LLM)-based clustering. We examined outputs of each approach to determine their value for deriving insights about the training.</p><p><strong>Results: </strong>Our results indicated that because the \"intent-to-use\" responses were short and lacked diversity, topic modeling was not useful in differentiating content between the topics. For \"open-ended feedback,\" sentiment scores did not accurately reflect the valence of responses. 
The LLM-based clustering approach generated meaningful clusters characterized by semantically similar words for both responses.</p><p><strong>Discussion: </strong>LLMs may be a useful approach for deriving insights from learner feedback because they capture context, making them capable of distinguishing between responses that use similar words to convey different topics. Future directions involve exploring other methods involving LLMs, or examining how these methods fare on other data sets or types of learner feedback.</p>","PeriodicalId":50218,"journal":{"name":"Journal of Continuing Education in the Health Professions","volume":" ","pages":""},"PeriodicalIF":1.6000,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Continuing Education in the Health Professions","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1097/CEH.0000000000000597","RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Cited by: 0
Abstract
Introduction: Open-ended feedback from learners offers valuable insights for adapting continuing health education to their needs; however, this feedback is burdensome to analyze with qualitative methods. Natural language processing offers a potential solution, but it is unclear which methods provide useful insights. We evaluated natural language processing methods for analyzing open-ended feedback from continuing professional development training at a psychiatric hospital.
Methods: The data set consisted of survey responses from staff participants, which included two text responses on how participants intended to use the training ("intent to use"; n = 480) and other information they wished to share ("open-ended feedback"; n = 291). We analyzed "intent-to-use" responses with topic modeling, "open-ended feedback" with sentiment analysis, and both responses with large language model (LLM)-based clustering. We examined outputs of each approach to determine their value for deriving insights about the training.
Results: Because the "intent-to-use" responses were short and lacked diversity, topic modeling did not produce topics with clearly differentiated content. For "open-ended feedback," sentiment scores did not accurately reflect the valence of responses. The LLM-based clustering approach generated meaningful clusters of semantically similar words for both response types.
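To see how sentiment scores can misread valence, consider a toy lexicon-based scorer on made-up feedback; the lexicon and examples are invented, and the study's actual sentiment tool is unspecified.

```python
# Toy lexicon-based sentiment scorer (hypothetical; not the study's tool),
# illustrating how word-counting scores can miss negation and context.
POSITIVE = {"great", "helpful", "engaging"}
NEGATIVE = {"boring", "confusing", "useless"}

def score(text: str) -> int:
    words = text.lower().replace(".", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(score("The training was great and helpful"))    # clearly positive
print(score("Not helpful at all, frankly confusing")) # negative text, neutral score
```

The second response is negative, but "helpful" cancels "confusing" and the score comes out neutral, the kind of mismatch between score and valence the Results describe.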
Discussion: LLMs may be a useful approach for deriving insights from learner feedback because they capture context, making them capable of distinguishing between responses that use similar words to convey different topics. Future directions include exploring other LLM-based methods and examining how these methods fare on other data sets or types of learner feedback.
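A minimal sketch of the LLM-based clustering idea: represent each response as a dense vector, then cluster the vectors. The `embed` function below is a deterministic toy stand-in for a real LLM embedding call, since the paper does not name the model used.

```python
# Sketch of embedding-based clustering; embed() is a hash-seeded placeholder
# for a real LLM embedding model, used only to keep the example runnable.
import hashlib
import numpy as np
from sklearn.cluster import KMeans

def embed(text: str, dim: int = 16) -> np.ndarray:
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim)

responses = [
    "I will apply these skills with my patients",
    "Use the training in daily patient care",
    "Share what I learned with my team",
    "Discuss the material with colleagues",
]

vectors = np.stack([embed(r) for r in responses])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for response, label in zip(responses, labels):
    print(label, response)
```

With real LLM embeddings, nearby vectors correspond to semantically similar responses even when they share few words; the toy embedder only demonstrates the mechanics of the pipeline.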
Journal Introduction:
The Journal of Continuing Education in the Health Professions is a quarterly journal publishing articles relevant to theory, practice, and policy development for continuing education in the health sciences. The journal presents original research and essays on subjects involving the lifelong learning of professionals, with a focus on continuous quality improvement, competency assessment, and knowledge translation. It provides thoughtful advice to those who develop, conduct, and evaluate continuing education programs.