Improving course evaluation processes in higher education institutions: a modular system approach.

IF 2.5 · CAS Tier 4 (Computer Science) · JCR Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
PeerJ Computer Science · Pub Date: 2025-08-28 · eCollection Date: 2025-01-01 · DOI: 10.7717/peerj-cs.3110
İlker Kocaoğlu, Erinç Karataş
{"title":"Improving course evaluation processes in higher education institutions: a modular system approach.","authors":"İlker Kocaoğlu, Erinç Karataş","doi":"10.7717/peerj-cs.3110","DOIUrl":null,"url":null,"abstract":"<p><p>Course and instructor evaluations (CIE) are essential tools for assessing educational quality in higher education. However, traditional CIE systems often suffer from inconsistencies between structured responses and open-ended feedback, leading to unreliable insights and increased administrative workload. This study suggests a modular system to address these challenges, leveraging sentiment analysis and inconsistency detection to enhance the reliability and efficiency of CIE processes.</p><p><strong>Background: </strong>Improving the reliability of CIE data is crucial for informed decision-making in higher education. Existing methods fail to address discrepancies between numerical scores and textual feedback, resulting in misleading evaluations. This study proposes a system to identify and exclude inconsistent data, providing more reliable insights.</p><p><strong>Methods: </strong>Using the Design Science Research Methodology (DSRM), a system architecture was developed with five modules: data collection, preprocessing, sentiment analysis, inconsistency detection, and reporting. A dataset of 13,651 anonymized Turkish CIE records was used to train and evaluate machine learning algorithms, including support vector machines, naive Bayes, random forest, decision trees, K-nearest neighbors, and OpenAI's GPT-4 Turbo Preview model. Sentiment analysis results from open-ended responses were compared with structured responses to identify inconsistencies.</p><p><strong>Results: </strong>The GPT-4 Turbo Preview model outperformed traditional algorithms, achieving 85% accuracy, 88% precision, and 95% recall. Analysis of a prototype system applied to 431 CIEs identified a 37% inconsistency rate. By excluding inconsistent data, the system generated reliable reports with actionable insights for course and instructor performance. The purpose of this study is to design and evaluate a new system using the Design Science Research (DSR) approach to enhance the accuracy and reliability of course evaluation processes employed in higher education institutions. The modular system effectively addresses inconsistencies in CIE processes, offering a scalable and adaptable solution for higher education institutions. By integrating advanced machine learning techniques, the system enhances the accuracy and reliability of evaluation reports, supporting data-driven decision-making. Future work will focus on refining sentiment analysis for neutral comments and broadening the system's applicability to diverse educational contexts. 
This innovative approach represents a significant advancement in leveraging technology to improve educational quality.</p>","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"11 ","pages":"e3110"},"PeriodicalIF":2.5000,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12453702/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PeerJ Computer Science","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.7717/peerj-cs.3110","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Course and instructor evaluations (CIE) are essential tools for assessing educational quality in higher education. However, traditional CIE systems often suffer from inconsistencies between structured responses and open-ended feedback, leading to unreliable insights and increased administrative workload. This study proposes a modular system to address these challenges, leveraging sentiment analysis and inconsistency detection to enhance the reliability and efficiency of CIE processes.

Background: Improving the reliability of CIE data is crucial for informed decision-making in higher education. Existing methods fail to address discrepancies between numerical scores and textual feedback, resulting in misleading evaluations. This study proposes a system to identify and exclude inconsistent data, providing more reliable insights.

Methods: Using the Design Science Research Methodology (DSRM), a system architecture was developed with five modules: data collection, preprocessing, sentiment analysis, inconsistency detection, and reporting. A dataset of 13,651 anonymized Turkish CIE records was used to train and evaluate machine learning algorithms, including support vector machines, naive Bayes, random forest, decision trees, K-nearest neighbors, and OpenAI's GPT-4 Turbo Preview model. Sentiment analysis results from open-ended responses were compared with structured responses to identify inconsistencies.
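The abstract does not include the authors' implementation; the following minimal Python sketch illustrates how the inconsistency-detection step described above might compare the two signals, assuming 5-point Likert items and a sentiment classifier that returns positive/negative/neutral labels. The function names, thresholds, and stub classifier are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the inconsistency-detection module: sentiment inferred
# from an open-ended comment is compared with the sentiment implied by the
# structured Likert responses. Thresholds and labels are assumptions.
from statistics import mean
from typing import Callable

def likert_sentiment(scores: list[int]) -> str:
    """Map a set of 1-5 Likert answers to a coarse sentiment bucket."""
    avg = mean(scores)
    if avg >= 3.5:   # assumed cutoff for a "positive" evaluation
        return "positive"
    if avg <= 2.5:   # assumed cutoff for a "negative" evaluation
        return "negative"
    return "neutral"

def is_inconsistent(scores: list[int], comment: str,
                    classify: Callable[[str], str]) -> bool:
    """Flag a CIE record whose text sentiment contradicts its numeric scores.

    `classify` stands in for any of the trained models (SVM, naive Bayes, ...,
    or a GPT-4 prompt) returning 'positive' / 'negative' / 'neutral'.
    """
    text_sent = classify(comment)
    struct_sent = likert_sentiment(scores)
    # Neutral text is treated as compatible with either bucket here; the
    # abstract notes neutral comments as an open problem for future work.
    if "neutral" in (text_sent, struct_sent):
        return False
    return text_sent != struct_sent

# Example: high scores paired with a clearly negative comment -> inconsistent.
flag = is_inconsistent([5, 5, 4],
                       "The lectures were disorganized and unhelpful.",
                       classify=lambda t: "negative")  # stub classifier
print(flag)  # True
```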

Results: The GPT-4 Turbo Preview model outperformed the traditional algorithms, achieving 85% accuracy, 88% precision, and 95% recall. Applying a prototype of the system to 431 CIEs identified a 37% inconsistency rate. By excluding inconsistent data, the system generated reliable reports with actionable insights into course and instructor performance.

Conclusions: Using the Design Science Research (DSR) approach, this study designed and evaluated a new system to enhance the accuracy and reliability of course evaluation processes in higher education institutions. The modular system effectively addresses inconsistencies in CIE processes, offering a scalable and adaptable solution. By integrating advanced machine learning techniques, it improves the accuracy and reliability of evaluation reports and supports data-driven decision-making. Future work will focus on refining sentiment analysis for neutral comments and broadening the system's applicability to diverse educational contexts. This approach represents a significant step toward leveraging technology to improve educational quality.
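For concreteness, the small sketch below (not from the paper) shows how the figures reported in the Results fit together: accuracy, precision, and recall computed from a binary confusion matrix, plus the prototype's inconsistency rate. The confusion-matrix counts are invented so that they roughly reproduce the reported 85%/88%/95% values; the paper's actual confusion matrix is not given in the abstract.

```python
# Accuracy, precision, and recall from a binary confusion matrix, plus the
# prototype's inconsistency rate. Counts are illustrative assumptions; only
# the 431-record total and the 37% rate come from the paper.

def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    return (tp + tn) / (tp + tn + fp + fn)

# Invented counts chosen to roughly match the reported metrics.
tp, tn, fp, fn = 95, 7, 13, 5
print(f"precision={precision(tp, fp):.2f}  "   # 0.88
      f"recall={recall(tp, fn):.2f}  "          # 0.95
      f"accuracy={accuracy(tp, tn, fp, fn):.2f}")  # 0.85

# Prototype run: 37% of 431 evaluations flagged as inconsistent,
# i.e. roughly 159 records excluded before reporting.
total_cies, rate = 431, 0.37
print(f"flagged ≈ {round(total_cies * rate)} of {total_cies}")
```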

Source journal: PeerJ Computer Science (Computer Science: General Computer Science)
CiteScore: 6.10
Self-citation rate: 5.30%
Articles per year: 332
Review time: 10 weeks
Journal description: PeerJ Computer Science is an open access journal covering all subject areas in computer science, with the backing of a prestigious advisory board and more than 300 academic editors.