Program evaluation practices and the training of PhD students in STEM

IF 1.8 Q2 EDUCATION & EDUCATIONAL RESEARCH
Philip M. Reeves, J. Claydon, Glen Davenport
{"title":"Program evaluation practices and the training of PhD students in STEM","authors":"Philip M. Reeves, J. Claydon, Glen Davenport","doi":"10.1108/sgpe-04-2021-0029","DOIUrl":null,"url":null,"abstract":"\nPurpose\nProgram evaluation stands as an evidence-based process that would allow institutions to document and improve the quality of graduate programs and determine how to respond to growing calls for aligning training models to economic realities. This paper aims to present the current state of evaluation in research-based doctoral programs in STEM fields.\n\n\nDesign/methodology/approach\nTo highlight the recent evaluative processes, the authors restricted the initial literature search to papers published in English between 2008 and 2019. As the authors were motivated by the shift at NIH, this review focuses on STEM programs, though papers on broader evaluation efforts were included as long as STEM-specific results could be identified. In total, 137 papers were included in the final review.\n\n\nFindings\nOnly nine papers presented an evaluation of a full program. Instead, papers focused on evaluating individual components of a graduate program, testing small interventions or examining existing national data sets. The review did not find any documents that focused on the continual monitoring of training quality.\n\n\nOriginality/value\nThis review can serve as a resource, encourage transparency and provide motivation for faculty and administrators to gather and use assessment data to improve training models. By understanding how existing evaluations are conducted and implemented, administrators can apply evidence-based methodologies to ensure the highest quality training to best prepare students.\n","PeriodicalId":42038,"journal":{"name":"Studies in Graduate and Postdoctoral Education","volume":" ","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2021-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Studies in Graduate and Postdoctoral Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/sgpe-04-2021-0029","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0

Abstract

Purpose
Program evaluation stands as an evidence-based process that allows institutions to document and improve the quality of graduate programs and to determine how to respond to growing calls for aligning training models with economic realities. This paper aims to present the current state of evaluation in research-based doctoral programs in STEM fields.

Design/methodology/approach
To highlight recent evaluative processes, the authors restricted the initial literature search to papers published in English between 2008 and 2019. Because the authors were motivated by the shift at NIH, the review focuses on STEM programs, although papers on broader evaluation efforts were included as long as STEM-specific results could be identified. In total, 137 papers were included in the final review.

Findings
Only nine papers presented an evaluation of a full program. Instead, papers focused on evaluating individual components of a graduate program, testing small interventions or examining existing national data sets. The review did not find any documents that focused on the continual monitoring of training quality.

Originality/value
This review can serve as a resource, encourage transparency and motivate faculty and administrators to gather and use assessment data to improve training models. By understanding how existing evaluations are conducted and implemented, administrators can apply evidence-based methodologies to ensure the highest quality training and best prepare students.
Source journal
Studies in Graduate and Postdoctoral Education (Education & Educational Research)
CiteScore: 2.90
Self-citation rate: 9.10%
Articles published: 17