{"title":"项目评估实践与STEM博士生培养","authors":"Philip M. Reeves, J. Claydon, Glen Davenport","doi":"10.1108/sgpe-04-2021-0029","DOIUrl":null,"url":null,"abstract":"\nPurpose\nProgram evaluation stands as an evidence-based process that would allow institutions to document and improve the quality of graduate programs and determine how to respond to growing calls for aligning training models to economic realities. This paper aims to present the current state of evaluation in research-based doctoral programs in STEM fields.\n\n\nDesign/methodology/approach\nTo highlight the recent evaluative processes, the authors restricted the initial literature search to papers published in English between 2008 and 2019. As the authors were motivated by the shift at NIH, this review focuses on STEM programs, though papers on broader evaluation efforts were included as long as STEM-specific results could be identified. In total, 137 papers were included in the final review.\n\n\nFindings\nOnly nine papers presented an evaluation of a full program. Instead, papers focused on evaluating individual components of a graduate program, testing small interventions or examining existing national data sets. The review did not find any documents that focused on the continual monitoring of training quality.\n\n\nOriginality/value\nThis review can serve as a resource, encourage transparency and provide motivation for faculty and administrators to gather and use assessment data to improve training models. By understanding how existing evaluations are conducted and implemented, administrators can apply evidence-based methodologies to ensure the highest quality training to best prepare students.\n","PeriodicalId":42038,"journal":{"name":"Studies in Graduate and Postdoctoral Education","volume":" ","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2021-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Program evaluation practices and the training of PhD students in STEM\",\"authors\":\"Philip M. Reeves, J. Claydon, Glen Davenport\",\"doi\":\"10.1108/sgpe-04-2021-0029\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\nPurpose\\nProgram evaluation stands as an evidence-based process that would allow institutions to document and improve the quality of graduate programs and determine how to respond to growing calls for aligning training models to economic realities. This paper aims to present the current state of evaluation in research-based doctoral programs in STEM fields.\\n\\n\\nDesign/methodology/approach\\nTo highlight the recent evaluative processes, the authors restricted the initial literature search to papers published in English between 2008 and 2019. As the authors were motivated by the shift at NIH, this review focuses on STEM programs, though papers on broader evaluation efforts were included as long as STEM-specific results could be identified. In total, 137 papers were included in the final review.\\n\\n\\nFindings\\nOnly nine papers presented an evaluation of a full program. Instead, papers focused on evaluating individual components of a graduate program, testing small interventions or examining existing national data sets. 
The review did not find any documents that focused on the continual monitoring of training quality.\\n\\n\\nOriginality/value\\nThis review can serve as a resource, encourage transparency and provide motivation for faculty and administrators to gather and use assessment data to improve training models. By understanding how existing evaluations are conducted and implemented, administrators can apply evidence-based methodologies to ensure the highest quality training to best prepare students.\\n\",\"PeriodicalId\":42038,\"journal\":{\"name\":\"Studies in Graduate and Postdoctoral Education\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2021-11-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Studies in Graduate and Postdoctoral Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1108/sgpe-04-2021-0029\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Studies in Graduate and Postdoctoral Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/sgpe-04-2021-0029","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Program evaluation practices and the training of PhD students in STEM
Purpose
Program evaluation is an evidence-based process through which institutions can document and improve the quality of graduate programs and determine how to respond to growing calls to align training models with economic realities. This paper aims to present the current state of evaluation in research-based doctoral programs in STEM fields.
Design/methodology/approach
To capture recent evaluative practices, the authors restricted the initial literature search to papers published in English between 2008 and 2019. Because the review was motivated by the shift at the National Institutes of Health (NIH), it focuses on STEM programs, though papers on broader evaluation efforts were included as long as STEM-specific results could be identified. In total, 137 papers were included in the final review.
Findings
Only nine papers presented an evaluation of a full program. Instead, most papers evaluated individual components of a graduate program, tested small interventions or examined existing national data sets. The review did not find any documents focused on the continual monitoring of training quality.
Originality/value
This review can serve as a resource, encourage transparency and motivate faculty and administrators to gather and use assessment data to improve training models. By understanding how existing evaluations are conducted and implemented, administrators can apply evidence-based methodologies to ensure high-quality training that best prepares students.