Threats of common method variance in student assessment of instruction instruments

John Garger, Paul H. Jacques, Brian W. Gastle, C. Connolly
{"title":"常用方法差异对学生评价教学工具的威胁","authors":"John Garger, Paul H. Jacques, Brian W. Gastle, C. Connolly","doi":"10.1108/HEED-05-2018-0012","DOIUrl":null,"url":null,"abstract":"\nPurpose\nThe purpose of this paper is to demonstrate that common method variance, specifically single-source bias, threatens the validity of a university-created student assessment of instructor instrument, suggesting that decisions made from these assessments are inherently flawed or skewed. Single-source bias leads to generalizations about assessments that might influence the ability of raters to separate multiple behaviors of an instructor.\n\n\nDesign/methodology/approach\nExploratory factor analysis, nested confirmatory factor analysis and within-and-between analysis are used to assess a university-developed, proprietary student assessment of instructor instrument to determine whether a hypothesized factor structure is identifiable. The instrument was developed over a three-year period by a university-mandated committee.\n\n\nFindings\nFindings suggest that common method variance, specifically single-source bias, resulted in the inability to identify hypothesized constructs statistically. Additional information is needed to identify valid instruments and an effective collection method for assessment.\n\n\nPractical implications\nInstitutions are not guaranteed valid or useful instruments even if they invest significant time and resources to produce one. Without accurate instrumentation, there is insufficient information to assess constructs for teaching excellence. More valid measurement criteria can result from using multiple methods, altering collection times and educating students to distinguish multiple traits and behaviors of individual instructors more accurately.\n\n\nOriginality/value\nThis paper documents the three-year development of a university-wide student assessment of instructor instrument and carries development through to examining the psychometric properties and appropriateness of using this instrument to evaluate instructors.\n","PeriodicalId":32842,"journal":{"name":"Higher Education Evaluation and Development","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2019-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1108/HEED-05-2018-0012","citationCount":"5","resultStr":"{\"title\":\"Threats of common method variance in student assessment of instruction instruments\",\"authors\":\"John Garger, Paul H. Jacques, Brian W. Gastle, C. Connolly\",\"doi\":\"10.1108/HEED-05-2018-0012\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\nPurpose\\nThe purpose of this paper is to demonstrate that common method variance, specifically single-source bias, threatens the validity of a university-created student assessment of instructor instrument, suggesting that decisions made from these assessments are inherently flawed or skewed. Single-source bias leads to generalizations about assessments that might influence the ability of raters to separate multiple behaviors of an instructor.\\n\\n\\nDesign/methodology/approach\\nExploratory factor analysis, nested confirmatory factor analysis and within-and-between analysis are used to assess a university-developed, proprietary student assessment of instructor instrument to determine whether a hypothesized factor structure is identifiable. 
The instrument was developed over a three-year period by a university-mandated committee.\\n\\n\\nFindings\\nFindings suggest that common method variance, specifically single-source bias, resulted in the inability to identify hypothesized constructs statistically. Additional information is needed to identify valid instruments and an effective collection method for assessment.\\n\\n\\nPractical implications\\nInstitutions are not guaranteed valid or useful instruments even if they invest significant time and resources to produce one. Without accurate instrumentation, there is insufficient information to assess constructs for teaching excellence. More valid measurement criteria can result from using multiple methods, altering collection times and educating students to distinguish multiple traits and behaviors of individual instructors more accurately.\\n\\n\\nOriginality/value\\nThis paper documents the three-year development of a university-wide student assessment of instructor instrument and carries development through to examining the psychometric properties and appropriateness of using this instrument to evaluate instructors.\\n\",\"PeriodicalId\":32842,\"journal\":{\"name\":\"Higher Education Evaluation and Development\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-05-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1108/HEED-05-2018-0012\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Higher Education Evaluation and Development\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1108/HEED-05-2018-0012\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Higher Education Evaluation and Development","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/HEED-05-2018-0012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Purpose
The purpose of this paper is to demonstrate that common method variance, specifically single-source bias, threatens the validity of a university-created student assessment of instructor instrument, suggesting that decisions made from these assessments are inherently flawed or skewed. Single-source bias leads to generalizations about assessments that might influence the ability of raters to separate multiple behaviors of an instructor.

Design/methodology/approach
Exploratory factor analysis, nested confirmatory factor analysis and within-and-between analysis are used to assess a university-developed, proprietary student assessment of instructor instrument to determine whether a hypothesized factor structure is identifiable. The instrument was developed over a three-year period by a university-mandated committee.

Findings
Findings suggest that common method variance, specifically single-source bias, resulted in the inability to identify hypothesized constructs statistically. Additional information is needed to identify valid instruments and an effective collection method for assessment.

Practical implications
Institutions are not guaranteed valid or useful instruments even if they invest significant time and resources to produce one. Without accurate instrumentation, there is insufficient information to assess constructs for teaching excellence. More valid measurement criteria can result from using multiple methods, altering collection times and educating students to distinguish multiple traits and behaviors of individual instructors more accurately.

Originality/value
This paper documents the three-year development of a university-wide student assessment of instructor instrument and carries development through to examining the psychometric properties and appropriateness of using this instrument to evaluate instructors.
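The central statistical claim, that single-source ratings inject so much shared method variance that the hypothesized factors cannot be recovered, can be illustrated with a standard first-pass diagnostic. The sketch below is not the authors' analysis; it runs Harman's single-factor test and a simple exploratory factor analysis on simulated student-rating data, and every detail in it (item labels, sample size, the assumed three-factor structure) is a placeholder for demonstration.

# Illustrative sketch only: Harman's single-factor test plus a basic EFA
# on simulated student-rating data. All names and parameters here are
# assumptions for demonstration, not the instrument analyzed in the paper.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(42)

# Simulate 500 students rating 9 items. A strong shared "method" factor
# mimics single-source bias blurring the three intended constructs.
n_students, n_items = 500, 9
method_factor = rng.normal(size=(n_students, 1))        # common rater effect
trait_factors = rng.normal(size=(n_students, 3))        # three intended constructs
loading_map = np.kron(np.eye(3), np.ones((1, 3)))       # items 1-3 load on trait 1, etc.
items = (0.8 * method_factor
         + trait_factors @ loading_map
         + 0.5 * rng.normal(size=(n_students, n_items)))
df = pd.DataFrame(items, columns=[f"item{i+1}" for i in range(n_items)])

# Harman's single-factor test: if one unrotated component explains the
# bulk of the variance, common method variance is a plausible threat.
first_share = PCA().fit(df).explained_variance_ratio_[0]
print(f"Variance explained by the first factor: {first_share:.1%}")

# EFA with the hypothesized number of factors; under heavy single-source
# bias the loadings fail to separate into the intended constructs.
efa = FactorAnalysis(n_components=3, rotation="varimax").fit(df)
loadings = pd.DataFrame(efa.components_.T, index=df.columns,
                        columns=["F1", "F2", "F3"])
print(loadings.round(2))

In data generated this way the first unrotated factor typically accounts for well over half of the total variance, which mirrors the paper's finding that the intended constructs could not be identified statistically from single-source ratings.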