Practical Assessment, Research and Evaluation: Latest Articles

A State Level Analysis of the Marzano Teacher Evaluation Model: Predicting Teacher Value-Added Measures with Observation Scores
Practical Assessment, Research and Evaluation Pub Date : 2019-07-01 DOI: 10.7275/CC5B-6J43
Lindsey Devers Basileo, Michael Toth
Citations: 2
Generalizability Theory in R
Practical Assessment, Research and Evaluation Pub Date : 2019-07-01 DOI: 10.7275/5065-GC10
Alan Huebner, Marissa Lucht
Citations: 15
Overview and Illustration of Bayesian Confirmatory Factor Analysis with Ordinal Indicators
Practical Assessment, Research and Evaluation Pub Date : 2019-05-01 DOI: 10.7275/VK6G-0075
John M Taylor
Citations: 8
Causal Inference Methods for Selection on Observed and Unobserved Factors: Propensity Score Matching, Heckit Models, and Instrumental Variable Estimation
Practical Assessment, Research and Evaluation Pub Date : 2019-04-01 DOI: 10.7275/7tgr-xt91
P. Scott
Abstract: Two approaches to causal inference in the presence of non-random assignment are presented: the Propensity Score approach, which pseudo-randomizes by balancing groups on their observed propensity to be in treatment, and the Endogenous Treatment Effects approach, which uses systems of equations to explicitly model selection into treatment. The three methods based on these approaches compared in this study are Heckit models, Propensity Score Matching, and Instrumental Variable models. A simulation demonstrates these models under different specifications of selection observables, selection unobservables, and outcome unobservables, in terms of bias in average treatment effect estimates and the size of standard errors. Results show that in most cases Heckit models produce the least bias and the highest standard errors in average treatment effect estimates. Propensity Score Matching produces the least bias when selection observables are mildly correlated with selection unobservables and outcome unobservables, with outcome and selection unobservables being uncorrelated. Instrumental Variable estimation produces the least bias in two cases: (1) when selection unobservables are correlated with both selection observables and outcome unobservables while selection observables are unrelated to outcome unobservables; and (2) when there are no relations between selection observables, selection unobservables, and outcome unobservables.
Citations: 7
Determining Item Screening Criteria Using Cost-Benefit Analysis
Practical Assessment, Research and Evaluation Pub Date : 2019-04-01 DOI: 10.7275/XSQM-8839
Bozhidar M. Bashkov, Jerome C. Clauser
Citations: 5
A Plot for the Visualization of Missing Value Patterns in Multivariate Data
Practical Assessment, Research and Evaluation Pub Date : 2019-01-01 DOI: 10.7275/94RA-1Y55
P. Valero-Mora, María F. Rodrigo, M. Sanchez, J. Sanmartín
Citations: 2
Using Rater Cognition to Improve Generalizability of an Assessment of Scientific Argumentation
Practical Assessment, Research and Evaluation Pub Date : 2019-01-01 DOI: 10.7275/EY9D-P954
Katrina Borowiec, Courtney Castle
Abstract: Rater cognition or "think-aloud" studies have historically been used to enhance rater accuracy and consistency in writing and language assessments. As assessments are developed for new, complex constructs from the Next Generation Science Standards (NGSS), the present study illustrates the utility of extending "think-aloud" studies to science assessment. The study focuses on the development of rubrics for scientific argumentation, one of the NGSS Science and Engineering Practices. The initial rubrics were modified based on cognitive interviews with five raters. Next, a group of four new raters scored responses using the original and revised rubrics. A psychometric analysis measured the change in interrater reliability, accuracy, and generalizability (using a generalizability study, or "g-study") between the original and revised rubrics. Interrater reliability, accuracy, and generalizability all increased with the rubric modifications. Furthermore, follow-up interviews with the second group of raters indicated that most raters preferred the revised rubric. These findings illustrate that cognitive interviews with raters can enhance rubric usability and generalizability when assessing scientific argumentation, thereby improving assessment validity.
Citations: 3
Addressing the Shortcomings of Traditional Multiple-Choice Tests: Subset Selection Without Mark Deductions
Practical Assessment, Research and Evaluation Pub Date : 2018-12-21 DOI: 10.7275/HQ8A-F262
Lucia Otoyo, M. Bush
Abstract: This article presents the results of an empirical study of "subset selection" tests, a generalisation of traditional multiple-choice tests in which test takers can express partial knowledge. Similar previous studies have mostly been supportive of subset selection, but the deduction of marks for incorrect responses has been a cause for concern. For the present study, a novel marking scheme based on Akeroyd's "dual response system" was used instead. In Akeroyd's system, which assumes that every question has four answer options, test takers can split their single 100% bet on one answer option into two 50% bets by selecting two options, or into four 25% bets by selecting no options. To achieve full subset selection, this idea was extended so that test takers could also split their 100% bet equally between three options. The results indicate increased test reliability (in the sense of measurement consistency) and increased satisfaction on the part of the test takers. Furthermore, since the novel marking scheme does not in principle lead to either inflated or deflated marks, educators who currently use traditional multiple-choice tests can easily switch to subset selection tests.
Citations: 3
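The marking scheme described in the abstract above is simple to state in code. The sketch below assumes four answer options per question and awards 1/k marks when the key is among the k selected options; following the abstract, an empty selection is read as a 25% bet on every option. Option labels and function names are illustrative, not from the paper.

```python
# Sketch of the subset-selection marking scheme (Akeroyd's dual
# response system, extended to subsets of any size). A subset of
# size k splits one 100% bet into k equal bets of 1/k; selecting
# nothing bets 25% on every option. No marks are ever deducted.

N_OPTIONS = 4

def mark(selected, correct):
    """Return the mark for one question given the selected option set."""
    if not selected:                 # abstain: a 1/4 bet on each option
        return 1.0 / N_OPTIONS
    if correct in selected:          # 1/k of the bet was on the key
        return 1.0 / len(selected)
    return 0.0                       # key not covered: zero, no penalty

print(mark({"b"}, "b"))              # full knowledge -> 1.0
print(mark({"a", "b"}, "b"))         # partial knowledge -> 0.5
print(mark(set(), "b"))              # complete uncertainty -> 0.25
print(mark({"a", "c"}, "b"))         # wrong subset -> 0.0
```

Because guessing blindly across all four options and abstaining both have an expected mark of 0.25, the scheme rewards narrowing the options without ever penalising an attempt, which is the property the abstract highlights.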
The Ensemble and Model Comparison Approaches for Big Data Analytics in Social Sciences
Practical Assessment, Research and Evaluation Pub Date : 2018-11-01 DOI: 10.7275/CHAW-Y360
Chong Ho Alex Yu, Hyun Seo Lee, Emily Lara, Siyan Gan
Abstract: Big data analytics are prevalent in fields like business, engineering, public health, and the physical sciences, but social scientists have been slower than their peers in other fields to adopt this methodology. One major reason is that traditional statistical procedures are typically not suitable for the analysis of large and complex data sets. Although data mining techniques could alleviate this problem, it is often unclear to social science researchers which option is the most suitable for a particular research problem. The main objective of this paper is to illustrate how a model comparison of two popular ensemble methods, boosting and bagging, can yield an improved explanatory model.
Citations: 4
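To make the bagging half of that comparison concrete, here is a toy sketch of bootstrap aggregating with a hypothetical one-feature decision stump as the base learner. Everything below (data, learner, names) is illustrative and not from the paper; boosting would differ by fitting learners sequentially on reweighted examples rather than on independent bootstrap resamples.

```python
import random

# Toy bagging (bootstrap aggregating) sketch. Each base learner is a
# one-feature stump whose threshold is the midpoint of the two class
# means; the ensemble predicts by majority vote over the stumps.

def fit_stump(data):
    """Learn a threshold at the midpoint of the per-class feature means."""
    xs0 = [x for x, y in data if y == 0]
    xs1 = [x for x, y in data if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def bagged_predict(stumps, x):
    """Majority vote: each stump votes 1 if x exceeds its threshold."""
    votes = sum(1 for t in stumps if x > t)
    return 1 if votes * 2 > len(stumps) else 0

def bag(data, n_models=25, seed=0):
    """Fit each stump on a bootstrap resample of the training data."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]
        while len({y for _, y in sample}) < 2:   # need both classes present
            sample = [rng.choice(data) for _ in data]
        stumps.append(fit_stump(sample))
    return stumps

# Hypothetical one-feature training set: (feature, label) pairs.
data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
stumps = bag(data)
print([bagged_predict(stumps, x) for x in (0.2, 0.85)])  # prints [0, 1]
```

The point of bagging is that averaging many learners fit on resampled data reduces the variance of an unstable base learner; real analyses would use richer learners such as decision trees.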
An Effective Rubric Norming Process
Practical Assessment, Research and Evaluation Pub Date : 2018-09-01 DOI: 10.7275/ERF8-CA22
K. Schoepp, M. Danaher, A. A. Kranov
Abstract: Within higher education, rubric use is expanding. Whereas some years ago the topic of rubrics may have been of interest only to faculty in colleges of education, in recent years the focus on teaching and learning and the emphasis from accrediting bodies have elevated the importance of rubrics across disciplines and types of assessment. One key aspect of successfully implementing a shared rubric is the process known as norming, calibrating, or moderating, an oft-neglected area in the rubric literature. Norming should be a collaborative process built around knowledge of the rubric and meaningful discussion leading to evidence-driven consensus, but actual examples of norming are rarely available to university faculty. This paper describes the steps of a successful consensus-driven norming process in higher education using one particular rubric, the Computing Professional Skills Assessment (CPSA): 1) document preparation; 2) rubric review; 3) initial reading and scoring of one learning outcome; 4) initial sharing and recording of results; 5) initial consensus development and adjusting of results; 6) initial reading and scoring of the remaining learning outcomes; 7) reading and scoring of the remaining transcripts; 8) sharing and recording of results; and 9) development of consensus and adjusting of results. Although used for the CPSA, this norming process is transferable to other rubrics where faculty collaborate on grading a shared assignment. It is most appropriate for higher education, where, more often than not, faculty independence requires consensus over directive.
Citations: 9