{"title":"Hot or Not: The Role of Instructor Quality and Gender on the Formation of Positive Illusions among Students Using RateMyProfessors.com.","authors":"Katherine C. Theyson","doi":"10.7275/PXJD-0K69","DOIUrl":"https://doi.org/10.7275/PXJD-0K69","url":null,"abstract":"Existing literature indicates that physical attractiveness positively affects variables such as income, perceived employee quality and performance evaluations. Similarly, in the academic arena, studies indicate instructors who are better looking receive better teaching evaluations from their students. Previous analysis of the website RateMyProfessors.com confirms this, indicating that instructors who are viewed by students as “hot” receive higher “quality” ratings than those who are “not.” However, psychology literature indicates that perceptions of attractiveness are influenced by positive illusions, a property whereby individuals with higher quality relationships view each other more positively than objective observers. This paper uses data from Rate My Professors to investigate the existence of positive illusions in the instructor-student relationship. It finds that positive illusions exist, suggesting that existing literature overestimates the premium associated with physical attractiveness. Furthermore, the source of these illusions varies significantly between male and female instructors with important implications for the role of gender in workplace evaluations, hiring, promotion, and tenure.","PeriodicalId":20361,"journal":{"name":"Practical Assessment, Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88485185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"“Rational” Observational Systems of Educational Accountability and Reform","authors":"A. Amrein-Beardsley, J. Holloway-Libell, A. Cirell, A. Hays, Kathryn P. Chapman","doi":"10.7275/TD4C-TR89","DOIUrl":"https://doi.org/10.7275/TD4C-TR89","url":null,"abstract":"There is something incalculable about teacher expertise and whether it can be observed, detected, quantified, and as per current educational policies, used as an accountability tool to hold America’s public school teachers accountable for that which they do (or do not do well). In this commentary, authors (all of whom are former public school teachers) argue that rubric-based teacher observational systems, developed to assess the extent to which teachers adapt and follow sets of rubric-based rules, might actually constrain teacher expertise. Moreover, authors frame their comments using the Dreyfus Model (1980, 1986) to illustrate how observational systems and the rational conceptions on which they are based might be stifling educational progress and reform.","PeriodicalId":20361,"journal":{"name":"Practical Assessment, Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85132465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Psychometric Changes on Item Difficulty Due to Item Review by Examinees.","authors":"E. Papanastasiou","doi":"10.7275/JCYV-K456","DOIUrl":"https://doi.org/10.7275/JCYV-K456","url":null,"abstract":"If good measurement depends in part on the estimation of accurate item characteristics, it is essential that test developers become aware of discrepancies that may exist on the item parameters before and after item review. The purpose of this study was to examine the answer changing patterns of students while taking paper-and-pencil multiple choice exams, and to examine how these changes affect the estimation of item difficulty parameters. The results of this study have shown that item review by examinees does produce some changes to the examinee ability estimates and to the item difficulty parameters. In addition, these effects are more pronounced in shorter tests than in longer tests. In turn, these small changes produce larger effects when estimating the changes in the information values of each student’s test score.","PeriodicalId":20361,"journal":{"name":"Practical Assessment, Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79035884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Linear Logistic Test Modeling with R.","authors":"Purya Baghaei, K. Kubinger","doi":"10.7275/8F33-HZ58","DOIUrl":"https://doi.org/10.7275/8F33-HZ58","url":null,"abstract":"The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The applications of the model in test validation, hypothesis testing, cross-cultural studies of test bias, rule-based item generation, and investigating construct irrelevant factors which contribute to item difficulty are explained. The model is applied to an English as a foreign language reading comprehension test and the results are discussed. An important aspect of validity theory is ‘explaining’ the mental processes that are triggered when test items are solved. This is in contrast to ‘prediction’ which is based on the correlation of tests with external criteria (Messick, 1989, Embretson, 1998). cognitive","PeriodicalId":20361,"journal":{"name":"Practical Assessment, Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76575108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modelling the Preferences of Students for Alternative Assignment Designs Using the Discrete Choice Experiment Methodology.","authors":"B. Kennelly, D. Flannery, John Considine, E. Doherty, S. Hynes","doi":"10.7275/Y9R2-NC06","DOIUrl":"https://doi.org/10.7275/Y9R2-NC06","url":null,"abstract":"This paper outlines how a discrete choice experiment (DCE) can be used to learn more about how students are willing to trade off various features of assignments such as the nature and timing of feedback and the method used to submit assignments. A DCE identifies plausible levels of the key attributes of a good or service and then presents the respondent with alternative bundles of these attributes and their levels and asks the respondent to choose between particular bundles. We report results from a DCE we conducted with undergraduate business students regarding their preferences for assignment systems. We find that the most important features of assignments are how relevant the assignments are for exam preparation and the nature of the feedback that students receive. We also find that students generally prefer online to paper assignments. We argue that the DCE approach has a lot of potential in education research.","PeriodicalId":20361,"journal":{"name":"Practical Assessment, Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76952397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Step-by-Step Guide to Propensity Score Matching in R","authors":"Justus J. Randolph, Kristina N. Falbe, Austin Kureethara Manuel, J. Balloun","doi":"10.7275/N3PV-TX27","DOIUrl":"https://doi.org/10.7275/N3PV-TX27","url":null,"abstract":"Propensity score matching is a statistical technique in which a treatment case is matched with one or more control cases based on each case’s propensity score. This matching can help strengthen causal arguments in quasi-experimental and observational studies by reducing selection bias. In this article we concentrate on how to conduct propensity score matching using an example from the field of education. Our goal is to provide information that will bring propensity score matching within the reach of research and evaluation practitioners.","PeriodicalId":20361,"journal":{"name":"Practical Assessment, Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76666505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sample Size Determination for Regression Models Using Monte Carlo Methods in R","authors":"A Alexander Beaujean","doi":"10.7275/D5PV-8V28","DOIUrl":"https://doi.org/10.7275/D5PV-8V28","url":null,"abstract":"Copyright is retained by the first or sole author, who grants right of first publication to the Practical Assessment, Research & Evaluation. Permission is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. PARE has the right to authorize third party reproduction of this article in print, electronic and database forms.","PeriodicalId":20361,"journal":{"name":"Practical Assessment, Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87047813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of Sample Size and Variability on the Power and Type I Error Rates of Equivalence Tests: A Simulation Study.","authors":"Shayna A. Rusticus, C. Lovato","doi":"10.7275/4S9M-4E81","DOIUrl":"https://doi.org/10.7275/4S9M-4E81","url":null,"abstract":"The question of equivalence between two or more groups is frequently of interest to many applied researchers. Equivalence testing is a statistical method designed to provide evidence that groups are comparable by demonstrating that the mean differences found between groups are small enough that they are considered practically unimportant. Few recommendations exist regarding the appropriate use of these tests under varying data conditions. A simulation study was conducted to examine the power and Type I error rates of the confidence interval approach to equivalence testing under conditions of equal and non-equal sample sizes and variability when comparing two and three groups. It was found that equivalence testing performs best when sample sizes are equal. The overall power of the test is strongly influenced by the size of the sample, the amount of variability in the sample, and the size of the difference in the population. Guidelines are provided regarding the use of equivalence tests when analyzing non-optimal data.","PeriodicalId":20361,"journal":{"name":"Practical Assessment, Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77718894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating Unbiased Treatment Effects in Education Using a Regression Discontinuity Design.","authors":"William C. Smith","doi":"10.7275/7911-VD52","DOIUrl":"https://doi.org/10.7275/7911-VD52","url":null,"abstract":"The ability of regression discontinuity (RD) designs to provide an unbiased treatment effect while overcoming the ethical concerns plagued by Random Control Trials (RCTs) make it a valuable and useful approach in education evaluation. RD is the only explicitly recognized quasi-experimental approach identified by the Institute of Education Statistics to meet the prerequisites of a causal relationship. Unfortunately, the statistical complexity of the RD design has limited its application in education research. This article provides a less technical introduction to RD for education researchers and practitioners. Using visual analysis to aide conceptual understanding, the article walks readers through the essential steps of a Sharp RD design using hypothetical, but realistic, district intervention data and provides additional resources for further exploration.","PeriodicalId":20361,"journal":{"name":"Practical Assessment, Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82823717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quasi-Experiments in Schools: The Case for Historical Cohort Control Groups","authors":"Tamara Walser","doi":"10.7275/17HJ-1K58","DOIUrl":"https://doi.org/10.7275/17HJ-1K58","url":null,"abstract":"There is increased emphasis on using experimental and quasi-experimental methods to evaluate educational programs; however, educational evaluators and school leaders are often faced with challenges when implementing such designs in educational settings. Use of a historical cohort control group design provides a viable option for conducting quasi-experiments in school-based outcome evaluation. A cohort is a successive group that goes through some experience together, such as a grade level or a training program. A historical cohort comparison group is a cohort group selected from pre-treatment archival data and matched to a subsequent cohort currently receiving a treatment. Although prone to the same threats to study validity as any quasi-experiment, issues related to selection, history, and maturation can be particularly challenging. However, use of a historical cohort control group can reduce noncomparability of treatment and control conditions through local, focal matching. In addition, a historical cohort control group design can alleviate concerns about denying program access to students in order to form a control group, minimize resource requirements and disruption to school routines, and make use of archival data schools and school districts collect and find meaningful.","PeriodicalId":20361,"journal":{"name":"Practical Assessment, Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87229956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}