{"title":"我的考试公平吗?","authors":"Denis G. Dumas, Yixiao Dong, Daniel M. McNeish","doi":"10.1027/1015-5759/a000724","DOIUrl":null,"url":null,"abstract":"Abstract. The degree to which test scores can support justified and fair decisions about demographically diverse participants has been an important aspect of educational and psychological testing for millennia. In the last 30 years, this aspect of measurement has come to be known as consequential validity, and it has sparked scholarly debate as to how responsible psychometricians should be for the fairness of the tests they create and how the field might be able to quantify that fairness and communicate it to applied researchers and other stakeholders of testing programs. Here, we formulate a relatively simple-to-calculate ratio coefficient that is meant to capture how well the scores from a given test can predict a criterion free from the undue influence of student demographics. We posit three example calculations of this Consequential Validity Ratio (CVR): one where the CVR is quite strong, another where the CVR is more moderate, and a third where the CVR is weak. We provide preliminary suggestions for interpreting the CVR and discuss its utility in instances where new tests are being developed, tests are being adapted to a new population, or the fairness of an established test has become an empirical question.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":"52 1","pages":""},"PeriodicalIF":3.2000,"publicationDate":"2022-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"How Fair Is My Test?\",\"authors\":\"Denis G. Dumas, Yixiao Dong, Daniel M. McNeish\",\"doi\":\"10.1027/1015-5759/a000724\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract. The degree to which test scores can support justified and fair decisions about demographically diverse participants has been an important aspect of educational and psychological testing for millennia. In the last 30 years, this aspect of measurement has come to be known as consequential validity, and it has sparked scholarly debate as to how responsible psychometricians should be for the fairness of the tests they create and how the field might be able to quantify that fairness and communicate it to applied researchers and other stakeholders of testing programs. Here, we formulate a relatively simple-to-calculate ratio coefficient that is meant to capture how well the scores from a given test can predict a criterion free from the undue influence of student demographics. We posit three example calculations of this Consequential Validity Ratio (CVR): one where the CVR is quite strong, another where the CVR is more moderate, and a third where the CVR is weak. 
We provide preliminary suggestions for interpreting the CVR and discuss its utility in instances where new tests are being developed, tests are being adapted to a new population, or the fairness of an established test has become an empirical question.\",\"PeriodicalId\":48018,\"journal\":{\"name\":\"European Journal of Psychological Assessment\",\"volume\":\"52 1\",\"pages\":\"\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2022-08-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Journal of Psychological Assessment\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1027/1015-5759/a000724\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"PSYCHOLOGY, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Journal of Psychological Assessment","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1027/1015-5759/a000724","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PSYCHOLOGY, APPLIED","Score":null,"Total":0}
Abstract. The degree to which test scores can support justified and fair decisions about demographically diverse participants has been an important aspect of educational and psychological testing for millennia. In the last 30 years, this aspect of measurement has come to be known as consequential validity, and it has sparked scholarly debate as to how responsible psychometricians should be for the fairness of the tests they create and how the field might be able to quantify that fairness and communicate it to applied researchers and other stakeholders of testing programs. Here, we formulate a relatively simple-to-calculate ratio coefficient that is meant to capture how well the scores from a given test can predict a criterion free from the undue influence of student demographics. We posit three example calculations of this Consequential Validity Ratio (CVR): one where the CVR is quite strong, another where the CVR is more moderate, and a third where the CVR is weak. We provide preliminary suggestions for interpreting the CVR and discuss its utility in instances where new tests are being developed, tests are being adapted to a new population, or the fairness of an established test has become an empirical question.
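The full article defines the CVR formally; the abstract only sketches its purpose. Purely as an illustration of the general idea, and not the authors' published formula, the sketch below shows one way a predictive-fairness ratio of this kind could be computed: the criterion variance explained by the test score alone, divided by the variance explained once a demographic indicator is added to the model. All variable names and the simulated data are hypothetical.

```python
# Hypothetical illustration only: NOT the CVR formula from Dumas, Dong, & McNeish (2022).
# It expresses "prediction of a criterion free from undue demographic influence"
# as a ratio of explained variance from two ordinary least-squares fits.
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit of y on the columns of X (intercept added automatically)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(2022)
n = 500
group = rng.integers(0, 2, n).astype(float)       # hypothetical demographic indicator
test = rng.normal(50, 10, n) + 3 * group          # simulated test scores
criterion = 0.6 * test + rng.normal(0, 8, n)      # simulated criterion (e.g., later GPA)

r2_test_only = r_squared(test[:, None], criterion)                     # test score alone
r2_with_group = r_squared(np.column_stack([test, group]), criterion)   # test + demographics

# A ratio near 1 suggests demographics add little predictive power beyond the test score;
# markedly smaller values flag criterion variance reachable only through group membership.
print(round(r2_test_only / r2_with_group, 3))
```

Interpretive cutoffs for such a ratio, and the exact quantities the authors place in the numerator and denominator, are given in the article itself; this sketch is meant only to make the "ratio of predictive fairness" idea concrete.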
Journal Description:
The main purpose of the EJPA is to present important articles that provide seminal information on both theoretical and applied developments in this field. Articles reporting the construction of new measures or the advancement of an existing measure are given priority. The journal is directed to practitioners as well as academicians: its editors are convinced that the discipline of psychological assessment should remain firmly attached to the roots of psychological science while pursuing all the consequences of its applied, practice-oriented development.