Ethan R Van Norman, Peter M Nelson, David C Parker
{"title":"计算机自适应测试中增长估计的技术充分性:对进度监测的影响。","authors":"Ethan R Van Norman, Peter M Nelson, David C Parker","doi":"10.1037/spq0000175","DOIUrl":null,"url":null,"abstract":"<p><p>Computer adaptive tests (CATs) hold promise to monitor student progress within multitiered systems of support. However, the relationship between how long and how often data are collected and the technical adequacy of growth estimates from CATs has not been explored. Given CAT administration times, it is important to identify optimal data collection schedules to minimize missed instructional time. We used simulation methodology to investigate how the duration and frequency of data collection influenced the reliability, validity, and precision of growth estimates from a math CAT. A progress monitoring dataset of 746 Grade 4, 664 Grade 5, and 400 Grade 6 students from 40 schools in the upper Midwest was used to generate model parameters. Across grades, 53% of students were female and 53% were White. Grade level was not as influential as the duration and frequency of data collection on the technical adequacy of growth estimates. Low-stakes decisions were possible after 14-18 weeks when data were collected weekly (420-540 min of assessment), 20-24 weeks when collected every other week (300-360 min of assessment), and 20-28 weeks (150-210 min of assessment) when data were collected once a month, depending on student grade level. The validity and precision of growth estimates improved when the duration and frequency of progress monitoring increased. Given the amount of time required to obtain technically adequate growth estimates in the present study, results highlight the importance of weighing the potential costs of missed instructional time relative to other types of assessments, such as curriculum-based measures. Implications for practice, research, as well as future directions are also discussed. (PsycINFO Database Record</p>","PeriodicalId":88124,"journal":{"name":"School psychology quarterly : the official journal of the Division of School Psychology, American Psychological Association","volume":"32 3","pages":"379-391"},"PeriodicalIF":0.0000,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Technical adequacy of growth estimates from a computer adaptive test: Implications for progress monitoring.\",\"authors\":\"Ethan R Van Norman, Peter M Nelson, David C Parker\",\"doi\":\"10.1037/spq0000175\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Computer adaptive tests (CATs) hold promise to monitor student progress within multitiered systems of support. However, the relationship between how long and how often data are collected and the technical adequacy of growth estimates from CATs has not been explored. Given CAT administration times, it is important to identify optimal data collection schedules to minimize missed instructional time. We used simulation methodology to investigate how the duration and frequency of data collection influenced the reliability, validity, and precision of growth estimates from a math CAT. A progress monitoring dataset of 746 Grade 4, 664 Grade 5, and 400 Grade 6 students from 40 schools in the upper Midwest was used to generate model parameters. Across grades, 53% of students were female and 53% were White. Grade level was not as influential as the duration and frequency of data collection on the technical adequacy of growth estimates. 
Low-stakes decisions were possible after 14-18 weeks when data were collected weekly (420-540 min of assessment), 20-24 weeks when collected every other week (300-360 min of assessment), and 20-28 weeks (150-210 min of assessment) when data were collected once a month, depending on student grade level. The validity and precision of growth estimates improved when the duration and frequency of progress monitoring increased. Given the amount of time required to obtain technically adequate growth estimates in the present study, results highlight the importance of weighing the potential costs of missed instructional time relative to other types of assessments, such as curriculum-based measures. Implications for practice, research, as well as future directions are also discussed. (PsycINFO Database Record</p>\",\"PeriodicalId\":88124,\"journal\":{\"name\":\"School psychology quarterly : the official journal of the Division of School Psychology, American Psychological Association\",\"volume\":\"32 3\",\"pages\":\"379-391\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"School psychology quarterly : the official journal of the Division of School Psychology, American Psychological Association\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1037/spq0000175\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2016/8/8 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"School psychology quarterly : the official journal of the Division of School Psychology, American Psychological Association","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1037/spq0000175","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2016/8/8 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Technical adequacy of growth estimates from a computer adaptive test: Implications for progress monitoring.
Computer adaptive tests (CATs) hold promise for monitoring student progress within multitiered systems of support. However, the relationship between how long and how often data are collected and the technical adequacy of growth estimates from CATs has not been explored. Given CAT administration times, it is important to identify optimal data collection schedules to minimize missed instructional time. We used simulation methodology to investigate how the duration and frequency of data collection influenced the reliability, validity, and precision of growth estimates from a math CAT. A progress monitoring dataset of 746 Grade 4, 664 Grade 5, and 400 Grade 6 students from 40 schools in the upper Midwest was used to generate model parameters. Across grades, 53% of students were female and 53% were White. Grade level was not as influential as the duration and frequency of data collection on the technical adequacy of growth estimates. Low-stakes decisions were possible after 14-18 weeks when data were collected weekly (420-540 min of assessment), 20-24 weeks when data were collected every other week (300-360 min of assessment), and 20-28 weeks when data were collected once a month (150-210 min of assessment), depending on student grade level. The validity and precision of growth estimates improved as the duration and frequency of progress monitoring increased. Given the amount of time required to obtain technically adequate growth estimates in the present study, results highlight the importance of weighing the potential costs of missed instructional time relative to other types of assessments, such as curriculum-based measures. Implications for practice and research, as well as future directions, are also discussed.
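To make the simulation approach described in the abstract concrete, the sketch below shows one minimal way such a Monte Carlo study could be set up: simulate students with known growth slopes, sample their scores under different data collection schedules, fit ordinary least squares slopes, and summarize reliability (correlation between estimated and true slopes) and precision (standard error of the slope). All parameter values (growth rate, variability, measurement error) are hypothetical placeholders and are not taken from the study; this is an illustrative assumption-laden sketch, not the authors' actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (NOT from the study): mean weekly growth on the
# CAT scale, between-student slope variability, and measurement error.
N_STUDENTS = 500
TRUE_MEAN_SLOPE = 1.5          # scale-score points gained per week
SLOPE_SD = 0.75                # between-student variability in growth
INTERCEPT_MEAN, INTERCEPT_SD = 200.0, 15.0
MEASUREMENT_ERROR_SD = 10.0

def simulate_schedule(duration_weeks, interval_weeks):
    """Simulate one data collection schedule and return reliability
    (correlation of estimated with true slopes) and precision
    (median standard error of the estimated slope)."""
    weeks = np.arange(0, duration_weeks + 1, interval_weeks, dtype=float)
    true_slopes = rng.normal(TRUE_MEAN_SLOPE, SLOPE_SD, N_STUDENTS)
    intercepts = rng.normal(INTERCEPT_MEAN, INTERCEPT_SD, N_STUDENTS)

    est_slopes = np.empty(N_STUDENTS)
    slope_ses = np.empty(N_STUDENTS)
    for i in range(N_STUDENTS):
        scores = (intercepts[i] + true_slopes[i] * weeks
                  + rng.normal(0, MEASUREMENT_ERROR_SD, weeks.size))
        # Ordinary least squares growth estimate for this student's series.
        X = np.column_stack([np.ones_like(weeks), weeks])
        beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
        est_slopes[i] = beta[1]
        # Standard error of the slope from the residual variance.
        dof = weeks.size - 2
        resid_var = np.sum((scores - X @ beta) ** 2) / dof
        slope_ses[i] = np.sqrt(resid_var / np.sum((weeks - weeks.mean()) ** 2))

    reliability = np.corrcoef(true_slopes, est_slopes)[0, 1]
    return reliability, np.median(slope_ses)

# Compare weekly, biweekly, and monthly schedules over 20 weeks of monitoring.
for interval in (1, 2, 4):
    rel, se = simulate_schedule(duration_weeks=20, interval_weeks=interval)
    print(f"every {interval} wk: reliability={rel:.2f}, median slope SE={se:.2f}")
```

Under this kind of setup, longer durations and more frequent measurement shrink the slope standard error and raise the correlation between estimated and true growth, which mirrors the abstract's finding that validity and precision improve as monitoring duration and frequency increase.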