{"title":"Meta-Analysis of Validity and Review of Alternate Form Reliability and Slope for Curriculum-Based Measurement in Science and Social Studies","authors":"Sarah J. Conoyer, W. Therrien, Kristen K. White","doi":"10.1177/1534508420978457","DOIUrl":"https://doi.org/10.1177/1534508420978457","url":null,"abstract":"Meta-analysis was used to examine curriculum-based measurement in the content areas of social studies and science. Nineteen studies between the years of 1998 and 2020 were reviewed to determine overall mean correlation for criterion validity and examine alternate-form reliability and slope coefficients. An overall mean correlation of .59 was found for criterion validity; however, there was significant heterogeneity across studies, suggesting curriculum-based measure (CBM) format or content area may affect findings. Low to high alternative form reliability correlation coefficients were reported across CBM formats between .21 and .89. Studies investigating slopes included mostly vocabulary-matching formats and reported a range from .12 to .65 correct items per week with a mean of .34. Our findings suggest that additional research in the development of these measures in validity, reliability, and slope is warranted.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"101 - 111"},"PeriodicalIF":1.3,"publicationDate":"2020-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420978457","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47043950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Differential Item and Test Functioning of the SRSS-IE12 Across Race, Ethnicity, Gender, and Elementary Level","authors":"Brian Barger, Emily C. Graybill, Andrew T. Roach, K. Lane","doi":"10.1177/1534508420976830","DOIUrl":"https://doi.org/10.1177/1534508420976830","url":null,"abstract":"This study used item response theory (IRT) methods to investigate group differences in responses to the 12-item Student Risk Screening Scale–Internalizing and Externalizing (SRSS-IE12) in a sample of 3,837 U.S. elementary school students. Using factor analysis and graded response models from IRT methods, we examined the factor structure and the item and test functioning of the SRSS-IE12. The SRSS-IE12 internalizing and externalizing factors reflected the hypothesized two-factor model. IRT analyses indicated that SRSS-IE12 items and tests measure internalizing and externalizing traits similarly across students from different race, ethnicity, gender, and elementary level (K–Grade 2 vs. Grades 3–5) groups. Moreover, the mostly negligible differential item functioning (DIF) and differential test functioning (DTF) observed suggest these scales render equitable trait ratings. Collectively, the results provide further support for the SRSS-IE12 for universal screening in racially diverse elementary schools.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"79 - 88"},"PeriodicalIF":1.3,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420976830","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44364242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Outcome Measurement of School-Based SEL Intervention Follow-Up Studies","authors":"Sarah K. Ura, Sara Castro-Olivo, A. d’Abreu","doi":"10.1177/1534508419862619","DOIUrl":"https://doi.org/10.1177/1534508419862619","url":null,"abstract":"Recent meta-analyses confirm that social–emotional learning (SEL) interventions are effective in increasing academic, social, and emotional outcomes via direct skills instruction. With skill development serving as a primary mechanism of change in SEL interventions, we argue for the accurate measurement of skills as an important component of SEL research. Using the Collaborative for Academic, Social, and Emotional Learning (CASEL) model, we evaluate 111 studies included in a recent meta-analysis to determine the match between constructs targeted in interventions and SEL skill competency, as well as the measurement of skills and instruments used to evaluate programs. Findings indicate a general trend in the measurement of broad outcomes, rather than skills taught in programs, and limited measurement across CASEL five-competency model. Utility of measuring outcomes specific to competencies taught in intervention across SEL domains are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"46 1","pages":"76 - 81"},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508419862619","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46305877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the Influences of Assessment Method, Intervention Steps, Intervention Sessions, and Observation Timing on Treatment Fidelity Estimates","authors":"Melissa A. Collier‐Meek, L. Sanetti, Lindsay M. Fallon, Sandra M. Chafouleas","doi":"10.1177/1534508419857228","DOIUrl":"https://doi.org/10.1177/1534508419857228","url":null,"abstract":"Treatment fidelity data are critical to evaluate intervention effectiveness, yet there are only general guidelines regarding treatment fidelity measurement. Initial investigations have found treatment fidelity data collected via direct observation to be more reliable than data collected via permanent product or self-report. However, the comparison of assessment methods is complicated by the intervention steps accounted for, observation timing, and intervention sessions accounted for, which may impact treatment fidelity estimates. In this study, we compared direct observation and permanent product data to evaluate these varied assessment and data collection decisions on treatment fidelity data estimates in three classrooms engaged in a group contingency intervention. Findings revealed that treatment fidelity estimates, in addition to being different across assessment method, are, in fact, different depending on the intervention steps assessed, intervention sessions accounted for, and observation timing. Implications for treatment fidelity assessment research, reporting in intervention research broadly, and implementation assessment in practice are described.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"46 1","pages":"3 - 13"},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508419857228","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43750273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Examining the Technical Adequacy of the Social, Academic, and Emotional Behavior Risk Screener","authors":"S. Whitley, Yojanna Cuenca-Carlino","doi":"10.1177/1534508419857225","DOIUrl":"https://doi.org/10.1177/1534508419857225","url":null,"abstract":"Many schools attempt to identify and service students at risk for poor mental health outcomes within a multi-tiered system of support (MTSS). Universal screening within a MTSS requires technically adequate tools. The Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) has been put forth as a technically adequate screener. Researchers have examined the factor structure, diagnostic accuracy, criterion validity, and internal consistency of SAEBRS data. However, previous research has not examined its temporal stability or replicated the criterion validity results with a racially/ethnically diverse urban elementary school sample. This study examined the test–retest reliability, convergent validity, and predictive validity of teacher-completed SAEBRS ratings with racially/ethnically diverse group students enrolled in first through fifth grade in an urban elementary school. Reliability analyses resulted in significant test–retest reliability coefficients across four weeks for all SAEBRS scales. Furthermore, nonsignificant paired samples t tests were observed with the exception of the third-grade Emotional subscale. Validity analyses yielded significant concurrent and predictive Pearson correlation coefficients between SAEBRS ratings, oral reading fluency, and office discipline referrals. Limitations and implications of the results are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"46 1","pages":"67 - 75"},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508419857225","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43432366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distinct and Overlapping Dimensions of Reading Motivation in Commonly Used Measures in Schools","authors":"S. Neugebauer, Ken A. Fujimoto","doi":"10.1177/1534508418819793","DOIUrl":"https://doi.org/10.1177/1534508418819793","url":null,"abstract":"The current investigation addresses critiques about motivation terminology and instrumentation by examining together three commonly used reading motivation assessments in schools. This study explores the distinctiveness and redundancies of the constructs operationalized in these reading motivation assessments with 222 middle school students, using item response theory. Results support distinctions between constructs grounded in self-determination theory, social cognitive theory, and expectancy-value theory as well as conceptual overlap, among constructs associated with competence beliefs and social sources of motivation across different theoretical traditions. Educational benefits of multidimensional and unidimensional interpretations of reading motivation constructs covered in these instruments are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"46 1","pages":"39 - 54"},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508418819793","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41420811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing the Effects of Instructional Set Size on Learning","authors":"Sarah J. Miller, G. Noell, Meredith T. Harris, Elise B. McIver, J. Alvarez","doi":"10.1177/1534508418825304","DOIUrl":"https://doi.org/10.1177/1534508418825304","url":null,"abstract":"Research evaluating the variables that influence learning has devoted inadequate attention to the influence of the amount of new material presented at one time. The current study evaluated the impact of varying instructional set size (ISS) on the rate at which elementary school students mastered multiplication facts while receiving constant time delay (CTD) instruction. Instructional time was equated across conditions. Instruction was provided for an ISS of five and 20 using CTD instruction for multiplication facts. ISS 20 was more efficient for two out of the three participants. This suggests a much larger efficient ISS than previous research. The implications of this finding for the importance of the instructional method in attempting to identify an efficient ISS, as well as the study’s connection to prior research, in this area are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"46 1","pages":"14 - 26"},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508418825304","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48627415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is a Picture Worth 1,000 Words? Investigating Fraction Magnitude Knowledge Through Analysis of Student Representations","authors":"Stephanie Morano, P. Riccomini","doi":"10.1177/1534508418820697","DOIUrl":"https://doi.org/10.1177/1534508418820697","url":null,"abstract":"The present study examines the features and quality of visual representations (VRs) created by middle school students with learning disabilities and difficulties in mathematics in response to a released fraction item from the National Assessment of Educational Progress (NAEP). Relations between VR quality and scores on other measures of fraction knowledge are also investigated. Results show that students used circular area models most frequently to represent the NAEP item, but used bar models most accurately. Based on results, bar models may be the most efficient and effective area model VRs for use in fractions instruction. Representation quality was associated with problem-solving accuracy, as well as with performance on fraction number line estimation and fraction magnitude comparison. Implications for practice are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"46 1","pages":"27 - 38"},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508418820697","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44378502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reconsidering the Psychometrics of the GRS-S: Evidence for Parsimony in Measurement","authors":"Y. Petscher, S. Pfeiffer","doi":"10.1177/1534508418824743","DOIUrl":"https://doi.org/10.1177/1534508418824743","url":null,"abstract":"The authors evaluated measurement-level, factor-level, item-level, and scale-level revisions to the Gifted Rating Scales–School Form (GRS-S). Measurement-level considerations tested the extent to which treating the Likert-type scale rating as categorical or continuous produced different fit across unidimensional, correlated trait, and bifactor latent factor structures. Item- and scale-level analyses demonstrated that the GRS-S could be reduced from a 72-item assessment on a 9-point rating scale down to a 30-item assessment on a 3-point rating scale. Reliability from the reduced assessment was high (ω > .95). Receiver operating characteristic (ROC) curve comparisons between the original and reduced versions of the GRS-S showed that diagnostic accuracy (i.e., area under the curve) of the scales was comparable when considering cut scores of 120, 125, and 130 on the WISC-IV Full Scale (Wechsler Intelligence Scale for Child–Fourth Edition) and verbal IQ and the WIAT-III (Wechsler Individual Achievement Test–Third Edition) composite score. The findings suggest that a brief form of the GRS-S can be used as a universal or selective screener for giftedness without sacrificing key psychometric considerations.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"46 1","pages":"55 - 66"},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508418824743","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49554384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Creation and Validation of German Oral Reading Fluency Passages With Immersion Language Learners","authors":"Kirsten W. Newell, Jessie M. Kember, G. Zinn","doi":"10.1177/1534508420972460","DOIUrl":"https://doi.org/10.1177/1534508420972460","url":null,"abstract":"This brief report summarizes the development and psychometric properties of German reading fluency passages as compared to English reading fluency passages for immersion language learners. Results indicated that scores from German language reading fluency passages alone were (a) somewhat less reliable than scores from English publisher-developed passages, (b) similarly valid measures of reading when compared to scores from English reading fluency passages, and (c) more accurate than publisher-provided English cut-scores but not as accurate as locally developed English cut-scores in the identification of at-risk readers.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"118 - 123"},"PeriodicalIF":1.3,"publicationDate":"2020-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420972460","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43966783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}