Amanda P. Goodwin, Y. Petscher, Jamie L. Tock, Sara E. McFadden, D. Reynolds, Tess Lantos, Sara Jones
"Monster, P.I.: Validation Evidence for an Assessment of Adolescent Language That Assesses Vocabulary Knowledge, Morphological Knowledge, and Syntactical Awareness"
Assessment for Effective Intervention, 47(1), 89–100. Published: 2020-10-28. DOI: 10.1177/1534508420966383
Abstract: Assessment of language skills for upper elementary and middle schoolers is important due to the strong link between language and reading comprehension. Yet few practical, reliable, valid, and instructionally informative assessments of language currently exist. This study provides validation evidence for Monster, P.I., a gamified, standardized, computer-adaptive assessment (CAT) of language for fifth- to eighth-grade students. Creating Monster, P.I. involved assessing the dimensionality of morphology and vocabulary as well as an assessment of syntax. Results using multiple-group item response theory (IRT) with 3,214 fifth through eighth graders indicated that morphology and vocabulary were best assessed via bifactor models and syntax unidimensionally. Therefore, Monster, P.I. provides scores on three component areas of language (multidimensional morphology and vocabulary, and unidimensional syntax) with the goal of informing instruction. Validity results also suggest that Monster, P.I. scores show moderate correlations with each other and with standardized reading vocabulary and reading comprehension assessments. Furthermore, hierarchical regression results suggest an important link between Monster, P.I. and standardized reading comprehension, explaining between 56% and 75% of the variance. Such results indicate that Monster, P.I. can provide meaningful insight into language performance, which can guide instruction that affects reading comprehension.

A. Whitehouse, Songtian Zeng, R. Troeger, A. Cook, T. Minami
"Examining Measurement Invariance of a School Climate Survey Across Race and Ethnicity"
Assessment for Effective Intervention, 47(1), 37–46. Published: 2020-10-27. DOI: 10.1177/1534508420966390
Abstract: Positive school climate is a key determinant of students' psychological well-being, safety, and academic achievement. Although researchers have examined the validity of school climate measures, there is a dearth of research investigating differences in student perceptions of school climate across race and ethnicity. This study evaluated the factor stability of a widely used school climate survey using factor analyses and measurement invariance techniques across racial/ethnic groups. Results of a confirmatory factor analysis indicated a five-factor structure for the survey, and weak measurement invariance was found across Hispanic, Black, and White student groups (ΔCFI = .008). According to paired t tests, significant differences were found among racial/ethnic respondent groups on two factors: teacher and school effectiveness, and sense of belonging and care. Validated school climate measures that are culturally and racially responsive to students' experiences allow for accurate interpretations of school climate data. Discussion and implications are provided.

Stephanie M. Hammerschmidt-Snidarich, Dana L. Wagner, David C. Parker, Kyle Wagner
"Reading Tutors' Interpretation of Curriculum-Based Measurement Graphs"
Assessment for Effective Intervention, 47(1), 26–36. Published: 2020-10-14. DOI: 10.1177/1534508420963193
Abstract: This study examined reading tutors' interpretation of reading progress-monitoring graphs. A think-aloud procedure was used to evaluate tutors at two points in time, before and after a year of service as an AmeriCorps reading tutor. During their service, the reading tutors received extensive training and ongoing coaching. Descriptive results showed a positive change from the Time 1 think-aloud (pretest) to the Time 2 think-aloud (posttest). There were statistically significant changes from Time 1 to Time 2 for the majority of graph interpretation variables measured. The data suggest that the right type of support and training may enable reading tutors to develop the skills to contribute to data-based decision-making within multitiered systems.

Rhea Wagle, E. Dowdy, M. Furlong, Karen Nylund-Gibson, D. Carter, T. Hinton
"Anonymous Versus Self-Identified Response Formats for School Mental Health Screening"
Assessment for Effective Intervention, 47(1), 112–117. Published: 2020-09-30. DOI: 10.1177/1534508420959439
Abstract: Schools are an essential setting for mental health supports and services for students. To support student well-being, schools engage in universal mental health screening to identify students in need of support and to provide surveillance data for district-wide or state-wide policy changes. Mental health data have been collected via anonymous and self-identified response formats depending on the purpose of the screening (i.e., surveillance and screening, respectively). However, most surveys do not provide psychometric evidence for use in both types of response formats. The current study examined whether responses to the Social Emotional Health Survey–Secondary (SEHS-S), a school mental health survey, are comparable when administered using anonymous versus self-identified response formats. The study participants were from one high school and completed the SEHS-S using self-identified (n = 1,700) and anonymous (n = 1,667) formats. Full measurement invariance was found across the two response formats. Both substantial and minimal latent mean differences were detected. Implications for the use and interpretation of the SEHS-S for schoolwide mental health are discussed.

David A. Klingbeil, Ethan R. Van Norman, Peter M. Nelson
"Using Interval Likelihood Ratios in Gated Screening: A Direct Replication Study"
Assessment for Effective Intervention, 47(1), 14–25. Published: 2020-09-10. DOI: 10.1177/1534508420953894
Abstract: This direct replication study compared the use of dichotomized likelihood ratios and interval likelihood ratios, derived using a prior sample of students, for predicting math risk in middle school. Data from the prior year's state test and the Measures of Academic Progress were analyzed to evaluate differences in the efficiency and diagnostic accuracy of gated screening decisions. Post-test probabilities were interpreted using a threshold decision-making model to classify student risk during screening. Using interval likelihood ratios led to fewer students requiring additional testing after the first gate. However, when interval likelihood ratios were used, three tests were required to classify sixth- and seventh-grade students as at risk or not at risk, whereas only two tests were needed when dichotomized likelihood ratios were used. Acceptable sensitivity and specificity estimates were obtained regardless of the type of likelihood ratio used to estimate post-test probabilities. When predicting academic risk, interval likelihood ratios may be best reserved for situations in which at least three successive tests are available for use in a gated screening model.

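The screening logic summarized in the abstract above rests on a standard mechanism: convert a pretest probability (the base rate of risk) to odds, scale by a likelihood ratio, and convert back to a post-test probability. A minimal sketch of that mechanism, not taken from the article itself; the function name and all numeric values are illustrative assumptions:

```python
# Sketch of post-test probability updating with likelihood ratios,
# the general mechanism behind gated screening decisions.
# All names and numbers here are illustrative, not from the study.

def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert probability to odds, apply the likelihood ratio, convert back."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# A dichotomized approach uses one likelihood ratio per pass/fail outcome;
# interval likelihood ratios assign a separate ratio to each score band.
interval_lrs = {"low": 9.0, "middle": 1.2, "high": 0.1}  # hypothetical values

base_rate = 0.20  # hypothetical proportion of students at risk
for band, lr in interval_lrs.items():
    p = posttest_probability(base_rate, lr)
    print(f"{band} score band: post-test probability of risk = {p:.2f}")
```

With these illustrative numbers, a low score moves the risk estimate from .20 to about .69, a middle score barely moves it (about .23), and a high score drops it to about .02, which is why a middle-band result tends to trigger another gate of testing rather than a decision.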
C. Greenwood, J. Buzhardt, D. Walker, Fan Jia, J. Carta
"Criterion Validity of the Early Communication Indicator for Infants and Toddlers"
Assessment for Effective Intervention, 45(1), 298–310. Published: 2020-09-01. DOI: 10.1177/1534508418824154
Abstract: The Early Communication Indicator (ECI) is a progress monitoring measure designed to support the intervention decisions of the home visitors and early educators who serve infants and toddlers. The present study sought to add to the criterion validity claims of the ECI in a large sample of children using measures of language and preliteracy not previously investigated. Early Head Start service providers administered and scored ECIs quarterly for infants and toddlers in their caseload as part of standard services. In addition, a battery of language and early literacy criterion tests was administered by researchers when children were 12, 24, 36, and 48 months of age. Analyses of these longitudinal data examined concurrent and predictive correlational patterns. Results indicated that children grew in communicative proficiency with age, and weak to moderately strong patterns of relationship emerged that differed by ECI scale, age, and criterion measure. The strongest positive relationships were between the Single Words and Multiple Words scales and the criterion measures at older ages, whereas Gestures and Vocalizations showed a pattern of negative relationships to the criterion measures. Implications for research and practice are discussed.

S. Daily, K. Zullig, E. M. Myers, Megan L. Smith, A. Kristjansson, M. J. Mann
"Preliminary Validation of the SCM in a Sample of Early Adolescent Public School Children"
Assessment for Effective Intervention, 45(1), 288–297. Published: 2020-09-01. DOI: 10.1177/1534508418815751
Abstract: The School Climate Measure (SCM) has demonstrated robust psychometrics in regionally diverse samples of high school-aged adolescents but remains untested among early adolescents. Confirmatory factor analysis was used to establish construct validity and measurement indices of the SCM using a sample of early adolescents from public schools located in Central Appalachia (n = 1,128). In addition, known-groups validity analyses compared each SCM domain against self-reported academic achievement and school connection. Analyses confirmed that all 10 SCM domains fit the data well, with strong internal consistency and factor loadings. Known-groups analyses suggest that students who reported higher academic achievement and school connection also reported more positive perceptions of school climate. Findings provide evidence that extends the use of the SCM to early adolescents and may support school-based policy.

Robin L. Hojnoski, Kristen Missall, Brenna K. Wood
"Measuring Engagement in Early Education: Preliminary Evidence for the Behavioral Observation of Students in Schools–Early Education"
Assessment for Effective Intervention, 45(1), 243–254. Published: 2020-09-01. DOI: 10.1177/1534508418820125
Abstract: Engagement in early childhood is defined as a child's level of participation with the environment. Engagement is an important construct in assessment and intervention of social and early learning competence given its link to school achievement. Few tools exist to assess engagement of young children in early education, and there is a need for a systematic direct observation tool that can be applied universally (e.g., with all young children) in these settings. This article describes preliminary evidence of validity and reliability for the Behavioral Observation of Students in Schools–Early Education (BOSS-EE). Specifically, the article describes results from a survey of experts and practitioners in which feedback was solicited on target behaviors and operational definitions, presents reliability data (i.e., interobserver and test–retest), examines correlations with a criterion measure, and describes variability across settings, sites, and methods (i.e., video vs. in vivo). Next steps in measurement development are discussed with attention to the challenges of producing a tool that can be used in a range of early education settings with diverse groups of young children.

R. A. Smith, E. Lembke
"Aspects of Technical Adequacy of an Early-Writing Measure for English Language Learners in Grades 1 to 3"
Assessment for Effective Intervention, 47(1), 59–63. Published: 2020-08-17. DOI: 10.1177/1534508420947157
Abstract: This study examined the technical adequacy of Picture Word, a type of Writing Curriculum-Based Measurement, with 73 English learners (ELs) with beginning to intermediate English language proficiency in Grades 1, 2, and 3. The ELs in this study attended schools in one midwestern U.S. school district employing an English-only model of instruction and spoke a variety of native languages. ELs completed two forms of Picture Word in the fall, winter, and spring. The criterion measure, a common English language proficiency assessment, was administered in the winter. Results indicated that Picture Word was not appropriate for the first-grade EL participants but showed promise for second- and third-grade ELs.