{"title":"Do Students Rapidly Guess Repeatedly over Time? A Longitudinal Analysis of Student Test Disengagement, Background, and Attitudes","authors":"J. Soland, Megan Kuhfeld","doi":"10.1080/10627197.2019.1645592","DOIUrl":"https://doi.org/10.1080/10627197.2019.1645592","url":null,"abstract":"ABSTRACT Considerable research has examined the use of rapid guessing measures to identify disengaged item responses. However, little is known about students who rapidly guess over the course of several tests. In this study, we use achievement test data from six administrations over three years to investigate whether rapid guessing is a stable trait-like behavior or if rapid guessing is determined mostly by situational variables. Additionally, we examine whether rapid guessing over the course of several tests is associated with certain psychological and background measures. We find that rapid guessing tends to be more state-like compared to academic achievement scores, which are fairly stable. Further, we show that repeated rapid guessing is strongly associated with students’ academic self-efficacy and self-management scores. These findings have implications for detecting rapid guessing and intervening to reduce its effect on observed achievement test scores.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2019-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2019.1645592","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46688474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intentional Professional Learning Design: Models, Tools, and the Synergies they Produce Supporting Teacher Growth","authors":"V. Mills, C. Harrison","doi":"10.1080/10627197.2020.1766961","DOIUrl":"https://doi.org/10.1080/10627197.2020.1766961","url":null,"abstract":"ABSTRACT The need and desire to understand and adopt formative assessment practices remain high on the agenda at all levels of educational systems around the world. To advance teachers’ use of formative assessment, research attention also needs to be paid to (a) understanding the challenges teachers face when asked to utilize formative assessment practices in subject-specific content areas and (b) to the development of appropriate and sufficiently powerful professional learning designs that can enable change for teachers. To begin addressing these needs, this paper offers a close examination of an intentionally designed professional learning (PL) series to help middle and high school Algebra I teachers understand the formative assessment process and then track and advance their classroom practice. The professional learning design, in this case, is based on a collaborative and formative approach to classroom practice and teacher change with high school mathematics teachers. Together, the PL model and tools provide a formative framework that bridges the theory-practice divide enabling teachers to conceptualize and then plan for, reflect on, and revise the ways in which new formative assessment practices are implemented in their classrooms. Through an analysis of the affordances and constraints of the PL design in practice, this paper provides insights into how discipline-specific professional learning can be better developed and supported throughout the teacher growth process.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2020.1766961","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49341653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Validity Evidence Supporting Use of Anchoring Vignettes to Measure Teaching Practice","authors":"J. Kaufman, J. Engberg, L. Hamilton, Kun Yuan, H. Hill","doi":"10.1080/10627197.2019.1615374","DOIUrl":"https://doi.org/10.1080/10627197.2019.1615374","url":null,"abstract":"ABSTRACT High-quality measures of instructional practice are essential for research and evaluation of innovative instructional policies and programs. However, existing measures have generally proven inadequate because of cost and validity issues. This paper addresses two potential drawbacks of survey self-report measures: variation in teachers’ interpretation of response scales and their interpretation of survey questions. To address these drawbacks, researchers tested out use of “anchoring vignettes“ in teacher surveys to capture information about teaching practice, and they gathered validity evidence in regard to their use as a tool for adjusting teachers’ survey self-reports about their instructional practices for research purposes, or potentially to inform professional development. Data from 65 teachers in grades 4-9 responding to our survey suggested that vignette adjustments were reliable and valid for some instructional practices more than others. For some instructional practices, researchers found significant and high correlations between teachers’ adjusted survey self-rating, through use of anchoring vignettes, and previous observation ratings of teachers’ instruction, including ratings from several widely-used observation rubrics. These results suggest that anchoring vignettes may provide an efficient, cost-effective method for gathering data on teachers’ instruction.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2019-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2019.1615374","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49264339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Patterns of Solution Behavior across Items in Low-Stakes Assessments","authors":"D. Pastor, Thai Q. Ong, S. Strickman","doi":"10.1080/10627197.2019.1615373","DOIUrl":"https://doi.org/10.1080/10627197.2019.1615373","url":null,"abstract":"ABSTRACT The trustworthiness of low-stakes assessment results largely depends on examinee effort, which can be measured by the amount of time examinees devote to items using solution behavior (SB) indices. Because SB indices are calculated for each item, they can be used to understand how examinee motivation changes across items within a test. Latent class analysis (LCA) was used with the SB indices from three low-stakes assessments to explore patterns of solution behavior across items. Across tests, the favored models consisted of two classes, with Class 1 characterized by high and consistent solution behavior (>90% of examinees) and Class 2 by lower and less consistent solution behavior (<10% of examinees). Additional analyses provided supportive validity evidence for the two-class solution with notable differences between classes in self-reported effort, test scores, gender composition, and testing context. Although results were generally similar across the three assessments, striking differences were found in the nature of the solution behavior pattern for Class 2 and the ability of item characteristics to explain the pattern. The variability in the results suggests motivational changes across items may be unique to aspects of the testing situation (e.g., content of the assessment) for less motivated examinees.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2019-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2019.1615373","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44799562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Pictorial Glossaries as an Accommodation for English Learners: An Exploratory Study","authors":"Sultan Turkan, Alexis A. López, René Lawless, Florencia Tolentino","doi":"10.1080/10627197.2019.1615371","DOIUrl":"https://doi.org/10.1080/10627197.2019.1615371","url":null,"abstract":"ABSTRACT In this article we explore the use of pictorial glossaries as an accommodation for English learners (ELs) with entry and emerging levels of English language proficiency. Drawing on survey responses from 98 middle school ELs and cognitive interviews with 10 of the survey participants, we examined the participants’ preferences and experiences with using accommodations and explored how some of them responded to NAEP mathematics items using pictorial glossaries. Our findings showed that the participants viewed the use of pictures, videos, and translations as useful, but they did not have a lot of experience using these types of accommodations. Also, we found that the pictorial glosses sometimes helped ELs with understanding the local meaning of words in the mathematics problems but not with the global meaning conveyed at the sentence level in the problems.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2019-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2019.1615371","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48707267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Generalized Scoring Process to Measure Collaborative Problem Solving in Online Environments","authors":"C. Scoular, E. Care","doi":"10.1080/10627197.2019.1615372","DOIUrl":"https://doi.org/10.1080/10627197.2019.1615372","url":null,"abstract":"ABSTRACT Recent educational and psychological research has highlighted shifting workplace requirements and change required to equip the emerging workforce with skills for the 21st century. The emergence of these highlights the issues, and drives the importance, of new methods of assessment. This study addresses some of the issues by describing a scoring process for measuring collaborative problem solving (CPS) in online environments. The method presented, from conceptualization to implementation, centers on its generalizable application, presenting a systematic process of identifying, coding, and scoring behavior patterns in log stream data generated from assessments. Item Response Theory was used to investigate the psychometric properties of behavior patterns. The goal of this study was to present an approach that informs new measurement practices in relation to sociocognitive latent traits and their processes. The generalized scoring process provides an efficient approach to develop measures of social and cognitive skills in online environments.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2019-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2019.1615372","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46399201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Validity of a Special Education Teacher Observation System","authors":"Evelyn S. Johnson, Angela R. Crawford, Laura A. Moylan, Yuzhu Z. Zheng","doi":"10.1080/10627197.2019.1702461","DOIUrl":"https://doi.org/10.1080/10627197.2019.1702461","url":null,"abstract":"ABSTRACT This manuscript describes the comprehensive validation work undertaken to develop the Recognizing Effective Special Education Teachers (RESET) observation system, which was designed to provide evaluations of special education teachers’ ability to effectively implement evidence-based practices and to provide specific, actionable feedback to teachers on how to improve instruction. Following the guidance for developing effective educator evaluation systems, we employed the Evidence-Centered Design framework, articulated the claims and inferences to be made with RESET, and conducted a series of studies to collect evidence to evaluate its validity. Our efforts and results to date are described, and implications for practice and further research are discussed.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2019.1702461","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48948656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are Test and Academic Disengagement Related? Implications for Measurement and Practice","authors":"J. Soland, N. Jensen, Tran D. Keys, Sharon Bi, Emily Wolk","doi":"10.1080/10627197.2019.1575723","DOIUrl":"https://doi.org/10.1080/10627197.2019.1575723","url":null,"abstract":"ABSTRACT A vast literature investigates academic disengagement among students, including its ultimate manifestation, dropping out of school. Research also shows that test disengagement can be a problem for many inferences educators and policymakers wish to draw from test scores. However, few studies consider whether academic and test disengagement are related. In this study, we examine whether behaviors indicative of academic disengagement like chronic absenteeism and course failures are related to behaviors indicative of test disengagement like rapidly guessing on items. We also examine whether social-emotional factors like low academic self-efficacy and self-management, which research suggests are the root causes of academic disengagement, are also related to rapid guessing behavior. Our results provide evidence that academic and test disengagement are related, including through a common association with poor self-management. The implications of this connection for measurement and practice are discussed.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2019-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2019.1575723","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45941028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Immediate and Delayed Effect of Dynamic Assessment Approaches on EFL Learners’ Oral Narrative Performance and Anxiety","authors":"Masoomeh Estaji, Mahsa Farahanynia","doi":"10.1080/10627197.2019.1578169","DOIUrl":"https://doi.org/10.1080/10627197.2019.1578169","url":null,"abstract":"ABSTRACT The present study aimed to investigate the effect of two major approaches of Dynamic Assessment, namely, interventionist and interactionist approaches, on learners’ oral narrative performance and anxiety. To this end, 34 Iranian EFL learners were assigned to an Interactionist Group (InA.G) and Interventionist Group (InV.G). Initially, both groups were given the Foreign Language Classroom Anxiety Scale and a pretest of speaking. In the treatment phase, the InV.G was asked to narrate a video and received instructions on their errors. The InA.G narrated the video while being provided with scaffolding during narration. Then both groups were given a posttest and, two weeks later, a delayed posttest. The results indicated that both groups’ oral performance significantly increased, while their anxiety reduced. In the end, a semi-structured interview was conducted whose results revealed that the InA.G experienced more anxiety mostly due to feeling a sense of interruption and losing face.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2019-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2019.1578169","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44206486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scenario-Based Assessments in Writing: An Experimental Study","authors":"Mo Zhang, P. V. van Rijn, P. Deane, R. Bennett","doi":"10.1080/10627197.2018.1557515","DOIUrl":"https://doi.org/10.1080/10627197.2018.1557515","url":null,"abstract":"ABSTRACT Writing from source text is critical for developing college-and-career readiness because it is required in advanced academic environments and many vocations. Scenario-based assessment (SBA) represents one approach to measuring this ability. In such assessment, the scenario presents an issue that the student is to read and write about. Before writing, lead-in exercises are presented to encourage the examinee to engage with the source materials and to model the process used in a classroom writing project. This study experimentally manipulated a middle-school assessment design to understand if (1) the lead-in/essay structure increased scores erroneously with a concomitant decrease in test technical quality, and (2) the presence of a single unifying scenario affected scores or score meaning. In general, the SBA design did not appear to artificially increase total-test or essay scores. As importantly, it functioned as well as, sometimes better than, the alternative designs in terms of the measurement characteristics examined.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2019-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2018.1557515","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45041997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}