{"title":"Fostering Self-Regulated Young Writers: Dynamic Assessment of Metacognitive Competence in Secondary School EFL Class","authors":"Yanhong Zhang, Jiao Xi","doi":"10.1080/15434303.2022.2103702","DOIUrl":"https://doi.org/10.1080/15434303.2022.2103702","url":null,"abstract":"ABSTRACT Research into metacognition has found it to facilitate self-regulation and correlate to learners’ L2 writing level. Following Lee & Mak’s (2018) framework of Metacognitive Instruction (MI) for L2 writing classrooms, this study applies Dynamic Assessment (DA) to writing MI (MI-DA) in a rural middle school EFL class in China. A one-semester comparative experimental study was conducted in two parallel Grade Seven classes (32 learners in each, taught by the same teacher) following a 3-step procedure: a nondynamic pretest and posttest for both control class (CC) and experimental class (EC) and an intervention phase, with CC receiving a score on written assignments and teacher’s comments while EC was provided with MI-DA intervention during pre-writing, writing, and revision. Ratings of student independent writing as well as interview data indicate that MI improved significantly students’ writing performance and metacognitive competence, influencing their attitude toward and confidence with writing. 
These goals, typically beyond the focus of most conventional assessments, are realized in DA through its commitment to taking account of the results of past development and those abilities that are ripening (i.e., future development).","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42763580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing a Scenario-Based English Language Assessment in an Asian University","authors":"Antony Kunnan, Coral Yiwei Qin, Cecilia Guanfang Zhao","doi":"10.1080/15434303.2022.2073886","DOIUrl":"https://doi.org/10.1080/15434303.2022.2073886","url":null,"abstract":"ABSTRACT A new computer-assisted test of academic English for use at an Asian University was commissioned by administrators. The test was designed to serve both placement and diagnostic purposes. The authors and their team conceptualized, developed, and administered a scenario-based assessment with an online delivery with independent and integrated language skills tasks. The project provided many advantages: (1) the test would be locally developed by university faculty and students who would have a good understanding of the test takers and the needs of the university, (2) the test would use topics, texts, and materials and technology that are socially and culturally appropriate and sensitive to the local context, and (3) the sustainability of the test would be higher as it were cost-effective in the long run in comparison to purchasing and renewing a license for an international test. This article documents the key considerations and processes in the development of this new scenario-based test of academic English that was conceptualized and designed by faculty and students collaboratively. 
It also discusses the challenges involved in the implementation of such a test, including resistance from the local assessment culture and the high workload of language teachers.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42455802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Looking Back and Forward: Language Assessment Insights from Constant Leung","authors":"Peter I. de Costa, C. Leung","doi":"10.1080/15434303.2022.2104722","DOIUrl":"https://doi.org/10.1080/15434303.2022.2104722","url":null,"abstract":"ABSTRACT In this interview piece, Peter De Costa and Matt Coss invite Constant Leung, LAQ co-editor (2017–2021) and active member of the LAQ editorial team since its inception in 2004, to highlight key milestones within the field of language assessment in general, and as they relate to major accomplishments of the journal. In addition, readers will also gain insight into anticipated developments within language assessment that extend contemporary trends, and thus advance the language assessment research agenda.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46666586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Test Review: LanguageCert IESOL B1 (Achiever) SELT","authors":"William S. Pearson","doi":"10.1080/15434303.2022.2103420","DOIUrl":"https://doi.org/10.1080/15434303.2022.2103420","url":null,"abstract":"ABSTRACT The present article reviews LanguageCert’s International English for Speakers of Other Languages (IESOL) Achiever Secure English Language Test (SELT). This high-stakes on-demand CEFR-linked exam has, since 2020, constituted Home Office-recognised evidence L2 English users can speak, write, and understand written and verbal English at B1 level. Passing the test facilitates enrolment onto a foundation or pre-sessional English course at a UK higher education institution, although some institutions set higher standards. As a neophyte SELT, there have been few descriptions and evaluations of the test beyond a range of sponsored studies. The current review indicated the Achiever test measures candidates’ general abilities to understand, interact, and produce tasks that mirror real life. However, a lack of ‘academicness’ and validity concerns in listening raise questions over its suitability for predicting readiness for tertiary study. The test offers the benefits of efficiency in registration and communicating results, remote proctoring and invigilation, and numerous sample materials in the public domain. 
The provision of an innovative re-sit option may prove favourable to candidates, although it could encourage repeat test taking and attempts to pass by a narrow margin, rather than investment in language learning.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42431282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating Fairness and Justice of External English Language Test Score Interpretation and Use for Japanese University Admission","authors":"H. Saito, Yasuyo Sawaki, Kiwamu Kasahara","doi":"10.1080/15434303.2022.2083965","DOIUrl":"https://doi.org/10.1080/15434303.2022.2083965","url":null,"abstract":"ABSTRACT The study evaluated a recently postponed national test policy on the use of external agencies’ English tests measuring four skills for Japanese university admission purposes. Using Kunnan’s principles of fairness and justice, we generated and evaluated three claims regarding the test policy: vocabulary coverage of the tests, the justifiability of the Common European Framework of References for Languages (CEFR)-based concordance table, and tests’ impact on teaching and learning productive skills. Scrutinizing the backing and rebuttal evidence from the test agencies’ voices to the key literature, we arrived at a partial acceptance of all three claims with the need for further research, calling into question a optimistic outlook underlying the test policy.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43794625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing Speaking in Context: Expanding the Construct and its Applications","authors":"Nichola Glasson","doi":"10.1080/15434303.2022.2095913","DOIUrl":"https://doi.org/10.1080/15434303.2022.2095913","url":null,"abstract":"By reviewing the key features of the theoretical underpinnings of existing assessment frameworks","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47967678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Investigation of the Impact of Jagged Profile on L2 Speaking Test Ratings: Evidence from Rating and Eye-tracking Data","authors":"Wenyue Ma, Paula M. Winke","doi":"10.1080/15434303.2022.2078720","DOIUrl":"https://doi.org/10.1080/15434303.2022.2078720","url":null,"abstract":"ABSTRACT The factors that influence rater scoring have been a subject of great interest to researchers in second language assessment. However, the research on the impact of test-takers’ speech profiles (e.g., a jagged or a flat profile reflecting analytic subscores) on raters’ scoring behaviors remains to be seen. To investigate the role of speech profiles in scoring, we collected analytic and holistic rating scores from 28 trained raters while they were marking the performances of three groups of speakers with distinct profiles, determined by prior ratings. We tracked eleven of the raters’ eye-movements to record how often and how long they looked at the various categories on the rating scales. We found that the raters perceived speakers who have better pronunciation as overall more competent speakers. Meanwhile, speakers’ score profiles influenced raters’ attention: raters fixated longer and more often, and made more eye-visits, to the lexical grammar category while assessing speakers with a jagged profile. Raters spent less time assessing the pronunciation of the speakers who were pre-identified as having better pronunciation. 
The findings shed light on the impact of speech characteristics on raters’ cognition and score assignments and therefore have important implications for rater training in L2 speaking assessments.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45956640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Fluctuating Effect of Thinking on Language Performance: New Evidence for the Island Ridge Curve","authors":"Yuyang Cai, Huilin Chen","doi":"10.1080/15434303.2022.2080553","DOIUrl":"https://doi.org/10.1080/15434303.2022.2080553","url":null,"abstract":"ABSTRACT Thinking skills play a critical role in determining language performance. Recent advancement in cognitive diagnostic modelling (CDM) provides a powerful tool for obtaining fine-grained information regarding these thinking skills during reading. Studies are scant, however, exploring the relations between thinking skills and language performance, not to mention studies examining the variation of this association with language proficiency. The current study explored this variation through the lens of the Island Ridge Curve (IRC). Drawing on an English reading test data by 2,285 students, we identified five thinking skills using CDM. Next, we followed guidelines of IRC and put students into four language proficiency groups to examine the relations of each skill identified through reading tasks to language performance across groups. Results of multi-group path analysis showed the effect of each skill identified through reading test fluctuated in the pattern of the IRC. The potential of IRC for examining the moderation of language proficiency on language factors is discussed.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42567040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating the Effects of Task Type and Linguistic Background on Accuracy in Automated Speech Recognition Systems: Implications for Use in Language Assessment of Young Learners","authors":"L. Hannah, H. Kim, E. Jang","doi":"10.1080/15434303.2022.2038172","DOIUrl":"https://doi.org/10.1080/15434303.2022.2038172","url":null,"abstract":"ABSTRACT As a branch of artificial intelligence, automated speech recognition (ASR) technology is increasingly used to detect speech, process it to text, and derive the meaning of natural language for various learning and assessment purposes. ASR inaccuracy may pose serious threats to valid score interpretations and fair score use for all when it is exacerbated by test takers’ characteristics, such as language background and accent, and assessment task type. The present study investigated the extent to which speech-to-text accuracy rates of three major ASR systems vary across different oral tasks and students’ language background variables. Results indicate that task types and students’ language backgrounds have statistically significant main and interaction effects on ASR accuracy. The paper discusses the implications of the study results for applying ASR to computerized assessment design and automated scoring.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45194954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}