Language Testing — Latest Publications

Assessing the speaking proficiency of L2 Chinese learners: Review of the Hanyu Shuiping Kouyu Kaoshi
IF 4.1 · Tier 1 · Literature
Language Testing Pub Date : 2023-04-27 DOI: 10.1177/02655322231163470
Albert W. Li
"I have seen a couple of international students that achieved good scores on the HSK level 5—the advanced-level Chinese proficiency test—and yet [they] can barely communicate at all in Chinese, not even daily conversation like 'how was your weekend?'" (A professor who teaches Chinese at a Confucius Institute in the USA, Interview, February 26, 2022)
Pages: 1007–1021
Citations: 0
Speaking performances, stakeholder perceptions, and test scores: Extrapolating from the Duolingo English test to the university
IF 4.1 · Tier 1 · Literature
Language Testing Pub Date : 2023-04-24 DOI: 10.1177/02655322231165984
Daniel R. Isbell, Dustin Crowther, H. Nishizawa
The extrapolation of test scores to a target domain—that is, the association between test performances and relevant real-world outcomes—is critical to valid score interpretation and use. This study examined the relationship between Duolingo English Test (DET) speaking scores and university stakeholders' evaluation of DET speaking performances. A total of 190 university stakeholders (45 faculty members, 39 administrative staff, 53 graduate students, 53 undergraduate students) evaluated the comprehensibility (ease of understanding) and academic acceptability of 100 DET test-takers' speaking performances. Academic acceptability was judged on speakers' suitability for communicative roles in the university context, including undergraduate study, group work in courses, graduate study, and teaching. Analyses indicated a large correlation between aggregate measures of comprehensibility and acceptability (r = .98). Acceptability ratings varied by role: acceptability for teaching was held to a notably higher standard than acceptability for undergraduate study. Stakeholder groups also differed in their ratings, with faculty tending to be more lenient in their ratings of comprehensibility and acceptability than undergraduate students and staff. Finally, both comprehensibility and acceptability measures correlated strongly with speakers' official DET scores and subscores (r = .74–.89), providing some support for the extrapolation of DET scores to academic contexts.
Citations: 1
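The aggregate correlation the study reports (r = .98) is a straightforward Pearson correlation between speaker-level mean ratings. A minimal sketch of that computation, using entirely hypothetical rating data rather than the study's dataset:

```python
import numpy as np

# Hypothetical speaker-level mean ratings on 1-7 scales; illustrative only,
# not the study's data.
comprehensibility = np.array([2.1, 2.8, 3.0, 3.6, 4.2, 4.5, 5.1, 5.8, 6.3, 6.9])
acceptability     = np.array([1.9, 2.6, 3.2, 3.5, 4.0, 4.7, 5.0, 6.0, 6.1, 7.0])

# Pearson correlation between the two aggregate measures
r = np.corrcoef(comprehensibility, acceptability)[0, 1]
print(f"r = {r:.2f}")
```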
Establishing meaning recall and meaning recognition vocabulary knowledge as distinct psychometric constructs in relation to reading proficiency
IF 4.1 · Tier 1 · Literature
Language Testing Pub Date : 2023-04-24 DOI: 10.1177/02655322231162853
J. Stewart, Henrik Gyllstad, Christopher Nicklin, Stuart Mclean
The purpose of this paper is to (a) establish whether meaning recall and meaning recognition item formats test psychometrically distinct constructs of vocabulary knowledge which measure separate skills, and, if so, (b) determine whether each construct possesses unique properties predictive of L2 reading proficiency. Factor analyses and hierarchical regression were conducted on results derived from the two vocabulary item formats in order to test this hypothesis. The results indicated that although the two-factor model had better fit and meaning recall and meaning recognition can be considered distinct psychometrically, discriminant validity between the two factors is questionable. In hierarchical regression models, meaning recognition knowledge did not make a statistically significant contribution to explaining reading proficiency over meaning recall knowledge. However, when the roles were reversed, meaning recall did make a significant contribution to the model beyond the variance explained by meaning recognition alone. The results suggest that meaning recognition does not tap into unique aspects of vocabulary knowledge and provide empirical support for meaning recall as a superior predictor of reading proficiency for research purposes.
Citations: 1
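The hierarchical regression logic described above amounts to comparing R² before and after adding each predictor. A sketch of that incremental-variance comparison on synthetic data; all variable names, effect sizes, and data below are invented for illustration, not taken from the study:

```python
import numpy as np

# Synthetic illustration of hierarchical regression with two correlated
# vocabulary measures; all data and effect sizes are invented.
rng = np.random.default_rng(42)
n = 200
recall = rng.normal(size=n)                             # meaning-recall scores
recognition = 0.7 * recall + 0.3 * rng.normal(size=n)   # heavily overlapping construct
reading = 0.6 * recall + 0.1 * recognition + rng.normal(scale=0.5, size=n)

def r_squared(y, *predictors):
    """R^2 of an OLS model with an intercept and the given predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Step 1: one format alone; Step 2: add the other format
r2_recognition = r_squared(reading, recognition)
r2_recall = r_squared(reading, recall)
r2_full = r_squared(reading, recognition, recall)
delta_recall = r2_full - r2_recognition        # unique contribution of recall
delta_recognition = r2_full - r2_recall        # unique contribution of recognition
print(f"ΔR²(recall) = {delta_recall:.3f}, ΔR²(recognition) = {delta_recognition:.3f}")
```

Adding a predictor can never lower R², so the question is whether the increment is large and statistically significant, which is what the study tested in both directions.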
Modeling local item dependence in C-tests with the loglinear Rasch model
IF 4.1 · Tier 1 · Literature
Language Testing Pub Date : 2023-04-15 DOI: 10.1177/02655322231155109
Purya Baghaei, K. Christensen
C-tests are gap-filling tests mainly used as rough and economical measures of second-language proficiency for placement and research purposes. A C-test usually consists of several short independent passages where the second half of every other word is deleted. Owing to their interdependent structure, C-test items violate the local independence assumption of IRT models, which poses problems for IRT analysis of C-tests. A few strategies and psychometric models have been suggested and employed in the literature to circumvent the problem. In this research, a new psychometric model, namely the loglinear Rasch model, is used for C-tests and the results are compared with the dichotomous Rasch model, where local item dependence is ignored. Findings showed that the loglinear Rasch model fits significantly better than the dichotomous Rasch model. Examination of the locally dependent items revealed no pattern in their content; it did, however, reveal that 50% of the dependent items were adjacent items. Implications of the study for modeling local dependence in C-tests using different approaches are discussed.
Pages: 820–827
Citations: 0
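The deletion rule the abstract describes—removing the second half of every other word—can be sketched as follows. The start offset and the rounding convention for "half" are assumptions for illustration; operational C-tests vary in such details:

```python
def make_ctest_gaps(text: str, start: int = 1, step: int = 2) -> str:
    """Replace the second half of every `step`-th word (from `start`) with
    underscores, mimicking the classic C-test deletion rule."""
    words = text.split()
    gapped = []
    for i, word in enumerate(words):
        if i >= start and (i - start) % step == 0 and len(word) > 1:
            keep = (len(word) + 1) // 2   # keep the first half, rounding up
            gapped.append(word[:keep] + "_" * (len(word) - keep))
        else:
            gapped.append(word)
    return " ".join(gapped)

print(make_ctest_gaps("The cat sat on the mat"))
# → "The ca_ sat o_ the ma_"
```

Because every other word in the same passage is gapped, answering one item can help with its neighbors, which is exactly the local dependence the loglinear Rasch model is meant to absorb.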
Examining the predictive validity of the Duolingo English Test: Evidence from a major UK university
IF 4.1 · Tier 1 · Literature
Language Testing Pub Date : 2023-04-03 DOI: 10.1177/02655322231158550
T. Isaacs, Ruolin Hu, D. Trenkic, J. Varga
The COVID-19 pandemic has changed the university admissions and proficiency testing landscape. One change has been the meteoric rise in use of the fully automated Duolingo English Test (DET) for university entrance purposes, offering test-takers a cheaper, shorter, more accessible alternative. This rapid response study is the first to investigate the predictive value of DET scores in relation to university students' academic attainment, taking into account students' degree level, academic discipline, and nationality. We also compared DET test-takers' academic performance with that of students admitted using traditional proficiency tests. Credit-weighted first-year academic grades of 1881 DET test-takers (1389 postgraduate, 492 undergraduate) enrolled at a large, research-intensive London university in Autumn 2020 were positively associated with DET Overall scores for postgraduate students (adj. r = .195) but not undergraduate students (adj. r = −.112). This result was mirrored in correlational patterns for students admitted through IELTS (n = 2651) and TOEFL iBT (n = 436), contributing to criterion-related validity evidence. Students admitted with DET showed lower academic attainment than the IELTS and TOEFL iBT test-takers, although sample characteristics may have shaped this finding. We discuss implications for establishing cut scores and harnessing test-takers' academic language development through pre-sessional and in-sessional support.
Pages: 748–770
Citations: 2
Temporal fluency and floor/ceiling scoring of intermediate and advanced speech on the ACTFL Spanish Oral Proficiency Interview–computer
IF 4.1 · Tier 1 · Literature
Language Testing Pub Date : 2023-04-01 DOI: 10.1177/02655322221114614
Troy L. Cox, Alan V. Brown, Gregory L. Thompson
The rating of proficiency tests that use the Interagency Language Roundtable (ILR) and American Council on the Teaching of Foreign Languages (ACTFL) guidelines assumes that each major level is based on hierarchical linguistic functions requiring mastery of multidimensional traits, such that each level subsumes the levels beneath it. These characteristics are part of what is commonly referred to as floor and ceiling scoring. In this binary approach to scoring, which differentiates between sustained performance and linguistic breakdown, raters evaluate many features including vocabulary use, grammatical accuracy, pronunciation, and pragmatics, yet there has been very little empirical validation of the practice of floor/ceiling scoring. This study examined the relationship between temporal oral fluency, prompt type, and proficiency level based on a data set comprising 147 Oral Proficiency Interview–computer (OPIc) exam responses whose ratings ranged from Intermediate Low to Advanced High (AH). As speakers progressed in proficiency, they were more fluent. In terms of floor and ceiling scoring, the prompts that elicited speech a level above the sustained level generally resulted in speech that was slower and had more breakdown than the floor-level prompts, though the differences were slight and not statistically significant. Thus, temporal fluency features alone are insufficient for floor/ceiling scoring but are likely a contributing feature.
Pages: 325–351
Citations: 0
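Temporal fluency in studies like this is typically operationalized with simple timing-based measures. A sketch of two common ones; the function names and the example numbers are hypothetical, not taken from the study:

```python
# Two common temporal-fluency measures; the example numbers are hypothetical.
def speech_rate(n_syllables: int, total_time_s: float) -> float:
    """Syllables per second over the whole response, pauses included."""
    return n_syllables / total_time_s

def articulation_rate(n_syllables: int, total_time_s: float,
                      pause_time_s: float) -> float:
    """Syllables per second of phonation time, pauses excluded."""
    return n_syllables / (total_time_s - pause_time_s)

# A 2-minute response containing 300 syllables and 20 s of silent pausing
print(speech_rate(300, 120.0))              # 2.5 syllables/s
print(articulation_rate(300, 120.0, 20.0))  # 3.0 syllables/s
```

The gap between the two rates reflects pausing behavior, which is one way "breakdown" on an above-level prompt can surface in the timing data.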
The distribution of cognates and their impact on response accuracy in the EIKEN tests
IF 4.1 · Tier 1 · Literature
Language Testing Pub Date : 2023-03-26 DOI: 10.1177/02655322231158551
David Allen, Keita Nakamura
Although there is abundant evidence for the use of first-language (L1) knowledge by bilinguals when using a second language (L2), investigation into the impact of L1 knowledge in large-scale L2 language assessments, and discussion of how such impact may be controlled, has received little attention in the language assessment literature. This study examines these issues by investigating the use of L1-Japanese loanword knowledge in test items targeting L2-English lexical knowledge in the Reading section of EIKEN grade-level tests, which are primarily taken by Japanese learners of English. First, the proportion of English target words that have loanwords in Japanese was determined through analysis of corpus-derived wordlists, revealing that the distribution of such items is broadly similar to that in language in general. Second, the impact of loanword frequency in Japanese (and cognate status) was demonstrated through statistical analysis of response data for the items. Taken together, the findings highlight the scope and impact of such cognate items in large-scale language assessments. Discussion centers on how test developers can and/or should deal with the inclusion of cognate words in terms of context validity and test fairness.
Pages: 771–795
Citations: 0
Measuring the development of general language skills in English as a foreign language—Longitudinal invariance of the C-test
IF 4.1 · Tier 1 · Literature
Language Testing Pub Date : 2023-03-25 DOI: 10.1177/02655322231159829
Birger Schnoor, J. Hartig, Thorsten Klinger, Alexander Naumann, I. Usanova
Research on assessing English as a foreign language (EFL) development has been growing recently. However, empirical evidence from longitudinal analyses based on substantial samples is still needed. In such settings, tests for measuring language development must meet high standards of test quality such as validity, reliability, and objectivity, and must allow for valid interpretations of change scores, which requires longitudinal measurement invariance. The current study has a methodological focus and aims to examine the measurement invariance of a C-test used to assess EFL development in monolingual and bilingual secondary school students (n = 1956) in Germany. We apply longitudinal confirmatory factor analysis to test invariance hypotheses and obtain proficiency estimates comparable over time. As a result, we achieve residual longitudinal measurement invariance. Furthermore, our analyses support the appropriateness of altering texts in a longitudinal C-test design, which allows texts to be anchored between waves so that the information from repeated texts can be used to establish comparability of measurements over time and estimate change in test scores. If used in such a design, a C-test provides reliable, valid, and efficient measures of EFL development in secondary education for bilingual and monolingual students in Germany.
Pages: 796–819
Citations: 0
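Invariance testing of the kind described above typically compares increasingly constrained nested CFA models with a chi-square difference test. A generic sketch of that comparison; the fit statistics below are hypothetical, not the study's:

```python
from scipy.stats import chi2

def chisq_diff_test(chisq_constrained: float, df_constrained: int,
                    chisq_free: float, df_free: int):
    """Chi-square difference test for nested CFA models
    (e.g., metric invariance vs. the configural baseline)."""
    d_chisq = chisq_constrained - chisq_free
    d_df = df_constrained - df_free
    return d_chisq, d_df, chi2.sf(d_chisq, d_df)

# Hypothetical fit statistics; a non-significant difference means the added
# equality constraints do not worsen fit, supporting that level of invariance.
d_chisq, d_df, p = chisq_diff_test(512.3, 250, 498.7, 240)
print(f"Δχ² = {d_chisq:.1f}, Δdf = {d_df}, p = {p:.3f}")
```

In practice this test is repeated up the invariance hierarchy (configural → metric → scalar → residual), stopping at the strictest level the data support; the study reports reaching residual invariance.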
Operationalizing the reading-into-writing construct in analytic rating scales: Effects of different approaches on rating
IF 4.1 · Tier 1 · Literature
Language Testing Pub Date : 2023-03-20 DOI: 10.1177/02655322231155561
Santi B. Lestari, Tineke Brunfaut
Assessing integrated reading-into-writing task performances is known to be challenging, and analytic rating scales have been found to better facilitate the scoring of these performances than other common types of rating scales. However, little is known about how specific operationalizations of the reading-into-writing construct in analytic rating scales may affect rating quality, and by extension score inferences and uses. Using two different analytic rating scales as proxies for two approaches to reading-into-writing construct operationalization, this study investigated the extent to which these approaches affect rating reliability and consistency. Twenty raters rated a set of reading-into-writing performances twice, each time using a different analytic rating scale, and completed post-rating questionnaires. The findings resulting from our convergent explanatory mixed-method research design show that both analytic rating scales functioned well, further supporting the use of analytic rating scales for scoring reading-into-writing. Raters reported that either type of analytic rating scale prompted them to attend to the reading-related aspects of reading-into-writing, although rating these aspects remained more challenging than judging writing-related aspects. The two scales differed, however, in the extent to which they led raters to uniform interpretations of performance difficulty levels. This study has implications for reading-into-writing scale design and rater training.
Pages: 684–722
Citations: 1
Ukrainian language proficiency test review
IF 4.1 · Tier 1 · Literature
Language Testing Pub Date : 2023-03-14 DOI: 10.1177/02655322231156819
Daniil M. Ozernyi, Ruslan Suvorov
The Ukrainian Language Proficiency (ULP) test, officially titled Exam of the level of mastery of the official language (Ispyt na riven' volodinnya derzhavnoyu movoyu), is a new test launched in Summer 2021. The name of the test in Ukrainian, incidentally, does not contain the words "Ukrainian" or "foreign language." According to the state regulations (Kabinet Ministriv Ukrayiny [KMU], 2021a; Natsional'na Komisiya zi Standartiv Derzhavnoyi Movy [NKSDM], 2021a, 2021b), the levels of mastery of Ukrainian in the test are aligned with the CEFR levels. The test was introduced as a product of the law on the official language of Ukraine, which mandated that civil servants and citizens being naturalized are fully able to use Ukrainian in performing their duties. The ULP test comprises two versions: (a) ULP for acquisition of Ukrainian citizenship (Ispyt na riven' volodinnya derzhavnoyu movoyu (dlya nabuttya hromadyanstva)), and (b) ULP 2.0 for holding civil office (Ispyt na riven' volodinnya derzhavnoyu movoyu 2.0 (dlya vykonannya sluzhbovyh obov'yazkiv)). To differentiate between the two versions of the test in this review, we will refer to the former version as ULP-C and to the latter version as ULP 2.0. The purpose of this review is to apply Kunnan's (2018) fairness and justice framework to evaluate both ULP-C and ULP 2.0 since they are united by (a) the alignment with the CEFR scale which poses ULP 2.0 as a continuation of ULP-C, (b) the same
Pages: 828–839
Citations: 0