Latest Publications in Language Testing

Measuring bilingual language dominance: An examination of the reliability of the Bilingual Language Profile
Daniel J. Olson
Language Testing (IF 4.1, CAS Tier 1, Literature) · Pub Date: 2023-01-12 · DOI: 10.1177/02655322221139162 · Vol. 40(1), pp. 521–547

Abstract: Measuring language dominance, broadly defined as the relative strength of each of a bilingual's two languages, remains a crucial methodological issue in bilingualism research. While various methods have been proposed, the Bilingual Language Profile (BLP) has been one of the most widely used tools for measuring language dominance. While previous studies have begun to establish its validity, the BLP has yet to be systematically evaluated with respect to reliability. Addressing this methodological gap, the current study examines the reliability of the BLP, employing a test–retest methodology with a large (N = 248), varied sample of Spanish–English bilinguals. Analysis focuses on the test–retest reliability of the overall dominance score, the dominant and non-dominant global language scores, and the subcomponent scores. The results demonstrate that the language dominance score produced by the BLP shows "excellent" levels of test–retest reliability. In addition, while some differences were found between the reliability of global language scores for the dominant and non-dominant languages, and for the different subcomponent scores, all components of the BLP display strong reliability. Taken as a whole, this study provides evidence for the reliability of the BLP as a measure of bilingual language dominance.

Citations: 1
Book Review: Reflecting on the Common European Framework of Reference for Languages and its companion volume
Claudia Harsch
Language Testing (IF 4.1, CAS Tier 1, Literature) · Pub Date: 2023-01-04 · DOI: 10.1177/02655322221144788 · Vol. 40(1), pp. 453–457

References cited in the review:
Aryadoust, V., Ng, L. Y., & Sayama, H. (2020). A comprehensive review of Rasch measurement in language assessment: Recommendations and guidelines for research. Language Testing, 38(1), 6–40. https://doi.org/10.1177/0265532220927487
Berrío, Á. I., Gómez-Benito, J., & Arias-Patiño, E. M. (2020). Developments and trends in research on methods of detecting differential item functioning. Educational Research Review, 31, Article 100340. https://doi.org/10.1016/j.edurev.2020.100340
Choi, Y.-J., & Asilkalkan, A. (2019). R packages for item response theory analysis: Descriptions and features. Measurement: Interdisciplinary Research and Perspectives, 17(3), 168–175. https://doi.org/10.1080/15366367.2019.1586404
Desjardins, C. D., & Bulut, O. (2018). Handbook of educational measurement and psychometrics using R. CRC Press. https://doi.org/10.1201/b20498
Linacre, J. M. (2022a). Facets computer program for many-facet Rasch measurement (Version 3.84.0). Winsteps.
Linacre, J. M. (2022b). Winsteps® Rasch measurement computer program (Version 5.3.1). Winsteps.
Luo, Y., & Jiao, H. (2017). Using the Stan program for Bayesian item response theory. Educational and Psychological Measurement, 78(3), 384–408. https://doi.org/10.1177/0013164417693666
Nicklin, C., & Vitta, J. P. (2022). Assessing Rasch measurement estimation methods across R packages with yes/no vocabulary test data. Language Testing, 39(4), 513–540. https://doi.org/10.1177/02655322211066822
Yildiz, H. (2021). IrtGUI: Item response theory analysis with a graphic user interface (R Package Version 0.2). https://CRAN.R-project.org/package=irtGUI

Citations: 0
Construct validity and fairness of an operational listening test with World Englishes
H. Nishizawa
Language Testing (IF 4.1, CAS Tier 1, Literature) · Pub Date: 2023-01-04 · DOI: 10.1177/02655322221137869 · Vol. 40(1), pp. 493–520

Abstract: In this study, I investigate the construct validity and fairness pertaining to the use of a variety of Englishes in listening test input. I obtained data from a post-entry English language placement test administered at a public university in the United States. In addition to expectedly familiar American English, the test features Hawai'i, Filipino, and Indian English, which are expectedly less familiar to our test takers, but justified by the context. I used confirmatory factor analysis to test whether the category of unfamiliar English items formed a latent factor distinct from the other category of more familiar American English items. I used Rasch-based differential item functioning analysis to examine item biases as a function of examinees' place of origin. The results from the confirmatory factor analysis suggested that the unfamiliar English items tapped into the same underlying construct as the familiar English items. The Rasch-based differential item functioning analysis revealed many instances of item bias among unfamiliar English items, with higher proportions of item biases for items targeting narrow comprehension than broad comprehension. However, at the test level, the unfamiliar English items did not substantially influence raw total scores. These findings offer support for using a variety of Englishes in listening tests.

Citations: 1
The vexing problem of validity and the future of second language assessment
Vahid Aryadoust
Language Testing (IF 4.1, CAS Tier 1, Literature) · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221125204 · Vol. 40(1), pp. 8–14

Abstract: Construct validity and building validity arguments are some of the main challenges facing the language assessment community. The notion of construct validity and validity arguments arose from research in psychological assessment and developed into the gold standard of validation/validity research in language assessment. At a theoretical level, construct validity and validity arguments conflate the scientific reasoning in assessment and policy matters of ethics. Thus, a test validator is expected to simultaneously serve the role of conducting scientific research and examining the consequential basis of assessments. I contend that validity investigations should be decoupled from the ethical and social aspects of assessment. In addition, the near-exclusive focus of empirical construct validity research on cognitive processing has not resulted in sufficient accuracy and replicability in predicting test takers' performance in real language use domains. Accordingly, I underscore the significance of prediction in validation, in contrast to explanation, and propose that the question to ask might not so much be about what a test measures as what type of methods and tools can better generate language use profiles. Finally, I suggest that interdisciplinary alliances with cognitive and computational neuroscience and artificial intelligence (AI) fields should be forged to meet the demands of language assessment in the 21st century.

Citations: 5
Test design and validity evidence of interactive speaking assessment in the era of emerging technologies
Soo Jung Youn
Language Testing (IF 4.1, CAS Tier 1, Literature) · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221126606 · Vol. 40(1), pp. 54–60

Abstract: As access to smartphones and emerging technologies has become ubiquitous in our daily lives and in language learning, technology-mediated social interaction has become common in teaching and assessing L2 speaking. The changing ecology of L2 spoken interaction provides language educators and testers with opportunities for renewed test design and the gathering of context-sensitive validity evidence of interactive speaking assessment. First, I review the current research on interactive speaking assessment, focusing on commonly used test formats and types of validity evidence. Second, I discuss recent research that reports the use of artificial intelligence and technologies in teaching and assessing speaking in order to understand how and what evidence of interactive speaking is elicited. Based on the discussion, I argue that it is critical to identify what features of interactive speaking are elicited depending on the types of technology-mediated interaction for valid assessment decisions in relation to intended uses. I further discuss opportunities and challenges for future research on test design and eliciting validity evidence of interactive speaking using technology-mediated interaction.

Citations: 0
Forty years of Language Testing, and the changing paths of publishing
Paula M. Winke
Language Testing (IF 4.1, CAS Tier 1, Literature) · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221136802 · Vol. 40(1), pp. 3–7

Citations: 0
Epilogue—Note from an outgoing editor
L. Harding
Language Testing (IF 4.1, CAS Tier 1, Literature) · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221138339 · Vol. 40(1), pp. 204–205

Abstract: In this brief epilogue, outgoing editor Luke Harding reflects on his time as editor and considers the future of Language Testing.

Citations: 0
Towards a new sophistication in vocabulary assessment
J. Read
Language Testing (IF 4.1, CAS Tier 1, Literature) · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221125698 · Vol. 40(1), pp. 40–46

Abstract: Published work on vocabulary assessment has grown substantially in the last 10 years, but it is still somewhat outside the mainstream of the field. There has been a recent call for those developing vocabulary tests to apply professional standards to their work, especially in validating their instruments for specified purposes before releasing them for widespread use. A great deal of work on vocabulary assessment can be seen in terms of the somewhat problematic distinction between breadth and depth of vocabulary knowledge. Breadth refers to assessing vocabulary size, based on a large sample of words from a frequency list. New research is raising questions about the suitability of word frequency norms derived from large corpora, the choice of the word family as the unit of analysis, the selection of appropriate test formats, and the role of guessing in test-taker performance. Depth of knowledge goes beyond the basic form-meaning link to consider other aspects of word knowledge. The concept of word association has played a dominant role in the design of such tests, but there is a need to create test formats to assess knowledge of word parts as well as a range of multi-word items apart from collocation.

Citations: 1
Future challenges and opportunities in language testing and assessment: Basic questions and principles at the forefront
Tineke Brunfaut
Language Testing (IF 4.1, CAS Tier 1, Literature) · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221127896 · Vol. 40(1), pp. 15–23

Abstract: In this invited Viewpoint on the occasion of the 40th anniversary of the journal Language Testing, I argue that at the core of future challenges and opportunities for the field—both in scholarly and operational respects—remain basic questions and principles in language testing and assessment. Despite the high levels of sophistication of issues looked into, and methodological and operational solutions found, outstanding concerns still amount to: what are we testing, how are we testing, and why are we testing? Guided by these questions, I call for more thorough and adequate language use domain definitions (and a suitable broadening of research and testing methodologies to determine these), more comprehensive operationalizations of these domain definitions (especially in the context of technology in language testing), and deeper considerations of test purposes/uses and of their connections with domain definitions. To achieve this, I maintain that the field needs to continue investing in the topics of validation, ethics, and language assessment literacy, and engaging with broader fields of enquiry such as (applied) linguistics. I also encourage a more synthetic look at the existing knowledge base in order to build on this, and further diversification of voices in language testing and assessment research and practice.

Citations: 1
Administration, labor, and love
A. Ginther
Language Testing (IF 4.1, CAS Tier 1, Literature) · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221127365 · Vol. 40(1), pp. 31–39

Abstract: Great opportunities for language testing practitioners are enabled through language program administration. Local language tests lend themselves to multiple purposes—for placement and diagnosis, as a means of tracking progress, and as a contribution to program evaluation and revision. Administrative choices, especially those involving a test, are strategic and can be used to transform a program's identity and effectiveness over time.

Citations: 0