{"title":"Low Print Literacy and Its Representation in Research and Policy","authors":"B. Deygers, Martha Bigelow, Joseph Lo Bianco, Darshini Nadarajan, M. Tani","doi":"10.1080/15434303.2021.1903471","DOIUrl":"https://doi.org/10.1080/15434303.2021.1903471","url":null,"abstract":"ABSTRACT This paper constitutes an edited transcript of two online panels, conducted with four scholars whose complementary expertise regarding print literacy and migration offers a thought-provoking and innovative window on the representation of print literacy in applied linguistic research and in migration policy. The panel members are experts on language policy, literacy, proficiency and human capital research. Together, they address a range of interrelated matters: the constructs of language proficiency and literacy (with significant implications for assessment), the idea of literacy as human capital or as a human right, the urgent need for policy literacy among applied linguists, and the responsibility of applied linguistics in the literacy debate.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15434303.2021.1903471","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42008492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Literature","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"English Language Proficiency Testing in Asia: A New Paradigm Bridging Global and Local Contexts","authors":"Davy Tran, Becky H. Huang","doi":"10.1080/15434303.2021.1903469","DOIUrl":"https://doi.org/10.1080/15434303.2021.1903469","url":null,"abstract":"","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15434303.2021.1903469","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41985053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Literature","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating the Skills Involved in Reading Test Tasks through Expert Judgement and Verbal Protocol Analysis: Convergence and Divergence between the Two Methods","authors":"Xiaohua Liu, J. Read","doi":"10.1080/15434303.2021.1881964","DOIUrl":"https://doi.org/10.1080/15434303.2021.1881964","url":null,"abstract":"ABSTRACT Expert judgement has frequently been employed with reading assessments to gauge the skills potentially measured by test tasks, for purposes such as construct validation or producing diagnostic information. Despite the critical role it plays in such endeavours, few studies have triangulated its results with other types of data, such as reported test-taking processes. A lack of such triangulation may bring the validity of experts’ judgements into question and undermine the credibility of subsequent procedures that build on them. In light of this, this study compared two groups of language experts’ judgements on the content of two sets of reading test tasks with ten university students’ verbal reports on solving those tasks. The two sources converged on what a task was mainly assessing for about 53% of the test tasks, but diverged more widely on the specific skills involved in each task. A careful examination of the discrepancies between the two sources revealed that they were attributable to a number of factors. This study highlights the need to cross-check the results of expert judgement against other data sources. Implications for future test development and research are also discussed.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15434303.2021.1881964","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49382043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Literature","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Life for English Language Education: An Interview with Oryang Kwon","authors":"Oryang Kwon, Won-Key Lee","doi":"10.1080/15434303.2020.1859512","DOIUrl":"https://doi.org/10.1080/15434303.2020.1859512","url":null,"abstract":"Introduction","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15434303.2020.1859512","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42235256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Literature","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is Frequency Enough?: The Frequency Model in Vocabulary Size Testing","authors":"Brett Hashimoto","doi":"10.1080/15434303.2020.1860058","DOIUrl":"https://doi.org/10.1080/15434303.2020.1860058","url":null,"abstract":"ABSTRACT Modern vocabulary size tests are generally based on the notion that the more frequent a word is in a language, the more likely a learner is to know it. However, this assumption has seldom been questioned in the literature on vocabulary size tests. Using the Vocabulary of American-English Size Test (VAST), based on the Corpus of Contemporary American English (COCA), 403 English language learners were tested on a 10% systematic random sample of the 5,000 most frequent words in that corpus. The Pearson correlation between Rasch item difficulty (the probability that test-takers will know a word) and frequency was only r = 0.50 (r2 = 0.25). This moderate correlation indicates that frequency can predict which words are known with only a limited degree of accuracy and that other factors also affect the order of vocabulary acquisition. Additionally, the use of vocabulary levels/bands of 1,000 words in the structure of vocabulary size tests is shown to be questionable as well. These findings call into question the construct validity of modern vocabulary size tests. However, future confirmatory research is necessary to comprehensively determine the degree to which word frequency and learners’ vocabulary size are related.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15434303.2020.1860058","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45218924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Literature","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating Writing Process Features in an Adult EFL Writing Assessment Context: A Keystroke Logging Study","authors":"Ikkyu Choi, P. Deane","doi":"10.1080/15434303.2020.1804913","DOIUrl":"https://doi.org/10.1080/15434303.2020.1804913","url":null,"abstract":"ABSTRACT Keystroke logs provide a comprehensive record of observable writing processes. Previous studies examining the keystroke logs of young L1 English writers performing experimental writing tasks have identified writing process features predictive of the quality of responses. In contrast, large-scale studies on the dynamic and temporal nature of the L2 writing process are scarce, especially in assessment settings. This study utilized the keystroke logs of adult English as a foreign language (EFL) learners responding to assessment tasks to examine the usefulness of the process features in this new context. We evaluated the features in terms of stability, explored factor structures for their correlations, and constructed models to predict response quality. The results showed that most of the process features were stable and that their correlations could be efficiently represented with a five-factor structure. Moreover, the process features improved response quality prediction over a baseline by up to 48%. These findings have implications for the evaluation of writing process features and for the substantive understanding of writing processes under assessment conditions.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15434303.2020.1804913","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42868108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Literature","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Local language testing: design, implementation, and development","authors":"Mutleb Alnafisah, S. Baghestani, Abdulrahman A. Alharthi","doi":"10.1080/15434303.2021.1897594","DOIUrl":"https://doi.org/10.1080/15434303.2021.1897594","url":null,"abstract":"Throughout the language testing literature, there is a clear distinction between standardized tests, which are produced by testing companies and designed to be used across multiple institutions, and local tests, which are developed and used at a specific institution but are larger in scale than a classroom test. Local language tests are important because they can be tailored to the needs of the local instructional context in terms of which constructs and ability levels they assess. Nevertheless, the stakeholders who are in a position to develop local language tests (e.g., language instructors and program or level coordinators) often lack formal assessment training. Local Language Testing: Design, Implementation, and Development addresses this concern by offering accessible, comprehensive guidance for non-testing experts (as well as more seasoned language testers) who are interested in developing, administering, and maintaining local language assessments at their institution. Each chapter illustrates the kinds of constraints and challenges local language testers may face and offers solutions that can be adapted to the available resources and expertise. One of the book’s main objectives is to draw readers’ attention to the educational benefits of local language tests. A vital characteristic of local tests, which the authors emphasize throughout the book, is their basis in the instructional context; a sufficient understanding of that context should dictate and guide the development, administration, and maintenance of the test. For this reason, the authors draw on their personal experiences with four different local tests to offer real examples and practical advice on how their local contexts shaped their approach to test development. The four tests are the Oral English Proficiency Test (OEPT) at Purdue University, the Test of Oral English Proficiency for Academic Staff (TOEPAS) at the University of Copenhagen, the English Placement Test (EPT) at the University of Illinois at Urbana-Champaign, and the Assessment of College English, International (Ace-IN) at Purdue University. The first three chapters cover foundational principles for local testing and highlight the features that differentiate it from standardized testing and classroom assessment. The first chapter is introductory; its take-away message is the centrality of understanding the local context (i.e., the educational goals and values of a particular institution or program) for successfully developing a local test. The second chapter discusses aspects of local instructional contexts that influence language test design, such as the status of English and preferred instructional approaches. Understanding these variations enables test developers to better define and operationalize test constructs, enhancing the quality of the assessments. The third chapter introduces the authors’ conceptua","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15434303.2021.1897594","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44325949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Literature","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Text Authenticity in Listening Assessment: Can Item Writers Be Trained to Produce Authentic-sounding Texts?","authors":"Olena Rossi, Tineke Brunfaut","doi":"10.1080/15434303.2021.1895162","DOIUrl":"https://doi.org/10.1080/15434303.2021.1895162","url":null,"abstract":"ABSTRACT A long-standing debate in the testing of listening concerns the authenticity of the listening input. On the one hand, listening texts produced by item writers often lack spoken language characteristics. On the other hand, real-life recordings are often too context-specific to stand alone, or not suitable for item generation. In this study, we explored the effectiveness of an existing item-writing training course in producing authentic-sounding listening texts within the constraints of test specifications. Twenty-five trainees took an online item-writing course that included training on creating authentic-sounding listening texts. Before and after the course, they developed a listening task. The resulting listening texts were judged for authenticity by three professional item reviewers and analysed linguistically by the researchers. Additionally, we interviewed the trainees after each item-writing event and analysed their online discussions during the course. Statistical comparison of the pre- and post-course authenticity scores revealed a positive effect of the training on item writers’ ability to produce authentic-sounding listening texts, and the linguistic analysis showed that the texts produced after the training contained more instances of spoken language. The interviews and discussions revealed that item writers’ awareness of spoken language features and their text production techniques influenced their ability to develop authentic-sounding texts.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15434303.2021.1895162","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43208448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Literature","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Serendipitous: Lessons and Insights into Language Assessment from Catherine Elder","authors":"Interviewed by R Roz Hirch","doi":"10.1080/15434303.2020.1863967","DOIUrl":"https://doi.org/10.1080/15434303.2020.1863967","url":null,"abstract":"ABSTRACT The following interview was conducted with Catherine Elder in spring of 2020, at the beginning of the pandemic. Cathie has had a varied career in language testing, including work at universities in Australia and New Zealand and at the Language Testing Resource Center in Melbourne. In this interview, Cathie shares some highlights of her somewhat serendipitous career as well as lessons she has learned along the way and insights into possible future directions for language assessment.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15434303.2020.1863967","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43215116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Literature","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Testing Language, but What?: Examining the Carrier Content of IELTS Preparation Materials from a Critical Perspective","authors":"M. Noori, Seyyed-Abdolhamid Mirhosseini","doi":"10.1080/15434303.2021.1883618","DOIUrl":"https://doi.org/10.1080/15434303.2021.1883618","url":null,"abstract":"ABSTRACT The implicit sociocultural functioning of the content of high-stakes English language proficiency tests is a rarely explored concern in language assessment. This study brings critical views of language testing and critical discourse studies together to examine the content of IELTS preparation materials in search of the topics that are reflected and reproduced through this content. Fourteen sample tests (including reading texts, transcripts of listening files, speaking cue-cards, and writing topics) were investigated through a qualitative content analysis process. The 663 coded episodes that emerged fell into four major categories of topics that shape the overall content of these IELTS practice books: Entertainment, Money, Nature, and Education, plus a miscellaneous set of less prominent topics. The findings indicate the discursive accentuation of specific aspects of these themes, as well as certain patterns of inclusion/exclusion of settings and participants. We argue that the discursive construction of such a content landscape can shape specific sociocultural orientations, and can naturalize and reproduce mental models and values far from the universal face of an international high-stakes test.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15434303.2021.1883618","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43707062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Literature","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}