{"title":"C. Coombe, P. Davidson, B. O’Sullivan & S. Stoynoff. The Cambridge guide to second language assessment","authors":"Naoki Ikeda","doi":"10.58379/cwjj2079","DOIUrl":"https://doi.org/10.58379/cwjj2079","url":null,"abstract":"<jats:p>n/a</jats:p>","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90662435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Test of English as a Foreign Language (TOEFL): Interpretation of multiple score reports for ESL placement","authors":"Kateryna Kokhan","doi":"10.58379/zjdv7047","DOIUrl":"https://doi.org/10.58379/zjdv7047","url":null,"abstract":"The vast majority of U.S. universities nowadays accept TOEFL iBT scores for admission and placement into ESL classes. A significant number of candidates choose to repeat the test hoping to get higher results. Due to the significant increase in the number of international students, the University of Illinois at Urbana-Champaign (UIUC) is currently seeking the most cost-effective ESL placement policy to regulate the ESL placement of TOEFL repeaters. Since there is little published research examining students’ multiple TOEFL iBT score reports, and there are no guidelines for the interpretation of multiple scores provided by the test publisher, this paper attempts to address the interpretation and use of TOEFL iBT repeaters’ scores for making ESL placement decisions in the context of UIUC. The main research question considered in our study was: Which TOEFL iBT scores (official highest, most recent, average or self-reported scores) are the best predictors of ESL placement? The findings indicate that the self-reported and the highest TOEFL iBT scores have the strongest association with the ESL placement results. The self-reported and the highest scores also demonstrate the highest classification efficiency in predicting ESL placement of TOEFL iBT repeaters. The results and implications of the study are discussed.","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73155134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Concepts underpinning innovations to second language proficiency scales inclusive of Aboriginal and Torres Strait Islander learners: a dynamic process in progress","authors":"Catherine Hudson, Denise Angelo","doi":"10.58379/wlrv4810","DOIUrl":"https://doi.org/10.58379/wlrv4810","url":null,"abstract":"This paper discusses the concepts underlying two proficiency scale innovations which include and describe the development of Aboriginal and Torres Strait Islander learners of Standard Australian English (SAE). Both scales, developed in Queensland, are adaptations of the National Languages and Literacy Institute of Australia (NLLIA) ESL Bandscales (McKay, Hudson, & Sapuppo, 1994). The revisions attempt to describe very complex terrain: the development of SAE by cohorts of Indigenous students, whose first languages are for the most part generated by language contact (English-lexified creoles or related varieties) in a range of language ecologies (second or foreign language or dialect learning situations), and who are undertaking their schooling in whole-class, mainstream curriculum contexts with SAE as the medium of instruction (Angelo, 2013). This work is of both national and international significance due to the growing awareness of the need for more valid language assessment of the diverse cohorts of students who have complex language backgrounds in relation to a standard language of education, such as non-standard dialects, contact languages, or ‘long-term’ language learners from indigenous or ethnic communities undergoing language shift. The concepts discussed suggest ways to capture students’ learning trajectories which are otherwise not visible in standardised L1 (literacy) assessments nor in typical L2 proficiency tools.","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80507960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The role of word recognition skill in academic success across disciplines in an ELF university setting","authors":"M. Harrington, T. Roche","doi":"10.58379/qpew8104","DOIUrl":"https://doi.org/10.58379/qpew8104","url":null,"abstract":"Previous research (Harrington & Roche, 2014) showed that the Timed Yes/No Test (a measure of vocabulary size and response speed) is an effective tool for screening undergraduate students at risk of failure in English-as-a-Lingua-Franca (ELF) university settings. This study examines how well performance on the test predicts grade point averages across different academic disciplines in one of those contexts, an ELF university in Oman. First year students (N = 280) from four academic disciplines (Humanities, IT, Business and Engineering) completed Basic and Advanced versions of the Timed Yes/No Test. The predictive validity of word recognition accuracy (a proxy for size) and response time measures on GPA outcomes was examined independently and in combination. Two patterns emerged. Word accuracy was a better predictor of academic performance than response time for three of the groups, with Engineering the exception, accounting for as much as 25% of variance in GPA. Response time accounted for no additional unique variance in the three groups after accuracy scores were accounted for. In contrast, accuracy was not a significant predictor of GPA for the Engineering group but response time was, accounting for 40% of the variance in academic performance. The findings are related to the use of the Timed Yes/No Test as a reliable and cost-effective screening tool in Post Enrolment Language Assessment (PELA) applications in ELF settings.","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73443874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identification and assessment contexts of Aboriginal and Torres Strait Islander learners of Standard Australian English: Challenges for the language testing community","authors":"Denise Angelo","doi":"10.58379/xany2922","DOIUrl":"https://doi.org/10.58379/xany2922","url":null,"abstract":"The paper discusses the contexts of language backgrounds, language learning, policy and assessment relating to Aboriginal and Torres Strait Islander (Indigenous) students who are learning Standard Australian English (SAE) as an Additional Language or Dialect (EAL/D) in the state of Queensland. Complexities surrounding this cohort’s language situations and their language learning are explained in order to reveal why existing processes are not reliably identifying or assessing those Indigenous students who are indeed EAL/D learners. In particular, it is argued, EAL/D processes and assessment instruments need to acknowledge and respond to the challenges posed by the rich and varied Indigenous language ecologies generated through language contact. System-level data does not disaggregate Indigenous EAL/D learners, nor correlate their levels of second language SAE proficiency with their academic performance data. Indigenous students are, however, over-represented in Queensland’s National Assessment Program – Literacy and Numeracy (NAPLAN) under-performance data, and raising their performance is a national priority targeted through many government initiatives. Indigenous students comprise a highly heterogeneous group in terms of their cultural, linguistic and schooling backgrounds, and Indigenous EAL/D learners, too, represent a diverse grouping which has only been included relatively recently in Australian second language assessment tools, and around which there has been little extensive discussion, despite significant complexity surrounding this cohort. This paper explores the background contextual issues involved in identifying and assessing Indigenous EAL/D learners equitably and reliably.","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81255991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A framework for validating post-entry language assessments (PELAs)","authors":"U. Knoch, C. Elder","doi":"10.58379/yzlq8816","DOIUrl":"https://doi.org/10.58379/yzlq8816","url":null,"abstract":"<jats:p>n/a</jats:p>","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75347555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experimenting with a Japanese automated essay scoring system in the L2 Japanese environment","authors":"J. Imaki, S. Ishihara","doi":"10.58379/iphr9450","DOIUrl":"https://doi.org/10.58379/iphr9450","url":null,"abstract":"The purpose of this study is to provide an empirical analysis of the performance of an L1 Japanese automated essay scoring system when applied to L2 Japanese compositions. In particular, this study concerns the use of such a system by teachers in formal L2 Japanese classes (not in standardised tests), and the experiments were designed accordingly. For this study, Jess, a Japanese essay scoring system, was trialled using L2 Japanese compositions (n = 50). While Jess performed very well, being comparable with human raters in that the correlation between Jess and the average of the nine human raters is at least as high as the correlations among the nine human raters themselves, we also found: 1) that Jess does not perform as well in the L2 environment as the reported performance of English automated essay scoring systems, and 2) that very good compositions tend to be under-scored by Jess, indicating that Jess still has room for improvement.","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73004976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the construct of radiotelephony communication: A critique of the ICAO English testing policy from the perspective of Korean aviation experts","authors":"Hyejeong Kim","doi":"10.58379/ywll7105","DOIUrl":"https://doi.org/10.58379/ywll7105","url":null,"abstract":"<jats:p>n/a</jats:p>","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85797308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"'Tone' and assessing writing: applying Bernstein and Halliday to the problem of implicit value assumptions in test constructs","authors":"J. Motteram","doi":"10.58379/cycz8697","DOIUrl":"https://doi.org/10.58379/cycz8697","url":null,"abstract":"The value assumptions of language test constructs are inherently difficult to identify. While language test scoring descriptors refer to concepts such as tone and appropriateness, the role of socially determined value assumptions in written language assessment has not been adequately modelled or discussed. This paper presents a framework to link the results of analysis of written test scripts with the value assumptions of the pedagogic discourse of the test.","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83632007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Concurrent and predictive validity of the Pearson Test of English Academic (PTE Academic)","authors":"M. Riazi","doi":"10.58379/vfsq6999","DOIUrl":"https://doi.org/10.58379/vfsq6999","url":null,"abstract":"This study examines the concurrent and predictive validity of the newly developed Pearson Test of English Academic (PTE Academic). The study involved 60 international university students who were non-native English speakers. The collected data included: the participants’ scores on a criterion test (IELTS Academic), their PTE Academic scores, and their academic performance as measured by their grade point average (GPA). The academic performance data of a similar norm group of native speakers were also collected. Results of the data analysis showed that there is a moderate to high significant correlation between PTE Academic and IELTS Academic overall, and also in terms of the four communication skills of listening, reading, speaking, and writing. Additionally, significant correlations were observed between the participants’ PTE Academic scores (overall and the four communication skills) and their academic performance. Results show that as the participants’ PTE Academic scores increased, their academic performance reached or exceeded that of the norm group, such that those at C1 and higher levels of the Common European Framework of Reference (CEFR) outperformed the norm group academically. Findings of this study provide useful implications for the testing community and higher education decision-makers.","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82861450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}