{"title":"Technology-Enhanced Items in Grades 1–12 English Language Proficiency Assessments","authors":"A. Kim, Rurik Tywoniw, Mark Chapman","doi":"10.1080/15434303.2022.2039659","DOIUrl":"https://doi.org/10.1080/15434303.2022.2039659","url":null,"abstract":"ABSTRACT Technology-enhanced items (TEIs) are innovative, computer-delivered test items that allow test takers to interact with the test environment more fully than traditional multiple-choice items (MCIs) do. The interactive nature of TEIs offers improved construct coverage compared with MCIs, but little research exists regarding students’ performance on TEIs versus MCIs in English language proficiency (ELP) assessments. This study examines the performance of Grades 1–12 English learners (ELs) on TEIs and MCIs in an online reading test. The test included TEI and MCI content-matched pairs that shared the same reading input but differed in response mode. We analyzed 1.2 million ELs’ scores across five grade-level clusters: 1, 2–3, 4–5, 6–8, and 9–12. Items were evaluated for difficulty, discrimination, and information using Item Response Theory. Additionally, efficiency was investigated using the amount of information provided and item duration. TEIs were slightly more difficult than MCIs, but they did not differ in discriminative power. Notably, TEIs were more informative for ELs at higher grade or reading proficiency levels. TEIs generally had longer item durations than MCIs, making them less efficient, except at Grades 6–8. Results provide insights for developing TEIs for ELP reading assessments.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46084543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of Bi-factor MIRT and Higher-order CDM Models to an In-house EFL Listening Test for Diagnostic Purposes","authors":"Shangchao Min, Hongwen Cai, Lianzhen He","doi":"10.1080/15434303.2021.1980571","DOIUrl":"https://doi.org/10.1080/15434303.2021.1980571","url":null,"abstract":"ABSTRACT The present study examined the performance of the bi-factor multidimensional item response theory (MIRT) model and higher-order (HO) cognitive diagnostic models (CDMs) in simultaneously providing diagnostic information and general ability estimates in a listening test. The data used were 1,611 examinees’ item-level responses to an in-house EFL listening test in China and five content experts’ item-attribute coding of the test form. The bi-factor MIRT model was compared with five CDMs with and without a higher-order structure in terms of model fit, attribute classification, and general ability estimation. The results showed that the bi-factor MIRT model provided the best model-data fit, followed by the HO-G-DINA model, the saturated G-DINA model, and other reduced CDMs. The HO-G-DINA model produced attribute classification results similar to those of the G-DINA model, whereas the bi-factor MIRT model performed better in discriminating examinees’ general listening ability. The findings of this study highlighted the feasibility of the bi-factor MIRT model as an attractive alternative for diagnostic assessment, especially in language assessment, where attributes are assumed to be continuous.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45134984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Promoting Change in Language Teaching and Assessment at Policy and Practice Levels: An Interview with Hossein Farhady","authors":"P. Tavakoli","doi":"10.1080/15434303.2022.2038173","DOIUrl":"https://doi.org/10.1080/15434303.2022.2038173","url":null,"abstract":"ABSTRACT This interview highlights Professor Hossein Farhady’s academic life and his sustained contribution to research in the fields of English language teaching, language testing and ESP over the past decades. Through a number of questions, the interview asks Professor Farhady about his pioneering role in the field of language testing in Iran and his highly valued contribution to issues related to English language teaching and material design. The interview takes the shape of a narrative about Professor Farhady’s past, from his early days of schooling in a village near Makou in Iran, to his PhD studies at UCLA in California, and his post-PhD research career in different countries around the globe. The interview summarises his accomplishments in leading research projects, disseminating research outputs, and training new teachers, researchers, and language test and material designers. It also provides an interesting portrait of his vision for the field, which is based on his experience of working in various higher education contexts over the past decades. Most importantly, the interview demonstrates Professor Farhady’s commitment to research and his dedication to translating research findings into action and bringing about change in policy and professional practice. Above all, the interview is a recognition of his lifetime achievements and an acknowledgement of his dedication, diligence and devotion to research in applied linguistics.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45897494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing English for Professional Purposes","authors":"Abdulrahman A. Alharthi","doi":"10.1080/15434303.2022.2049795","DOIUrl":"https://doi.org/10.1080/15434303.2022.2049795","url":null,"abstract":"","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44331539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Validity Arguments in Language Testing: Case Studies of Validation Research","authors":"Kaizhou Luo, Jiayu Wang","doi":"10.1080/15434303.2022.2028151","DOIUrl":"https://doi.org/10.1080/15434303.2022.2028151","url":null,"abstract":"","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46941742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"L2 Speaking Assessment in Secondary School Classrooms in Japan","authors":"Rie Koizumi","doi":"10.1080/15434303.2021.2023542","DOIUrl":"https://doi.org/10.1080/15434303.2021.2023542","url":null,"abstract":"ABSTRACT In Japanese secondary schools, speaking assessment in English classrooms is designed, conducted, and scored by teachers. Although the assessment is intended to serve both summative and formative purposes, it is not regularly or adequately practiced. This paper reports the main problems (i.e., a lack of continuous speaking assessment, a limited range of speaking test formats, and reliability that is not ensured) and presents future directions for implementing second language speaking assessment so as to enhance its summative and formative uses: improving teacher training and resources and developing an effective speaking assessment framework.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44111887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Excellence is Not a Destination but a Journey! An Interview with Jacob Tharu","authors":"P. Shankar","doi":"10.1080/15434303.2021.2019258","DOIUrl":"https://doi.org/10.1080/15434303.2021.2019258","url":null,"abstract":"Jacob Tharu retired as a Professor from the Department of Testing and Evaluation of the Central Institute of English and Foreign Languages (currently, The English and Foreign Languages University), Hyderabad, India, in 2002, after a successful career of three decades in the field of language testing and assessment. As he was born to teachers, it was no surprise that Jim, as he is fondly called by his colleagues and friends, chose teaching as his career, but not before dabbling in science and computers. After his undergraduate degree, Jim took the GRE “just to find out what the exam was like” and received a good score. He applied to universities in the U.S. for an MA degree “just for fun” and landed straight at the Harvard School of Education! After successful stints at universities in the U.S. and the U.K., Jim moved to India in 1967 and joined the Indian Institute of Technology, Kanpur, as a lecturer. Six years later, in 1973, Jim entered the portals of the Central Institute of English and Foreign Languages (CIEFL), Hyderabad. Jim is an amazing teacher, an excellent researcher, and a passionate thinker on educational and language assessment. He started the first full-fledged course in language testing, titled Testing Language and Literature, at CIEFL in 1975. The seeds of the then little-known field of testing and language assessment were thus sown in India, and research interest in the area has grown ever since. He was a research supervisor for several illustrious students and has had a lasting impact on them, the editor of this journal, Prof. Antony Kunnan, being one of them. He has delivered keynote addresses and plenary talks at national and international conferences, served on several national committees oriented towards educational reforms in India, and has contributed immensely towards reshaping the education system and structuring the assessment contours in India. In this interview, Jim talks about his early school and college experiences, how the field of language testing started in India, and policy decisions that were made in universities and public institutions regarding language assessment.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2022-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49416053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of Language and Teaching Skill Domains for International Teaching Assistants: An Approach based on Invariant Measurement","authors":"Heesun Chang","doi":"10.1080/15434303.2021.2013487","DOIUrl":"https://doi.org/10.1080/15434303.2021.2013487","url":null,"abstract":"ABSTRACT Drawing on the framework of invariant measurement from Rasch measurement theory, the purpose of this study is to psychometrically evaluate the 20 language and teaching skill domains of the International Teaching Assistant (ITA) Test using the many-facet Rasch model and to empirically explore performance differences between females and males in these domains through bias analysis. The data came from the test scores of 110 prospective ITAs on the ITA Test at a large university. Three facets (examinee, rater, and domain) were Rasch-calibrated in FACETS. Despite some misfit, the data overall fit the model reasonably well, confirming invariant measurement and the feasibility of assessing the language and teaching skill domains concurrently to produce a single score in ITA assessment. The results also indicated that language skills were overall more difficult than teaching skills. Grammar and pronunciation skills were found to be the most difficult domains, whereas the aural comprehension skill was found to be the easiest domain. A bias analysis revealed significant differences between the two gender groups in four domains, calling for further research examining potential gender bias in ITA assessment.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42307581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Multistage Testing to Enhance Measurement of an English Language Proficiency Test","authors":"David MacGregor, S. Yen, Xin Yu","doi":"10.1080/15434303.2021.1988953","DOIUrl":"https://doi.org/10.1080/15434303.2021.1988953","url":null,"abstract":"ABSTRACT How can one construct a test that provides accurate measurement across the range of performance levels and adequate coverage of all the critical areas of the domain, yet is not unmanageably long? This paper discusses the approach taken in a linear test of academic English language, and how the transition to a computer-based test allowed for a design that better fit the demands of the test. It also describes the multistage adaptive approach that was devised. This approach allows for a test that covers a broad range of performance levels while including items that assess the language of the content areas as described in the English language development standards underpinning the test. The design also allows for a test that is closely tailored to the ability level of the English learner taking the test, and that therefore produces a more precise measure. The efficacy of the design in enhancing measurement in two versions of a high-stakes English language assessment is explored, and the implications of the results are discussed.","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42565545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}