{"title":"Codes of Behavior in IELTS Speaking Interview--A Study of Georgian IELTS instructors","authors":"H. Ghaemi","doi":"10.23977/langta.2022.050101","DOIUrl":"https://doi.org/10.23977/langta.2022.050101","url":null,"abstract":"The purpose of the current study was to construct a Codes of Behaviour Questionnaire for IELTS Speaking Interview. To this end, the questionnaire was designed by selecting the most significant factors of behavioural issues in IELTS Interviewing based on quantitative approach. The scale which consisted of four main categories, are (1) Values system factor, (2) Fairness, (3) Content factor, and (4) Interpersonal relationship, along with 28 items. After employing EFA and CFA, it was revealed that the questionnaire consists of high validity. Moreover, the reliability of the questionnaire was assessed by running Cronbach's Alpha which was .825. As an alternative to the traditional approach, this study utilized structural equation modelling (SEM) with multiple indicators to examine the validity and reliability of Codes of Behaviour in IELTS Interviewing Questionnaire. Finally, statistical results and implications were discussed.","PeriodicalId":242888,"journal":{"name":"Journal of Language Testing & Assessment","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131444060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Item Bank Stability through Live and Simulated Datasets","authors":"Tony Lee, David Coniam, M. Milanovic","doi":"10.23977/langta.2022.050102","DOIUrl":"https://doi.org/10.23977/langta.2022.050102","url":null,"abstract":"LanguageCert manages the construction of its tests, exams and assessments using a sophisticated item banking system which contains large amounts of test material that is described, inter alia, in terms of content characteristics such as macroskills, grammatical and lexical features and measurement characteristics such as Rasch difficulty estimates and fit statistics. In order to produce content and difficulty equivalent test forms, it is vital that the items in any LanguageCert bank manifest stable measurement characteristics. The current paper is one of two linked studies exploring the stability of one of the item banks developed by LanguageCert [Note 1]. This particular bank has been used as an adaptive test bank and comprises 820 calibrated items. It has been administered to over 13,000 test takers, each of whom have taken approximately 60 items. The purpose of these two exploratory studies is to examine the stability of this adaptive test item bank from both statistical and operational perspectives. The study compares test taker performance in the live dataset with over 13,000 test takers (where each test taker takes approximately 60 items) with a simulated ‘full’ dataset generated using model-based imputation. Simulation regression lines showed a good match and Rasch fit statistics were also good: thus indicating that items comprising the adaptive item bank are of high quality both in terms of content and statistical stability. Potential future stability was confirmed by results obtained from a Bayesian ANOVA. As mentioned above, such item bank stability is important when item banks are used for multiple purposes, in this context for adaptive testing and the construction of linear tests. The current study therefore lays the ground work for a follow-up study where the utility of this adaptive test item bank is verified by the construction, administration and analysis of a number of linear tests.","PeriodicalId":242888,"journal":{"name":"Journal of Language Testing & Assessment","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130169599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Delivery of Speaking Tests in Traditional or Online Proctored Mode: A Comparability Study","authors":"","doi":"10.23977/langta.2023.060101","DOIUrl":"https://doi.org/10.23977/langta.2023.060101","url":null,"abstract":"","PeriodicalId":242888,"journal":{"name":"Journal of Language Testing & Assessment","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122889199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}