{"title":"Use of innovative technology in oral language assessment","authors":"Fumiyo Nakatsuhara, Vivien Berry","doi":"10.1080/0969594X.2021.2004530","DOIUrl":null,"url":null,"abstract":"The theme of the very first Special Issue of Assessment in Education: Principles, Policy and Practice (Volume 10, Issue 3, published in 2003) was ‘Assessment for the Digital Age’. The editorial of that Special Issue notes that the aim of the volume was to ‘draw the attention of the international assessment community to a range of potential and actual relationships between digital technologies and assessment’ (McFarlane, 2003, p. 261). Since then, there is no doubt that the role of digital technologies in assessment has evolved even more dynamically than any assessment researchers and practitioners had expected. In particular, exponential advances in technology and the increased availability of high-speed internet in recent years have not only changed the way we communicate orally in social, professional, and educational contexts, but also the ways in which we assess oral language. Revisiting the same theme after almost two decades, but specifically from an oral language assessment perspective, this Special Issue presents conceptual and empirical papers that discuss the opportunities and challenges that the latest innovative affordances offer. The current landscape of oral language assessment can be characterised by numerous examples of the development and use of digital technology (Sawaki, 2022; Xi, 2022). While these innovations have opened the door to types of speaking test tasks which were previously not possible and have provided language test practitioners with more efficient ways of delivering and scoring tests, it should be kept in mind that ‘each of the affordances offered by technology also raises a new set of issues to be tackled’ (Chapelle, 2018). This does not mean that we should be excessively concerned or sceptical about technology-mediated assessments; it simply means that greater transparency is needed. Up-to-date information and appropriate guidance about the use of innovative technology in language testing and, more importantly, what language skills are elicited from test-takers and how they are measured, should be available to test users so that they can both embrace and critically engage with the fast-moving developments in the field (see also Khabbazbashi et al., 2021; Litman et al., 2018). This current Special Issue therefore aims to contribute to and to encourage transparent dialogues by test researchers, practitioners, and users within the international testing community on recent research which investigates both methods of delivery and methods of scoring in technology-mediated oral language assessments. Of the seven articles in this volume, the first three are on the application of technologies for speaking test delivery. In the opening article, Ockey and Neiriz offer a conceptual paper examining five models of technology-delivered assessments of oral communication that have been utilised over the past three decades. Drawing on Bachman and Palmer's (1996) qualities of test usefulness, Ockey and Hirch's (2020) assessment of English as a lingua franca (ELF) framework, and Harding and McNamara's (2018) work on ELF and its relationship to language assessment constructs, Ockey and Neiriz present ASSESSMENT IN EDUCATION: PRINCIPLES, POLICY & PRACTICE 2021, VOL. 28, NO. 
4, 343–349 https://doi.org/10.1080/0969594X.2021.2004530","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":null,"pages":null},"PeriodicalIF":2.7000,"publicationDate":"2021-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Assessment in Education-Principles Policy & Practice","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1080/0969594X.2021.2004530","RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 2
Abstract
The theme of the very first Special Issue of Assessment in Education: Principles, Policy and Practice (Volume 10, Issue 3, published in 2003) was ‘Assessment for the Digital Age’. The editorial of that Special Issue notes that the aim of the volume was to ‘draw the attention of the international assessment community to a range of potential and actual relationships between digital technologies and assessment’ (McFarlane, 2003, p. 261). Since then, there is no doubt that the role of digital technologies in assessment has evolved even more dynamically than assessment researchers and practitioners had expected. In particular, exponential advances in technology and the increased availability of high-speed internet in recent years have changed not only the way we communicate orally in social, professional, and educational contexts, but also the ways in which we assess oral language. Revisiting the same theme after almost two decades, but specifically from an oral language assessment perspective, this Special Issue presents conceptual and empirical papers that discuss the opportunities and challenges that the latest innovative affordances offer. The current landscape of oral language assessment is characterised by numerous examples of the development and use of digital technology (Sawaki, 2022; Xi, 2022). While these innovations have opened the door to types of speaking test tasks that were previously not possible and have provided language test practitioners with more efficient ways of delivering and scoring tests, it should be kept in mind that ‘each of the affordances offered by technology also raises a new set of issues to be tackled’ (Chapelle, 2018). This does not mean that we should be excessively concerned or sceptical about technology-mediated assessments; it simply means that greater transparency is needed. Up-to-date information and appropriate guidance about the use of innovative technology in language testing and, more importantly, about what language skills are elicited from test-takers and how they are measured, should be available to test users so that they can both embrace and critically engage with the fast-moving developments in the field (see also Khabbazbashi et al., 2021; Litman et al., 2018). This Special Issue therefore aims to contribute to, and to encourage, transparent dialogue among test researchers, practitioners, and users within the international testing community on recent research that investigates both methods of delivery and methods of scoring in technology-mediated oral language assessments. Of the seven articles in this volume, the first three are on the application of technologies for speaking test delivery. In the opening article, Ockey and Neiriz offer a conceptual paper examining five models of technology-delivered assessments of oral communication that have been utilised over the past three decades. Drawing on Bachman and Palmer's (1996) qualities of test usefulness, Ockey and Hirch's (2020) assessment of English as a lingua franca (ELF) framework, and Harding and McNamara's (2018) work on ELF and its relationship to language assessment constructs, Ockey and Neiriz present …
Journal description:
Recent decades have witnessed significant developments in the field of educational assessment. New approaches to the assessment of student achievement have been complemented by the increasing prominence of educational assessment as a policy issue. In particular, there has been a growth of interest in modes of assessment that promote, as well as measure, standards and quality. These have profound implications for individual learners, institutions and the educational system itself. Assessment in Education provides a focus for scholarly output in the field of assessment. The journal is explicitly international in focus and encourages contributions from a wide range of assessment systems and cultures. The journal's intention is to explore both commonalities and differences in policy and practice.