{"title":"The challenge of evaluating open interpreter training resources: case study of ORCIT","authors":"Svetlana Carsten, D. Ciobanu, Dalia Mankauskienė","doi":"10.1080/1750399X.2020.1867407","DOIUrl":null,"url":null,"abstract":"ABSTRACT Measuring the quality of information and communication technology (ICT) tools and their impact on learning outcomes is not an easy task. Carol Chapelle, the leading authority on the evaluation of computer-assisted language learning (CALL) tools, wrote in 2008: ‘Evaluation of innovation is perhaps the most significant challenge teachers and curriculum developers face when attempting to introduce innovation into language education’. This applies to innovative tools in interpreter training. Using the example of the set of interpreter training online resources, developed by the European ORCIT (Online Resources for Conference Interpreter Training) project, this paper discusses the popularity and perceived usefulness of such resources as measured by the combined analysis of online evaluation questionnaire responses and resource access tracking data. Qualitative, survey-based, evaluation or quantitative Google Analytics data can present a fairly accurate popularity rating of a learning tool and are widely used to test the appeal of technology among learners. Yet these two methods do not reflect the objective effectiveness of online resources on learners’ progress. Therefore, a set of relevant parameters of the Holistic TEL Evaluation Framework is also discussed together with a methodological framework for the evaluation of ORCIT’s impact on learner gain.","PeriodicalId":45693,"journal":{"name":"Interpreter and Translator Trainer","volume":null,"pages":null},"PeriodicalIF":1.8000,"publicationDate":"2020-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/1750399X.2020.1867407","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Interpreter and Translator Trainer","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1080/1750399X.2020.1867407","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Abstract
Measuring the quality of information and communication technology (ICT) tools and their impact on learning outcomes is not an easy task. Carol Chapelle, the leading authority on the evaluation of computer-assisted language learning (CALL) tools, wrote in 2008: ‘Evaluation of innovation is perhaps the most significant challenge teachers and curriculum developers face when attempting to introduce innovation into language education’. The same applies to innovative tools in interpreter training. Using the example of the online interpreter training resources developed by the European ORCIT (Online Resources for Conference Interpreter Training) project, this paper discusses the popularity and perceived usefulness of such resources as measured by a combined analysis of online evaluation questionnaire responses and resource access tracking data. Qualitative, survey-based evaluation and quantitative Google Analytics data can provide a fairly accurate popularity rating of a learning tool, and both are widely used to test the appeal of technology among learners. Yet these two methods do not capture the objective effect of online resources on learners’ progress. Therefore, a set of relevant parameters of the Holistic TEL Evaluation Framework is also discussed, together with a methodological framework for evaluating ORCIT’s impact on learner gain.
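
The paper itself reports no code; purely as an illustrative sketch of what a combined analysis of questionnaire responses and access-tracking data might look like (all file names, column names, and scoring choices below are assumptions, not taken from the study), a popularity rating could be computed in Python/pandas along these lines:

```python
# Illustrative sketch only -- not the authors' method or data.
# Assumes two hypothetical CSV exports:
#   survey.csv:    resource_id, usefulness_rating  (e.g. 1-5 Likert responses)
#   analytics.csv: resource_id, pageviews          (e.g. a Google Analytics export)
import pandas as pd

survey = pd.read_csv("survey.csv")        # hypothetical questionnaire responses
analytics = pd.read_csv("analytics.csv")  # hypothetical access-tracking data

# Average perceived usefulness per resource from the survey responses.
usefulness = survey.groupby("resource_id")["usefulness_rating"].mean()

# Join with pageview counts so each resource carries both measures.
combined = analytics.set_index("resource_id").join(usefulness, how="inner")

# Normalise each measure to 0-1 and average them into a simple popularity score.
for col in ["pageviews", "usefulness_rating"]:
    rng = combined[col].max() - combined[col].min()
    combined[col + "_norm"] = (combined[col] - combined[col].min()) / rng if rng else 0.0

combined["popularity_score"] = combined[["pageviews_norm", "usefulness_rating_norm"]].mean(axis=1)

print(combined.sort_values("popularity_score", ascending=False).head(10))
```

As the abstract notes, a ranking of this kind reflects appeal and usage only; it says nothing about the objective effect of the resources on learners’ progress, which is why the paper turns to the Holistic TEL Evaluation Framework.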