Database of Human Evaluations of Machine Translation Systems for Patent Translation
Isao Goto, Bin Lu, Ka-Po Chow, E. Sumita, Benjamin Ka-Yin T'sou, M. Utiyama, K. Yasuda
Journal of Natural Language Processing, Vol. 20, No. 1, pp. 27-57, published 2013-03-15. DOI: 10.5715/JNLP.20.27
Citations: 3
Abstract
This paper discusses a database of human evaluations of patent machine translation from Chinese to English, Japanese to English, and English to Japanese. The evaluations were conducted for the NTCIR-9 Patent Machine Translation Task (PatentMT). Different types of systems were evaluated, including both research and commercial systems, and both rule-based and statistical machine translation systems. Since human evaluation results are important when investigating automatic evaluation of translation quality, this database of evaluation results is a valuable resource. Through the NTCIR project, resources including the human evaluation database, translation results, and test/reference data are available for research purposes.
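Because the database's stated purpose is to support research on automatic evaluation of translation quality, a typical use is measuring how well an automatic metric agrees with the human judgments at the system level. The sketch below illustrates this with SciPy; the CSV file name and the column names bleu and adequacy are illustrative assumptions, not the actual NTCIR-9 PatentMT release format.

```python
# A minimal sketch of metric meta-evaluation against human judgments.
# Assumption: a CSV with one row per MT system, holding an automatic
# metric score ("bleu") and an averaged human score ("adequacy").
import csv

from scipy.stats import pearsonr, spearmanr


def load_system_scores(path):
    """Read per-system automatic and human scores from a hypothetical CSV."""
    metric_scores, human_scores = [], []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            metric_scores.append(float(row["bleu"]))     # automatic metric score
            human_scores.append(float(row["adequacy"]))  # averaged human judgment
    return metric_scores, human_scores


# "patentmt_system_scores.csv" is a hypothetical export, not a released file.
metric_scores, human_scores = load_system_scores("patentmt_system_scores.csv")
print(f"Pearson r:    {pearsonr(metric_scores, human_scores)[0]:.3f}")
print(f"Spearman rho: {spearmanr(metric_scores, human_scores)[0]:.3f}")
```

Both correlations are shown because shared tasks of this period commonly reported rank correlation (Spearman) alongside linear correlation (Pearson) when comparing metrics against system-level human scores.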