EMT: End To End Model Training for MSR Machine Translation

Vishal Chowdhary, Scott Greenwood
DOI: 10.1145/3076246.3076247
Published in: Proceedings of the 1st Workshop on Data Management for End-to-End Machine Learning
Publication date: 2017-05-14
Citations: 2

Abstract

Machine translation, at its core, is a Machine Learning (ML) problem that involves learning language translation by looking at large amounts of parallel data, i.e., translations of the same dataset in two or more languages. If we have parallel data between languages L1 and L2, we can build translation systems between these two languages. When training a complete system, we train several different models, each containing a different type of information about either one of the languages or the relationship between the two. We end up training thousands of models to support hundreds of languages. In this article, we explain our end-to-end architecture for automatically training and deploying models at scale. The goal of this project is to create a fully automated system responsible for gathering new data, training systems, and shipping them to production with little or no guidance from an administrator. By using the ever-changing and always-expanding contents of the web, we have a system that can quietly improve our existing systems over time. In this article, we detail the architecture and discuss the various problems and the solutions we arrived at. Finally, we present experiments and data showing the impact of our work. Specifically, this system has enabled us to ship much more frequently and to eliminate the human errors that occur when running repetitive tasks. The principles of this pipeline can be applied to any ML training and deployment system.
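The gather–train–evaluate–ship loop the abstract describes can be sketched in a few lines. This is a minimal illustration only: every function name, the toy dictionary "model", and the quality gate are hypothetical placeholders, not the actual EMT implementation.

```python
# Hypothetical sketch of an automated train-and-deploy iteration like the
# one the abstract describes. All names and data here are illustrative.

def gather_parallel_data(lang_pair):
    """Collect new parallel sentences for a language pair (stubbed)."""
    return [("hello", "bonjour"), ("thank you", "merci")]

def train_model(corpus):
    """Train a translation "model" on the corpus (stubbed as a dict)."""
    return dict(corpus)

def evaluate(model, held_out):
    """Fraction of held-out pairs the model translates correctly."""
    correct = sum(1 for src, tgt in held_out if model.get(src) == tgt)
    return correct / len(held_out)

def pipeline(lang_pair, baseline_score):
    """One fully automated iteration: gather, train, gate, ship."""
    corpus = gather_parallel_data(lang_pair)
    model = train_model(corpus)
    score = evaluate(model, corpus)
    # Ship only if the candidate does not regress below the baseline;
    # this automated gate replaces the manual sign-off step.
    if score >= baseline_score:
        return "shipped", model
    return "rejected", None

status, model = pipeline(("en", "fr"), baseline_score=0.5)
```

Running the loop on a schedule is what lets the system quietly absorb new web data over time: each iteration either ships an improved model or is rejected by the quality gate, with no administrator in the loop.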