Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent

ArXiv Pub Date: 2024-02-15   DOI: 10.48550/arXiv.2402.09844
Quentin Gallouédec, Edward Beeching, Clément Romac, Emmanuel Dellandréa
{"title":"Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent","authors":"Quentin Gallou'edec, Edward Beeching, Cl'ement Romac, Emmanuel Dellandr'ea","doi":"10.48550/arXiv.2402.09844","DOIUrl":null,"url":null,"abstract":"The search for a general model that can operate seamlessly across multiple domains remains a key goal in machine learning research. The prevailing methodology in Reinforcement Learning (RL) typically limits models to a single task within a unimodal framework, a limitation that contrasts with the broader vision of a versatile, multi-domain model. In this paper, we present Jack of All Trades (JAT), a transformer-based model with a unique design optimized for handling sequential decision-making tasks and multimodal data types. The JAT model demonstrates its robust capabilities and versatility by achieving strong performance on very different RL benchmarks, along with promising results on Computer Vision (CV) and Natural Language Processing (NLP) tasks, all using a single set of weights. The JAT model marks a significant step towards more general, cross-domain AI model design, and notably, it is the first model of its kind to be fully open-sourced (see https://huggingface.co/jat-project/jat), including a pioneering general-purpose dataset.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ArXiv","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2402.09844","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The search for a general model that can operate seamlessly across multiple domains remains a key goal in machine learning research. The prevailing methodology in Reinforcement Learning (RL) typically limits models to a single task within a unimodal framework, a limitation that contrasts with the broader vision of a versatile, multi-domain model. In this paper, we present Jack of All Trades (JAT), a transformer-based model with a unique design optimized for handling sequential decision-making tasks and multimodal data types. The JAT model demonstrates its robust capabilities and versatility by achieving strong performance on very different RL benchmarks, along with promising results on Computer Vision (CV) and Natural Language Processing (NLP) tasks, all using a single set of weights. The JAT model marks a significant step towards more general, cross-domain AI model design, and notably, it is the first model of its kind to be fully open-sourced (see https://huggingface.co/jat-project/jat), including a pioneering general-purpose dataset.
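For readers who want to try the released checkpoint, a minimal loading sketch follows. It assumes the weights at https://huggingface.co/jat-project/jat can be fetched with the Hugging Face transformers Auto* API and that the repository ships custom model/processor code requiring trust_remote_code=True; the exact classes and per-modality input formats should be verified against the project's model card, as they are not specified in the abstract.

from transformers import AutoModel, AutoProcessor

model_id = "jat-project/jat"  # repository id taken from the abstract

# Assumed entry points: loading via the generic Auto* classes, with
# trust_remote_code=True presumed necessary for the project's custom code.
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

# A single set of weights is intended to cover RL, CV and NLP inputs;
# the processor defines how each modality is tokenized/encoded.
print(model.config)

The point of the sketch is only that the model is distributed as an ordinary Hub checkpoint, so it can be pulled and inspected like any other transformers model once the repository's documented interface is followed.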