From text to motion: grounding GPT-4 in a humanoid robot "Alter3".

Frontiers in Robotics and AI (IF 2.9, Q2 in Robotics)
Published: 2025-05-27 · eCollection date: 2025-01-01 · DOI: 10.3389/frobt.2025.1581110
Authors: Takahide Yoshida, Atsushi Masumori, Takashi Ikegami
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12149125/pdf/
Citations: 0

Abstract

This paper introduces Alter3, a humanoid robot that demonstrates spontaneous motion generation through the integration of GPT-4, a cutting-edge Large Language Model (LLM). This integration overcomes the challenge of applying LLMs to direct robot control, which typically struggles with the hardware-specific nuances of robotic operation. By translating linguistic descriptions of human actions into robotic movements via programming, Alter3 can autonomously perform a diverse range of actions, such as adopting a "selfie" pose or simulating a "ghost." This approach not only shows Alter3's few-shot learning capabilities but also its adaptability to verbal feedback for pose adjustments without manual fine-tuning. This research advances the field of humanoid robotics by bridging linguistic concepts with physical embodiment and opens new avenues for exploring spontaneity in humanoid robots.
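The pipeline the abstract describes — an LLM first reasons about an action in natural language, then translates that description into hardware-level commands — can be sketched roughly as follows. Everything here is a hypothetical illustration, not the paper's actual interface: the axis count, the `(axis, value)` command format, and the helper names (`llm_to_motion`, `parse_commands`) are all assumptions, and the LLM is stubbed out so the sketch runs without a network call.

```python
# Hypothetical sketch of a two-step text-to-motion pipeline.
# Step 1: ask the LLM for a linguistic description of the action.
# Step 2: ask it to translate that description into per-axis commands.

NUM_AXES = 43  # assumed axis count for the humanoid; illustrative only


def parse_commands(text: str) -> list[tuple[int, float]]:
    """Parse LLM output lines of the form 'axis, value' into commands,
    discarding malformed lines and out-of-range axis indices."""
    commands = []
    for line in text.splitlines():
        parts = line.split(",")
        if len(parts) != 2:
            continue
        axis, value = int(parts[0]), float(parts[1])
        if 0 <= axis < NUM_AXES:
            commands.append((axis, value))
    return commands


def llm_to_motion(prompt_llm, action: str) -> list[tuple[int, float]]:
    """Two-step translation: description first, then axis commands."""
    description = prompt_llm(f"Describe how a human performs: {action}")
    raw = prompt_llm(
        f"Translate this description into 'axis, value' lines "
        f"for a {NUM_AXES}-axis humanoid:\n{description}"
    )
    return parse_commands(raw)


def fake_llm(prompt: str) -> str:
    """Stub standing in for a real GPT-4 call, so the sketch is runnable."""
    if prompt.startswith("Describe"):
        return "Raise the right arm toward the face and tilt the head."
    return "10, 0.8\n3, 0.5"


commands = llm_to_motion(fake_llm, "take a selfie")
```

Verbal feedback for pose adjustment, as the abstract mentions, would fit the same loop: feed the previous command list plus the correction ("raise the arm higher") back into the second prompt and re-parse.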

Source journal: Frontiers in Robotics and AI
CiteScore: 6.50 · Self-citation rate: 5.90% · Articles per year: 355 · Review time: 14 weeks
Journal description: Frontiers in Robotics and AI publishes rigorously peer-reviewed research covering all theory and applications of robotics, technology, and artificial intelligence, from biomedical to space robotics.