Expectation-driven autonomous learning and interaction system
B. Bolder, H. Brandl, Martin Heracles, H. Janssen, Inna Mikhailova, Jens Schmüdderich, C. Goerick
Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots, December 2008. DOI: 10.1109/ICHR.2008.4756030

Abstract: We introduce ALIS 2, the latest instance of our autonomous learning and interaction system. It comprises different sensing modalities for visual (depth blobs, planar surfaces, motion) and auditory (speech, localization) signals, and self-collision-free behavior generation on the robot ASIMO. The system design emphasizes the split into a completely autonomous reactive layer and an expectation-generation layer. Different feature channels can be classified and named with arbitrary speech labels in on-line learning sessions. The feasibility of the proposed approach is demonstrated in interaction experiments.
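The abstract's central design idea — a fully autonomous reactive layer, with a separate expectation layer that biases rather than replaces it, plus on-line association of speech labels with feature channels — can be sketched in miniature. This is a hypothetical illustration of that split, not ALIS 2's actual implementation; every class, method, and channel name here is an assumption made for the example.

```python
# Hypothetical sketch of the two-layer split described in the abstract:
# the reactive layer maps current sensory features directly to behavior,
# while the expectation layer predicts upcoming input and merely biases
# the reactive layer's decision. Names are illustrative, not from ALIS 2.

class ReactiveLayer:
    """Fully autonomous: picks a behavior from current features alone."""
    def act(self, features):
        # Trivially: attend to the most salient feature channel.
        return max(features, key=features.get)

class ExpectationLayer:
    """Learns speech labels for feature channels and forms expectations."""
    def __init__(self):
        self.labels = {}          # channel -> speech label (on-line learned)
        self.last_channel = None

    def learn_label(self, channel, speech_label):
        # On-line learning session: attach an arbitrary speech label.
        self.labels[channel] = speech_label

    def expect(self):
        # Simplest possible expectation: the attended channel recurs.
        return self.last_channel

    def observe(self, channel):
        self.last_channel = channel

def step(reactive, expectation, features):
    """One perception-action cycle: expectation biases, reactive decides."""
    expected = expectation.expect()
    if expected in features:
        features = dict(features)
        features[expected] *= 1.5   # bias, not override
    channel = reactive.act(features)
    expectation.observe(channel)
    return channel, expectation.labels.get(channel, "<unlabeled>")

reactive, expectation = ReactiveLayer(), ExpectationLayer()
expectation.learn_label("motion", "moving")
print(step(reactive, expectation, {"motion": 0.9, "depth_blob": 0.4}))
# → ('motion', 'moving')
```

Note how the reactive layer remains functional even if the expectation layer has learned nothing: the bias step simply has no effect, mirroring the paper's emphasis on keeping the reactive layer completely autonomous.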