2022 IEEE International Conference on Development and Learning (ICDL): Latest Publications

Using Infant Limb Movement Data to Control Small Aerial Robots
2022 IEEE International Conference on Development and Learning (ICDL) Pub Date : 2022-08-11 DOI: 10.1109/ICDL53763.2022.9962205
G. Kouvoutsakis, Elena Kokkoni, Konstantinos Karydis
{"title":"Using Infant Limb Movement Data to Control Small Aerial Robots","authors":"G. Kouvoutsakis, Elena Kokkoni, Konstantinos Karydis","doi":"10.1109/ICDL53763.2022.9962205","DOIUrl":"https://doi.org/10.1109/ICDL53763.2022.9962205","url":null,"abstract":"Promoting exploratory movements through contingent feedback can positively influence motor development in infancy. Our ongoing work gears toward the development of a robot-assisted contingency learning environment through the use of small aerial robots. This paper examines whether aerial robots and their associated motion controllers can be used to achieve efficient and highly-responsive robot flight for our purpose. Infant kicking kinematic data were extracted from videos and used in simulation and physical experiments with an aerial robot. The efficacy of two standard of practice controllers was assessed: a linear PID and a nonlinear geometric controller. The ability of the robot to match infant kicking trajectories was evaluated qualitatively and quantitatively via the mean squared error (to assess overall deviation from the input infant leg trajectory signals), and dynamic time warping algorithm (to quantify the signal synchrony). Results demonstrate that it is in principle possible to track infant kicking trajectories with small aerials robots, and identify areas of further development required to improve the tracking quality","PeriodicalId":274171,"journal":{"name":"2022 IEEE International Conference on Development and Learning (ICDL)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126957414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
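The two evaluation metrics named in the abstract, mean squared error and dynamic time warping, can be illustrated with a minimal sketch. The signals below are hypothetical stand-ins for an infant leg trajectory and the robot's tracked trajectory, not the paper's data:

```python
import numpy as np

def mse(reference, tracked):
    """Mean squared error: overall deviation of the robot path from the input signal."""
    reference, tracked = np.asarray(reference), np.asarray(tracked)
    return float(np.mean((reference - tracked) ** 2))

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between two 1-D signals."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

# Hypothetical signals: a sinusoidal "kick" and a slightly lagged robot response.
t = np.linspace(0.0, 2.0, 200)
infant_leg_z = 0.1 * np.sin(2 * np.pi * 1.5 * t)        # made-up leg height signal
robot_z = 0.1 * np.sin(2 * np.pi * 1.5 * (t - 0.05))    # robot tracking with a 50 ms lag

print("MSE:", mse(infant_leg_z, robot_z))
print("DTW:", dtw_distance(infant_leg_z, robot_z))
```

MSE penalizes pointwise deviation, while DTW tolerates small timing offsets, which is why the two metrics are complementary for judging overall tracking quality versus signal synchrony.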
Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation
2022 IEEE International Conference on Development and Learning (ICDL) Pub Date : 2022-08-03 DOI: 10.1109/ICDL53763.2022.9962196
Linda Lastrico, Luca Garello, F. Rea, Nicoletta Noceti, F. Mastrogiovanni, A. Sciutti, A. Carfì
{"title":"Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation","authors":"Linda Lastrico, Luca Garello, F. Rea, Nicoletta Noceti, F. Mastrogiovanni, A. Sciutti, A. Carfì","doi":"10.1109/ICDL53763.2022.9962196","DOIUrl":"https://doi.org/10.1109/ICDL53763.2022.9962196","url":null,"abstract":"Humans have an extraordinary ability to communicate and read the properties of objects by simply watching them being carried by someone else. This level of communicative skills and interpretation, available to humans, is essential for collaborative robots if they are to interact naturally and effectively. For example, suppose a robot is handing over a fragile object. In that case, the human who receives it should be informed of its fragility in advance, through an immediate and implicit message, i.e., by the direct modulation of the robot’s action. This work investigates the perception of object manipulations performed with a communicative intent by two robots with different embodiments (an iCub humanoid robot and a Baxter robot). We designed the robots’ movements to communicate carefulness or not during the transportation of objects. We found that not only this feature is correctly perceived by human observers, but it can elicit as well a form of motor adaptation in subsequent human object manipulations. In addition, we get an insight into which motion features may induce to manipulate an object more or less carefully.","PeriodicalId":274171,"journal":{"name":"2022 IEEE International Conference on Development and Learning (ICDL)","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114191818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Don’t Forget to Buy Milk: Contextually Aware Grocery Reminder Household Robot
2022 IEEE International Conference on Development and Learning (ICDL) Pub Date : 2022-07-19 DOI: 10.1109/ICDL53763.2022.9962208
Ali Ayub, C. Nehaniv, K. Dautenhahn
{"title":"Don’t Forget to Buy Milk: Contextually Aware Grocery Reminder Household Robot","authors":"Ali Ayub, C. Nehaniv, K. Dautenhahn","doi":"10.1109/ICDL53763.2022.9962208","DOIUrl":"https://doi.org/10.1109/ICDL53763.2022.9962208","url":null,"abstract":"Assistive robots operating in household environments would require items to be available in the house to perform assistive tasks. However, when these items run out, the assistive robot must remind its user to buy the missing items. In this paper, we present a computational architecture that can allow a robot to learn personalized contextual knowledge of a household through interactions with its user. The architecture can then use the learned knowledge to make predictions about missing items from the household over a long period of time. The architecture integrates state-of-the-art perceptual learning algorithms, cognitive models of memory encoding and learning, a reasoning module for predicting missing items from the household, and a graphical user interface (GUI) to interact with the user. The architecture is integrated with the Fetch mobile manipulator robot and validated in a large indoor environment with multiple contexts and objects. Our experimental results show that the robot can adapt to an environment by learning contextual knowledge through interactions with its user. The robot can also use the learned knowledge to correctly predict missing items over multiple weeks and it is robust against sensory and perceptual errors.","PeriodicalId":274171,"journal":{"name":"2022 IEEE International Conference on Development and Learning (ICDL)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114978083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
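As a purely illustrative sketch of the reminder logic, context-conditioned prediction of missing items can be reduced to comparing the items usually seen in a learned context against the items perceived now. The ContextItemMemory class, the "kitchen" context, and the grocery items are invented for this example; they are not the paper's architecture, which combines perceptual learning with cognitive memory models:

```python
from collections import defaultdict

class ContextItemMemory:
    """Toy context model: counts how often each item has been seen in each context."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, context, items):
        for item in items:
            self.counts[context][item] += 1

    def predict_missing(self, context, currently_seen, min_count=2):
        """Items usually present in this context but not seen now are flagged as missing."""
        expected = {i for i, c in self.counts[context].items() if c >= min_count}
        return sorted(expected - set(currently_seen))

memory = ContextItemMemory()
# Hypothetical past interactions: what the robot saw in the "kitchen" context on earlier days.
for day_items in [["milk", "bread", "eggs"], ["milk", "bread"], ["milk", "eggs", "bread"]]:
    memory.observe("kitchen", day_items)

print(memory.predict_missing("kitchen", currently_seen=["bread", "eggs"]))  # ['milk']
```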
Brain-inspired probabilistic generative model for double articulation analysis of spoken language
2022 IEEE International Conference on Development and Learning (ICDL) Pub Date : 2022-07-06 DOI: 10.1109/ICDL53763.2022.9962216
Akira Taniguchi, Maoko Muro, Hiroshi Yamakawa, T. Taniguchi
{"title":"Brain-inspired probabilistic generative model for double articulation analysis of spoken language","authors":"Akira Taniguchi, Maoko Muro, Hiroshi Yamakawa, T. Taniguchi","doi":"10.1109/ICDL53763.2022.9962216","DOIUrl":"https://doi.org/10.1109/ICDL53763.2022.9962216","url":null,"abstract":"The human brain, among its several functions, analyzes the double articulation structure in spoken language, i.e., double articulation analysis (DAA). A hierarchical structure in which words are connected to form a sentence and words are composed of phonemes or syllables is called a double articulation structure. Where and how DAA is performed in the human brain has not been established, although some insights have been obtained. In addition, existing computational models based on a probabilistic generative model (PGM) do not incorporate neuroscientific findings, and their consistency with the brain has not been previously discussed. This study compared, mapped, and integrated these existing computational models with neuroscientific findings to bridge this gap, and the findings are relevant for future applications and further research. This study proposes a PGM for a DAA hypothesis that can be realized in the brain based on the outcomes of several neuroscientific surveys. The study involved (i) investigation and organization of anatomical structures related to spoken language processing, and (ii) design of a PGM that matches the anatomy and functions of the region of interest. Therefore, this study provides novel insights that will be foundational to further exploring DAA in the brain.","PeriodicalId":274171,"journal":{"name":"2022 IEEE International Conference on Development and Learning (ICDL)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127340406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
RAPid-Learn: A Framework for Learning to Recover for Handling Novelties in Open-World Environments
2022 IEEE International Conference on Development and Learning (ICDL) Pub Date : 2022-06-24 DOI: 10.1109/ICDL53763.2022.9962230
Shivam Goel, Yash Shukla, Vasanth Sarathy, Matthias Scheutz, J. Sinapov
{"title":"RAPid-Learn: A Framework for Learning to Recover for Handling Novelties in Open-World Environments.","authors":"Shivam Goel, Yash Shukla, Vasanth Sarathy, matthias. scheutz, J. Sinapov","doi":"10.1109/ICDL53763.2022.9962230","DOIUrl":"https://doi.org/10.1109/ICDL53763.2022.9962230","url":null,"abstract":"We propose RAPid-Learn (Learning to Recover and Plan Again), a hybrid planning and learning method, to tackle the problem of adapting to sudden and unexpected changes in an agent’s environment (i.e., novelties). RAPid-Learn is designed to formulate and solve modifications to a task’s Markov Decision Process (MDPs) on-the-fly. It is capable of exploiting the domain knowledge to learn action executors which can be further used to resolve execution impasses, leading to a successful plan execution. We demonstrate its efficacy by introducing a wide variety of novelties in a gridworld environment inspired by Minecraft, and compare our algorithm with transfer learning baselines from the literature. Our method is (1) effective even in the presence of multiple novelties, (2) more sample efficient than transfer learning RL baselines, and (3) robust to incomplete model information, as opposed to pure symbolic planning approaches.","PeriodicalId":274171,"journal":{"name":"2022 IEEE International Conference on Development and Learning (ICDL)","volume":"34 12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120858665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
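The hybrid plan-execute-learn loop described in the abstract can be sketched at a very high level: run the symbolic plan, and when an action hits an execution impasse caused by a novelty, invoke a learner to acquire a recovery behavior before continuing. Everything below (the stub executor, the toy "craft" novelty, the learn_recovery_policy placeholder) is hypothetical and only illustrates the control flow, not the paper's planner or RL machinery:

```python
def execute(action, state):
    """Stub executor: in a real system this would run the agent's action executor."""
    # Hypothetical rule: the 'craft' action fails while a novelty is present.
    if action == "craft" and state.get("novelty"):
        return state, False
    return state, True

def learn_recovery_policy(action, state):
    """Stub learner: stands in for the RL subroutine that learns to resolve the impasse."""
    print(f"learning a recovery policy for failed action '{action}' ...")
    return dict(state, novelty=False)   # pretend the learned policy removes the blocker

def run_plan(plan, state):
    for action in plan:
        state, ok = execute(action, state)
        if not ok:                                   # execution impasse detected
            state = learn_recovery_policy(action, state)
            state, ok = execute(action, state)       # retry with the learned recovery
            assert ok, "recovery failed; a full replan would be triggered here"
    return state

run_plan(["collect_wood", "craft", "trade"], {"novelty": True})
```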
Developing hierarchical anticipations via neural network-based event segmentation
2022 IEEE International Conference on Development and Learning (ICDL) Pub Date : 2022-06-04 DOI: 10.1109/ICDL53763.2022.9962224
Christian Gumbsch, M. Adam, B. Elsner, G. Martius, Martin Volker Butz
{"title":"Developing hierarchical anticipations via neural network-based event segmentation","authors":"Christian Gumbsch, M. Adam, B. Elsner, G. Martius, Martin Volker Butz","doi":"10.1109/ICDL53763.2022.9962224","DOIUrl":"https://doi.org/10.1109/ICDL53763.2022.9962224","url":null,"abstract":"Humans can make predictions on various time scales and hierarchical levels. Thereby, the learning of event encodings seems to play a crucial role. In this work we model the development of hierarchical predictions via autonomously learned latent event codes. We present a hierarchical recurrent neural network architecture, whose inductive learning biases foster the development of sparsely changing latent state that compress sensorimotor sequences. A higher level network learns to predict the situations in which the latent states tend to change. Using a simulated robotic manipulator, we demonstrate that the system (i) learns latent states that accurately reflect the event structure of the data, (ii) develops meaningful temporal abstract predictions on the higher level, and (iii) generates goal-anticipatory behavior similar to gaze behavior found in eye-tracking studies with infants. The architecture offers a step towards the autonomous learning of compressed hierarchical encodings of gathered experiences and the exploitation of these encodings to generate adaptive behavior.","PeriodicalId":274171,"journal":{"name":"2022 IEEE International Conference on Development and Learning (ICDL)","volume":"425 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127604103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
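The sparsely changing latent states can be illustrated with a toy update gate that overwrites the latent code only when the proposed change is large, i.e., at an event boundary. This numpy sketch is an analogue of the inductive bias, not the paper's network; the threshold, dimensionality, and noise scales are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_update(latent, proposal, threshold=0.5):
    """Keep the previous latent state unless the proposed change is large (an event boundary)."""
    change = np.linalg.norm(proposal - latent)
    return (proposal, True) if change > threshold else (latent, False)

latent = np.zeros(4)
for step in range(10):
    proposal = latent + rng.normal(scale=0.1, size=4)       # small drift: no event
    if step == 5:
        proposal = latent + rng.normal(scale=2.0, size=4)   # large change: event boundary
    latent, switched = gated_update(latent, proposal)
    print(step, "event boundary" if switched else "latent held constant")
```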
Binding Dancers Into Attractors
2022 IEEE International Conference on Development and Learning (ICDL) Pub Date : 2022-06-01 DOI: 10.1109/ICDL53763.2022.9962218
Franziska Kaltenberger, S. Otte, Martin Volker Butz
{"title":"Binding Dancers Into Attractors","authors":"Franziska Kaltenberger, S. Otte, Martin Volker Butz","doi":"10.1109/ICDL53763.2022.9962218","DOIUrl":"https://doi.org/10.1109/ICDL53763.2022.9962218","url":null,"abstract":"To effectively perceive and process observations in our environment, feature binding and perspective taking are crucial cognitive abilities. Feature binding combines observed features into one entity, called a Gestalt. Perspective taking transfers the percept into a canonical, observer-centered frame of reference. Here we propose a recurrent neural network model that solves both challenges. We first train an LSTM to predict 3D motion dynamics from a canonical perspective. We then present similar motion dynamics with novel viewpoints and feature arrangements. Retrospective inference enables the deduction of the canonical perspective. Combined with a robust mutual-exclusive softmax selection scheme, random feature arrangements are reordered and precisely bound into known Gestalt percepts. To corroborate evidence for the architecture’s cognitive validity, we examine its behavior on the silhouette illusion, which elicits two competitive Gestalt interpretations of a rotating dancer. Our system flexibly binds the information of the rotating Figure into the alternative attractors resolving the illusion’s ambiguity and imagining the respective depth interpretation and the corresponding direction of rotation. We finally discuss the potential universality of the proposed mechanisms.","PeriodicalId":274171,"journal":{"name":"2022 IEEE International Conference on Development and Learning (ICDL)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129245463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Symbol Emergence as Inter-personal Categorization with Head-to-head Latent Word
2022 IEEE International Conference on Development and Learning (ICDL) Pub Date : 2022-05-24 DOI: 10.1109/ICDL53763.2022.9962227
Kazuma Furukawa, Akira Taniguchi, Y. Hagiwara, T. Taniguchi
{"title":"Symbol Emergence as Inter-personal Categorization with Head-to-head Latent Word","authors":"Kazuma Furukawa, Akira Taniguchi, Y. Hagiwara, T. Taniguchi","doi":"10.1109/ICDL53763.2022.9962227","DOIUrl":"https://doi.org/10.1109/ICDL53763.2022.9962227","url":null,"abstract":"In this study, we propose a head-to-head type (H2H-type) inter-personal multimodal Dirichlet mixture (Inter-MDM) by modifying the original Inter-MDM, which is a probabilistic generative model that represents the symbol emergence between two agents as multiagent multimodal categorization. A Metropolis-Hastings method-based naming game based on the Inter-MDM enables two agents to collaboratively perform multimodal categorization and share signs with a solid mathematical foundation of convergence. However, the conventional Inter-MDM presumes a tail-to-tail connection across a latent word variable, causing inflexibility of the further extension of Inter-MDM for modeling a more complex symbol emergence. Therefore, we propose herein a head-to-head type (H2H-type) Inter-MDM that treats a latent word variable as a child node of an internal variable of each agent in the same way as many prior studies of multimodal categorization. On the basis of the H2H-type Inter-MDM, we propose a naming game in the same way as the conventional Inter-MDM. The experimental results show that the H2H-type Inter-MDM yields almost the same performance as the conventional Inter-MDM from the viewpoint of multimodal categorization and sign sharing.","PeriodicalId":274171,"journal":{"name":"2022 IEEE International Conference on Development and Learning (ICDL)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126778169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
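Both Inter-MDM variants rely on a Metropolis-Hastings naming game, whose core is an acceptance step: the listener accepts the speaker's proposed sign with a probability given by a likelihood ratio under its own internal model. The sketch below is a strong simplification with invented categorical likelihoods; it only shows the acceptance rule, not the full multimodal categorization:

```python
import numpy as np

rng = np.random.default_rng(1)

def mh_accept(listener_likelihood, proposed_sign, current_sign):
    """Accept with probability min(1, p(obs | proposed sign) / p(obs | current sign))."""
    ratio = listener_likelihood[proposed_sign] / listener_likelihood[current_sign]
    return rng.random() < min(1.0, ratio)

# Hypothetical: listener's likelihood of its current observation under each of 3 candidate signs.
listener_likelihood = np.array([0.05, 0.7, 0.25])
current_sign = 0
for _ in range(5):
    proposed_sign = rng.integers(3)   # speaker sampling its own sign is stubbed out here
    if mh_accept(listener_likelihood, proposed_sign, current_sign):
        current_sign = proposed_sign
print("listener's sign after the game:", current_sign)
```

Repeating such exchanges in both directions is what lets the two agents converge on shared signs in the full model.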
Morphological Wobbling Can Help Robots Learn
2022 IEEE International Conference on Development and Learning (ICDL) Pub Date : 2022-05-05 DOI: 10.1109/ICDL53763.2022.9962194
Fabien C. Y. Benureau, J. Tani
{"title":"Morphological Wobbling Can Help Robots Learn","authors":"Fabien C. Y. Benureau, J. Tani","doi":"10.1109/ICDL53763.2022.9962194","DOIUrl":"https://doi.org/10.1109/ICDL53763.2022.9962194","url":null,"abstract":"We propose to make the physical characteristics of a robot oscillate while it learns to improve its behavioral performance. We consider quantities such as mass, actuator strength, and size that are usually fixed in a robot, and show that when those quantities oscillate at the beginning of the learning process on a simulated 2D soft robot, the performance on a locomotion task can be significantly improved. We investigate the dynamics of the phenomenon and conclude that in our case, surprisingly, a high-frequency oscillation with a large amplitude for a large portion of the learning duration leads to the highest performance benefits. Furthermore, we show that morphological wobbling significantly increases exploration of the search space.","PeriodicalId":274171,"journal":{"name":"2022 IEEE International Conference on Development and Learning (ICDL)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128544773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
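The central manipulation, oscillating normally fixed physical quantities during early learning, amounts to applying a time-varying schedule to a parameter such as mass or actuator strength. A minimal sketch of one plausible schedule follows; the amplitude, frequency, and cut-off fraction are placeholders rather than the values studied in the paper:

```python
import math

def wobbled(base_value, step, total_steps,
            amplitude=0.5, frequency=0.05, wobble_fraction=0.6):
    """Oscillate a morphological parameter for the first part of learning, then hold it fixed."""
    if step >= wobble_fraction * total_steps:
        return base_value                      # wobbling switched off late in training
    return base_value * (1.0 + amplitude * math.sin(2.0 * math.pi * frequency * step))

# Example: the effective mass multiplier over a 1000-step training run.
for step in range(0, 1000, 100):
    print(step, round(wobbled(base_value=1.0, step=step, total_steps=1000), 3))
```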
Grounding Hindsight Instructions in Multi-Goal Reinforcement Learning for Robotics
2022 IEEE International Conference on Development and Learning (ICDL) Pub Date : 2022-04-08 DOI: 10.1109/ICDL53763.2022.9962207
Frank Röder, Manfred Eppe, S. Wermter
{"title":"Grounding Hindsight Instructions in Multi-Goal Reinforcement Learning for Robotics","authors":"Frank Röder, Manfred Eppe, S. Wermter","doi":"10.1109/ICDL53763.2022.9962207","DOIUrl":"https://doi.org/10.1109/ICDL53763.2022.9962207","url":null,"abstract":"This paper focuses on robotic reinforcement learning with sparse rewards for natural language goal representations. An open problem is the sample-inefficiency that stems from the compositionality of natural language, and from the grounding of language in sensory data and actions. We address these issues with three contributions. We first present a mechanism for hindsight instruction replay utilizing expert feedback. Second, we propose a seq2seq model to generate linguistic hindsight instructions. Finally, we present a novel class of language-focused learning tasks. We show that hindsight instructions improve the learning performance, as expected. In addition, we also provide an unexpected result: We show that the learning performance of our agent can be improved by one third if, in a sense, the agent learns to talk to itself in a self-supervised manner. We achieve this by learning to generate linguistic instructions that would have been appropriate as a natural language goal for an originally unintended behavior. Our results indicate that the performance gain increases with the task-complexity.","PeriodicalId":274171,"journal":{"name":"2022 IEEE International Conference on Development and Learning (ICDL)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126638458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
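Hindsight instruction replay generalizes hindsight experience replay to language goals: when an episode fails its commanded instruction, a relabeled copy is stored whose goal describes what the agent actually achieved. The toy relabeling below uses an invented template and invented episode fields; the paper instead generates the hindsight instruction with a seq2seq model guided by expert feedback:

```python
from dataclasses import dataclass, replace

@dataclass
class Episode:
    instruction: str       # the natural-language goal the agent was given
    achieved_outcome: str  # a symbolic summary of what actually happened
    success: bool

def hindsight_relabel(episode: Episode) -> Episode:
    """Make a successful copy of a failed episode by swapping in a goal it did achieve."""
    hindsight_instruction = f"move the {episode.achieved_outcome}"   # template stands in for seq2seq
    return replace(episode, instruction=hindsight_instruction, success=True)

failed = Episode(instruction="push the red block to the left",
                 achieved_outcome="blue block", success=False)
replay_buffer = [failed, hindsight_relabel(failed)]
for ep in replay_buffer:
    print(ep.success, "->", ep.instruction)
```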