2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob): Latest Publications

From social interaction to ethical AI: a developmental roadmap
Matthias Rolf, Nigel Crook, Jochen J. Steil
{"title":"From social interaction to ethical AI: a developmental roadmap","authors":"Matthias Rolf, Nigel Crook, Jochen J. Steil","doi":"10.1109/DEVLRN.2018.8761023","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761023","url":null,"abstract":"AI and robot ethics have recently gained a lot of attention because adaptive machines are increasingly involved in ethically sensitive scenarios and cause incidents of public outcry. Much of the debate has been focused on achieving highest moral standards in handling ethical dilemmas on which not even humans can agree, which indicates that the wrong questions are being asked. We suggest to address this ethics debate strictly through the lens of what behavior seems socially acceptable, rather than idealistically ethical. Learning such behavior puts the debate into the very heart of developmental robotics. This paper poses a roadmap of computational and experimental questions to address the development of socially acceptable machines. We emphasize the need for social reward mechanisms and learning architectures that integrate these while reaching beyond limitations of plain reinforcement-learning agents. We suggest to use the metaphor of “needs” to bridge rewards and higher level abstractions such as goals for both communication and action generation in a social context. We then suggest a series of experimental questions and possible platforms and paradigms to guide future research in the area.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134588923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
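One way to read the "needs" metaphor computationally, purely as an illustrative assumption on my part and not the authors' proposal, is to let an agent track a few need levels and hand a standard reinforcement learner the reduction in overall need deficit as its reward. A minimal Python sketch:

```python
# Speculative reading of the "needs" metaphor (my construction, not the paper's):
# social feedback replenishes a social need, acting costs energy, and the reward
# passed to an RL learner is the reduction in overall need deficit.
needs = {"energy": 0.8, "social_approval": 0.3}      # 1.0 = fully satisfied

def step(action, social_feedback):
    """Update need levels and return the derived reward."""
    deficit_before = sum(1.0 - v for v in needs.values())
    needs["energy"] = max(0.0, needs["energy"] - 0.05)            # acting costs energy
    if action == "help":
        needs["social_approval"] = min(1.0, needs["social_approval"] + social_feedback)
    deficit_after = sum(1.0 - v for v in needs.values())
    return deficit_before - deficit_after            # reward = reduction in need deficit

print(step("help", social_feedback=0.2))             # positive when approval gain outweighs energy cost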
Object detection and localization with Artificial Foveal Visual Attention
Cristina Melício, R. Figueiredo, A. F. Almeida, A. Bernardino, J. Santos-Victor
{"title":"Object detection and localization with Artificial Foveal Visual Attention","authors":"Cristina Melício, R. Figueiredo, A. F. Almeida, A. Bernardino, J. Santos-Victor","doi":"10.1109/DEVLRN.2018.8761032","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761032","url":null,"abstract":"In the last decades, in order to make the processing of a scene more efficient, biologically inspired approaches have been proposed. Visual attention models are being studied and actively developed in order to reduce the complexity and computational time of the existing methods. We propose a biologically inspired model that combines a single pre-trained CNN architecture with an artificial foveal visual system that performs simultaneously the classification and localization of objects in images. This model is based on the fact that only a small part of the image is processed with high resolution at each time so we load a foveated image in the network and successively employ feed-forward passes to determine the class labels and then via backward propagation determine the object possible locations according to each semantic label. By directing the attention to the center of the proposed location we mimic the human saccadic eye movements. In the results obtained we used the ILSVRC 2012 validation data set in a GoogLeNet CNN. We demonstrate that for non-centered objects the gain of the classification performance between iterations is significant showing that when mimicking the human visual behaviour of foveation, saccades are needed to integrate the information at each time.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"38 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115736596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
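A minimal sketch of the classify-localize-saccade loop described in the abstract, under strong simplifications: foveation is approximated by keeping a sharp central patch and block-averaging the periphery, and classify_and_saliency is a placeholder for the pre-trained CNN forward pass plus back-propagated class evidence. Names and sizes are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): two-level foveation plus a
# saccade to the peak of a placeholder saliency map.
import numpy as np

def foveate(image, cx, cy, fovea=64, block=8):
    """Keep a full-resolution fovea around (cx, cy); block-average the periphery."""
    h, w = image.shape[:2]
    out = image.reshape(h // block, block, w // block, block, -1).mean((1, 3))
    out = np.repeat(np.repeat(out, block, axis=0), block, axis=1)   # blurred periphery
    y0, y1 = max(0, cy - fovea), min(h, cy + fovea)
    x0, x1 = max(0, cx - fovea), min(w, cx + fovea)
    out[y0:y1, x0:x1] = image[y0:y1, x0:x1]                         # sharp fovea
    return out

def classify_and_saliency(foveated):
    """Placeholder for the CNN: returns (class label, back-propagated saliency map)."""
    saliency = foveated.mean(axis=-1)
    return int(saliency.argmax()) % 1000, saliency

def attend(image, n_saccades=3):
    h, w = image.shape[:2]
    cx, cy = w // 2, h // 2                          # start by fixating the image centre
    for _ in range(n_saccades):
        label, sal = classify_and_saliency(foveate(image, cx, cy))
        cy, cx = np.unravel_index(sal.argmax(), sal.shape)          # saccade to the peak
    return label, (cx, cy)

label, fixation = attend(np.random.rand(224, 224, 3))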
Modular Continuous Learning Framework
Paresh Dhakan, K. Merrick, I. Rañó, N. Siddique
{"title":"Modular Continuous Learning Framework","authors":"Paresh Dhakan, K. Merrick, I. Rañó, N. Siddique","doi":"10.1109/DEVLRN.2018.8761008","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761008","url":null,"abstract":"Although multiple learning techniques exist to endow robots with different skills, open-ended learning is still an outstanding research problem in robotics. Open-ended learning would provide learning autonomy to robots such that they would not require human intervention to learn. This paper proposes a continuous learning framework consisting of a goal discovery module, a goal management module, and a learning module that can be used to implement open-ended learning in robotics. The framework is highly flexible, as it allows any clustering algorithm to be used for goal discovery and any reinforcement learning algorithm for goal learning. The experimental analysis conducted on a mobile robot supports the validity of the framework. Results show how the robot, when placed in a new environment, autonomously generates and learns new goals, thus forming a continuous learning framework capable of autonomously representing and learning skills in an open-ended way.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127536086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
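The modularity claim can be illustrated with a toy wiring of the three modules. In this sketch, k-means (via scikit-learn) stands in for the interchangeable goal-discovery algorithm and tabular Q-learning for the goal-learning algorithm; module names, state sizes and the reward rule are my own assumptions, not the paper's code.

```python
# Toy instance of the goal discovery / goal management / learning pipeline.
import numpy as np
from sklearn.cluster import KMeans

class GoalDiscovery:
    def __init__(self, n_goals=4):
        self.n_goals = n_goals
    def update(self, observations):                  # any clustering algorithm fits here
        model = KMeans(n_clusters=self.n_goals, n_init=10).fit(observations)
        return model.cluster_centers_

class GoalManager:
    def __init__(self, n_goals):
        self.progress = np.zeros(n_goals)            # crude learning-progress estimate
    def select(self):
        return int(np.argmin(self.progress))         # pursue the least-mastered goal
    def report(self, goal, reward):
        self.progress[goal] += reward

class QLearner:                                      # any RL algorithm could be swapped in
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9):
        self.q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma = alpha, gamma
    def act(self, s, eps=0.1):
        return np.random.randint(self.q.shape[1]) if np.random.rand() < eps else int(self.q[s].argmax())
    def learn(self, s, a, r, s2):
        self.q[s, a] += self.alpha * (r + self.gamma * self.q[s2].max() - self.q[s, a])

# One pass of the continuous-learning loop on synthetic observations.
obs = np.random.rand(200, 2)
centers = GoalDiscovery(n_goals=4).update(obs)
manager, learner = GoalManager(4), QLearner(n_states=4, n_actions=3)
g = manager.select()
s, a, s2 = 0, QLearner(4, 3).act(0), 1
a = learner.act(s)
r = 1.0 if np.linalg.norm(obs[s2] - centers[g]) < 0.5 else 0.0   # goal-reaching reward
learner.learn(s, a, r, s2)
manager.report(g, r)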
Proprioceptive Feedback Plays a Key Role in Self-Other Differentiation
Yihan Zhang, Y. Nagai
{"title":"Proprioceptive Feedback Plays a Key Role in Self-Other Differentiation","authors":"Yihan Zhang, Y. Nagai","doi":"10.1109/DEVLRN.2018.8761042","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761042","url":null,"abstract":"How do humans know whether the hand in front of their sight belongs to themselves? The question concerning the development of self-other differentiation remains one of the fundamental problems before we can truly understand and simulate the cognitive process of human social behaviors. Opposing to the traditional associative sequence learning models, our proposed model adds a closed loop of the proprioceptive perception of an agent, which conceptually simulates the imaginary body scheme. During a learning phase, this simulated body representation is corrected by the feedback of the actual sensation of the agent. Therefore, after learning, the agent becomes to be able to visually distinguish self-produced actions from others' even without proprioceptive information. This paper presents how the utilization of predicted proprioceptive feedback enables the agent to better differentiate the self from others.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127758393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
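A minimal sketch of the underlying idea, under a linear toy setting of my own (not the authors' model): a forward model learned from (motor command, actual sensation) pairs predicts the proprioceptive consequence of the agent's own action, and a low prediction error tags the observed hand as "self".

```python
# Forward-model prediction error as a self/other signal (toy linear version).
import numpy as np

rng = np.random.default_rng(0)
W_true = rng.normal(size=(3, 3))                     # unknown motor-to-proprioception map

# Learning phase: fit the forward model from (motor command, sensation) pairs.
motors = rng.normal(size=(500, 3))
sensations = motors @ W_true.T + 0.01 * rng.normal(size=(500, 3))
W_model, *_ = np.linalg.lstsq(motors, sensations, rcond=None)   # learned forward model

def is_self(motor_cmd, observed, threshold=0.1):
    predicted = motor_cmd @ W_model                  # predicted proprioceptive feedback
    return np.linalg.norm(predicted - observed) < threshold

cmd = rng.normal(size=3)
own_hand = cmd @ W_true.T                            # sensation caused by the agent's own action
other_hand = rng.normal(size=3)                      # sensation caused by someone else
print(is_self(cmd, own_hand), is_self(cmd, other_hand))   # typically prints: True False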
Developmental Bayesian Optimization of Black-Box with Visual Similarity-Based Transfer Learning
Maxime Petit, Amaury Depierre, Xiaofang Wang, E. Dellandréa, Liming Chen
{"title":"Developmental Bayesian Optimization of Black-Box with Visual Similarity-Based Transfer Learning","authors":"Maxime Petit, Amaury Depierre, Xiaofang Wang, E. Dellandréa, Liming Chen","doi":"10.1109/DEVLRN.2018.8761037","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761037","url":null,"abstract":"We present a developmental framework based on a long-term memory and reasoning mechanisms (Vision Similarity and Bayesian Optimisation). This architecture allows a robot to optimize autonomously hyper-parameters that need to be tuned from any action and/or vision module, treated as a black-box. The learning can take advantage of past experiences (stored in the episodic and procedural memories) in order to warm-start the exploration using a set of hyper-parameters previously optimized from objects similar to the new unknown one (stored in a semantic memory). As example, the system has been used to optimized 9 continuous hyper-parameters of a professional software (Kamido) both in simulation and with a real robot (industrial robotic arm Fanuc) with a total of 13 different objects. The robot is able to find a good object-specific optimization in 68 (simulation) or 40 (real) trials. In simulation, we demonstrate the benefit of the transfer learning based on visual similarity, as opposed to an amnesic learning (i.e. learning from scratch all the time). Moreover, with the real robot, we show that the method consistently outperforms the manual optimization from an expert with less than 2 hours of training time to achieve more than 88% of success.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116269607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
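The transfer mechanism can be sketched as follows, with object descriptors, memory contents and the objective all as placeholder assumptions, and a simple local search standing in for the Bayesian optimiser: the semantic memory returns the hyper-parameters of the most visually similar past object, and these seed the new optimisation instead of a random initialisation.

```python
# Warm-starting hyper-parameter search from a memory of similar objects (sketch).
import numpy as np

semantic_memory = {                                  # object descriptor -> best hyper-params found
    "red_cube":  (np.array([0.9, 0.1, 0.3]), np.array([0.52, 1.8, 0.07])),
    "blue_ball": (np.array([0.2, 0.8, 0.9]), np.array([0.31, 2.4, 0.12])),
}

def visual_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def warm_start(new_descriptor):
    best = max(semantic_memory.values(), key=lambda kv: visual_similarity(new_descriptor, kv[0]))
    return best[1].copy()                            # hyper-params of the most similar object

def black_box(params):                               # stands in for a grasping-success score
    return -np.sum((params - np.array([0.5, 2.0, 0.1])) ** 2)

def optimise(x0, n_trials=20, step=0.05):
    """Placeholder local search; the real system uses a Bayesian optimiser here."""
    x, best = x0, black_box(x0)
    for _ in range(n_trials):
        cand = x + step * np.random.randn(x.size)
        if black_box(cand) > best:
            x, best = cand, black_box(cand)
    return x, best

x0 = warm_start(np.array([0.85, 0.15, 0.35]))        # new object resembles "red_cube"
params, score = optimise(x0)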
Understanding the cognitive mechanisms underlying autistic behavior: a recurrent neural network study
A. Philippsen, Y. Nagai
{"title":"Understanding the cognitive mechanisms underlying autistic behavior: a recurrent neural network study","authors":"A. Philippsen, Y. Nagai","doi":"10.1109/DEVLRN.2018.8761038","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761038","url":null,"abstract":"People with autism spectrum disorder are suggested to exhibit atypical perception and differences in cognitive processing. In behavioral studies, however, such differences are often difficult to verify. Apparently, differences in cognitive processing do not always cause an impairment of behavior. To investigate how such a mismatch between cognitive and behavioral level could be explained, we model and evaluate the process of learning to imitate using recurrent neural networks. We systematically adjust learning parameters of the network which are linked to the precision of learning, a factor that might differ between individuals with autism and typically developed individuals. We evaluate the trained networks in terms of task performance (be-havioral level) as well as in terms of the structure of the internal representation that emerges during learning (cognitive level). Our findings demonstrate that comparable behavioral network output can be caused by different internal network representations. A less well structured internal representation does not necessarily result in a decline in performance, but can also be associated with good imitation performance. Additionally, we find evidence that well structured internal representations in our setting emerge with an appropriate integration of top-down predictions and bottom-up information processing, a finding which integrates well with theories from developmental psychology.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123573061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
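A hedged sketch of the experimental logic, not the authors' network: an echo-state-style recurrent network with a ridge-regression readout stands in for the trained RNN, a sensory noise level plays the role of the "precision" parameter, and both imitation error (behavioral level) and the effective dimensionality of the hidden states (cognitive level) are measured as that parameter varies.

```python
# Vary a precision-like parameter and measure behaviour vs. representation structure.
import numpy as np

rng = np.random.default_rng(1)
T, n_hidden = 500, 100
target = np.sin(np.linspace(0, 20 * np.pi, T))                   # trajectory to imitate

W_in = rng.normal(scale=0.5, size=n_hidden)
W = rng.normal(scale=1.0 / np.sqrt(n_hidden), size=(n_hidden, n_hidden))

def run(noise_level):
    h, states = np.zeros(n_hidden), []
    for t in range(T - 1):
        u = target[t] + noise_level * rng.normal()               # imprecise sensory input
        h = np.tanh(W @ h + W_in * u)
        states.append(h.copy())
    X = np.array(states)
    # Ridge-regression readout predicting the next step of the trajectory.
    w_out = np.linalg.solve(X.T @ X + 1e-2 * np.eye(n_hidden), X.T @ target[1:])
    error = np.mean((X @ w_out - target[1:]) ** 2)               # behavioral level
    # Effective dimensionality of the hidden representation (cognitive level).
    var = np.linalg.svd(X - X.mean(0), compute_uv=False) ** 2
    eff_dim = (var.sum() ** 2) / (var ** 2).sum()
    return error, eff_dim

for noise in (0.0, 0.1, 0.5):
    print(noise, run(noise))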
Dynamic Motion Generation by Flexible-Joint Robot based on Deep Learning using Images
Yuheng Wu, K. Takahashi, H. Yamada, Kitae Kim, Shingo Murata, S. Sugano, T. Ogata
{"title":"Dynamic Motion Generation by Flexible-Joint Robot based on Deep Learning using Images","authors":"Yuheng Wu, K. Takahashi, H. Yamada, Kitae Kim, Shingo Murata, S. Sugano, T. Ogata","doi":"10.1109/DEVLRN.2018.8761020","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761020","url":null,"abstract":"Robots with flexible joints have recently been attracting attention from researchers because such robots can passively adapt to environmental changes and realize dynamic motion that uses inertia. In previous research, body-model acquisition using deep learning was proposed and dynamic motion learning was achieved. However, using the end-effector position as a visual feedback signal to train a robot limits what the robot can know to only the relation between the task and itself, instead of the relation between the environment and itself. In this research, we propose to use images as a feedback signal so that the robot can have a sense of the overall situation within the task environment. This motion learning is performed via deep learning using raw image data. In an experiment, we let a robot perform task motions once to acquire motor and image data. Then, we used a convolutional auto-encoder to extract image features from raw image data. The extracted image features were used in combination with motor data to train a recurrent neural network. As a result, motion learning through deep learning from image data allowed the robot to acquire environmental information and conduct tasks that require consideration of environmental changes, making use of its advantage of passive adaptation.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121158269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
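A rough PyTorch sketch of the pipeline described above, with layer sizes, joint counts and sequence lengths chosen arbitrarily by me rather than taken from the paper: a convolutional autoencoder compresses raw camera frames, and its bottleneck features are concatenated with joint angles to train an LSTM that predicts the next motor command.

```python
# Conv autoencoder for image features + LSTM over (features, joints) sequences.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self, feat_dim=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 8, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Flatten(), nn.Linear(16 * 16 * 16, feat_dim))
        self.dec = nn.Sequential(
            nn.Linear(feat_dim, 16 * 16 * 16), nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 3, 4, stride=2, padding=1), nn.Sigmoid())
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

class MotionRNN(nn.Module):
    def __init__(self, feat_dim=16, n_joints=7, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim + n_joints, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_joints)
    def forward(self, feats, joints):
        h, _ = self.lstm(torch.cat([feats, joints], dim=-1))
        return self.out(h)                          # predicted next joint angles

images = torch.rand(4 * 20, 3, 64, 64)              # 4 demonstrations x 20 frames
joints = torch.rand(4, 20, 7)
ae, rnn = ConvAE(), MotionRNN()
recon, z = ae(images)
feats = z.view(4, 20, -1).detach()                  # image features per time step
pred = rnn(feats, joints)
loss = nn.functional.mse_loss(recon, images) + nn.functional.mse_loss(pred[:, :-1], joints[:, 1:])
loss.backward()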
Intrinsically Motivated Agent Behavior in a Swarm
Md Mohiuddin Khan, Kathryn E. Kasmarik, M. Barlow
{"title":"Intrinsically Motivated Agent Behavior in a Swarm","authors":"Md Mohiuddin Khan, Kathryn E. Kasmarik, M. Barlow","doi":"10.1109/DEVLRN.2018.8761030","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761030","url":null,"abstract":"Intrinsically motivated artificial agents are capable of open-ended exploration and cumulative learning. Swarm Intelligence is the intelligent behavior demonstrated by a group of simple agents that can solve complex problems as a group. There is relatively little work examining intrinsically motivated agents in a swarm setting. It includes a lack of metrics to measure the effect of intrinsic motivation in a swarm. This paper presents a model for a flock of agents capable of novelty detection and taking action to maximize immediate novelty. The intrinsically motivated behavior of these agents is examined in a simulated gallery environment. We also introduce behavior metrics to quantify the motivated behavior. Our results demonstrate the effectiveness of these metrics to determine the effects of the motivation mechanism as well as the exploratory behavior induced by it.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125361340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
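A toy version of novelty-seeking agents in a shared arena, under my own simplifications (no flocking forces, a plain nearest-neighbour novelty score, and grid coverage as a crude behaviour metric): each agent scores candidate headings by the novelty of the position they would reach and moves toward the most novel one.

```python
# Agents act to maximize immediate novelty of their next observation.
import numpy as np

rng = np.random.default_rng(2)

def novelty(point, archive, k=5):
    """Mean distance to the k nearest archived observations."""
    if len(archive) == 0:
        return np.inf
    d = np.linalg.norm(np.asarray(archive) - point, axis=1)
    return float(np.sort(d)[:k].mean())

n_agents, speed, steps = 10, 0.05, 50
pos = rng.random((n_agents, 2))
archives = [[] for _ in range(n_agents)]
headings = np.linspace(0, 2 * np.pi, 8, endpoint=False)          # candidate actions

for _ in range(steps):
    for i in range(n_agents):
        candidates = pos[i] + speed * np.stack([np.cos(headings), np.sin(headings)], axis=1)
        scores = [novelty(c, archives[i]) for c in candidates]
        pos[i] = np.clip(candidates[int(np.argmax(scores))], 0.0, 1.0)  # act for max novelty
        archives[i].append(pos[i].copy())

coverage = len(np.unique((np.concatenate(archives) * 10).astype(int), axis=0))
print("grid cells visited (a crude behaviour metric):", coverage)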
A Novel Pneumatic Artificial Muscle-driven Robot for Multi-joint Progressive Rehabilitation
Xingxing Guo, Quan Liu, Jie Zuo, W. Meng, Qingsong Ai, Zude Zhou, Wenjun Xu
{"title":"A Novel Pneumatic Artificial Muscle -driven Robot for Multi-joint Progressive Rehabilitation","authors":"Xingxing Guo, Quan Liu, Jie Zuo, W. Meng, Qingsong Ai, Zude Zhou, Wenjun Xu","doi":"10.1109/DEVLRN.2018.8761014","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761014","url":null,"abstract":"Due to the bio-mechanical characteristics and inherent compliance, pneumatic artificial muscles have been widely applied in rehabilitation robotic field. However, the most existing multi-joint rehabilitation robots have the disadvantages of bulky facilities, low utilization rate and high cost; while some rehabilitation robots with simple mechanism are only suitable for a specific joint rehabilitation. This paper presents a single degree of freedom rehabilitation robot with progressive adjustation ability, which can provide suitable assistance for different patient's injury site. By introducing the joint motion radius element, the robot's mechanical parameters, fixed position, drive unit's overhanging state can be adjusted to provide the required range of motion and assistance torque to adapt to each recovery period during the whole rehabilitation process. After the kinematics and dynamics model of the joint mechanism is established, a modified sliding mode control method based on RBF neural network is utilized to compensate the system disturbance and guarantee the robust stability of the control. The experimental results show that the adopted algorithm achieved better control performance than the traditional sliding mode control method, which is suitable for the rehabilitation training of patients during the entire progressive rehabilitation periods.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121729125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
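The control idea can be sketched on a simplified unit-inertia joint of my own construction (not the paper's robot model): a sliding-mode tracking law with an RBF network that adaptively estimates the unknown disturbance, and a tanh switching term in place of sign to limit chattering.

```python
# Sliding-mode tracking with online RBF disturbance compensation (toy 1-DOF joint).
import numpy as np

dt, T = 0.001, 10.0
lam, k, gamma = 5.0, 2.0, 50.0                       # surface slope, switching gain, adaptation rate
centers = np.linspace(-2, 2, 9)                      # RBF centres over the joint-angle range
w = np.zeros(9)                                      # RBF output weights (learned online)

def rbf(x, width=0.5):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

theta, dtheta, log = 0.0, 0.0, []
for step in range(int(T / dt)):
    t = step * dt
    qd, dqd, ddqd = np.sin(0.5 * t), 0.5 * np.cos(0.5 * t), -0.25 * np.sin(0.5 * t)
    e, de = theta - qd, dtheta - dqd
    s = de + lam * e                                 # sliding surface
    phi = rbf(theta)
    d_hat = w @ phi                                  # RBF estimate of the disturbance
    u = ddqd - lam * de - d_hat - k * np.tanh(s / 0.05)
    w += gamma * s * phi * dt                        # adaptive weight update
    d = 0.5 * np.sin(2 * t)                          # unknown disturbance acting on the joint
    ddtheta = u + d                                  # unit-inertia joint dynamics
    dtheta += ddtheta * dt
    theta += dtheta * dt
    log.append(abs(e))

print("mean |tracking error| over the last second:", np.mean(log[-1000:]))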
Visual Learning for Reaching and Body-Schema with Gain-Field Networks
Julien Abrossimoff, Alexandre Pitti, P. Gaussier
{"title":"Visual Learning for Reaching and Body-Schema with Gain-Field Networks","authors":"Julien Abrossimoff, Alexandre Pitti, P. Gaussier","doi":"10.1109/DEVLRN.2018.8761041","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761041","url":null,"abstract":"Perceiving our own body posture improves the way we move dynamically and reversely, motion coordination serves to learn better the position of our own body. Following this idea, we present a neural architecture toward reaching movements and body self-perception from a developmental perspective. Our framework is based on the neurobiological mechanism known as gain modulation in parietal neurons that is found to integrate the visual, motor and proprioceptive information through product-like processes. These multiplicative networks have interesting properties for learning nonlinear transformations such as the head-centered mapping in reaching tasks or the hand-centered mapping for a body-centered representation. In a simulation of a three-link arm, we perform experiments of nearby and far reach targets exploiting one or the other strategy. The later combination of the two networks generates autonomous control toward the target by processing the body-centered spatial information and the preferred visual direction for the desired motor commands.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123694547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
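A minimal sketch of the gain-field (multiplicative) coding idea on a 1-D toy problem of my own, not the paper's three-link arm: units multiply a retinal-position tuning curve by an eye-position gain, and a linear readout trained on this product basis recovers the head-centred target position (retinal + eye), the kind of nonlinear coordinate transformation such networks learn.

```python
# Gain-modulated (product-like) units plus a linear readout for a coordinate transform.
import numpy as np

rng = np.random.default_rng(3)
ret_centers = np.linspace(-1, 1, 15)
eye_centers = np.linspace(-1, 1, 15)

def gain_field(retinal, eye, width=0.3):
    r = np.exp(-((retinal - ret_centers) ** 2) / (2 * width ** 2))
    g = np.exp(-((eye - eye_centers) ** 2) / (2 * width ** 2))
    return np.outer(r, g).ravel()                    # product-like (gain-modulated) units

# Training pairs: random retinal and eye positions, head-centred position as label.
retinal = rng.uniform(-1, 1, 2000)
eye = rng.uniform(-1, 1, 2000)
X = np.array([gain_field(r, e) for r, e in zip(retinal, eye)])
y = retinal + eye                                    # head-centred target position

w, *_ = np.linalg.lstsq(X, y, rcond=None)            # linear readout on the gain-field basis

test_r, test_e = 0.4, -0.7
print("decoded:", gain_field(test_r, test_e) @ w, "true:", test_r + test_e)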