Hash table based feed forward neural networks: A scalable approach towards think aloud imitation

Javeria Iqbal, M. Yousaf
DOI: 10.1109/ICET.2009.5353210
Published in: 2009 International Conference on Emerging Technologies, 2009-12-11
Citations: 0

Abstract

In this paper, we address the problem of the inefficient context modules of recurrent networks (RNs), which form the basis of think aloud: a strategy for imitation. Learning from observation provides an effective way to acquire knowledge of a demonstrated task. To learn complex tasks, rather than simply learning action sequences, the think aloud imitation learning strategy applies a recurrent network model (RNM) [1]. We propose a dynamic task imitation architecture that is efficient in both time and storage. The inefficient recurrent nodes are replaced with a modified feed forward network (FFN) built around a hash table: a single hash store takes the place of multiple recurrent nodes, and the history of inputs is saved for experience-based task learning. Performance evaluation of this approach supports its use for robot training, and it can serve as a replacement for the inefficient recurrent network in applications based on recurrent neural networks.
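The core idea of the abstract — replacing recurrent context nodes with a single hash store of past hidden activations that a feed-forward network reads from and writes to — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, layer sizes, keying scheme, and the choice to store hidden activations keyed by a rounded input are all assumptions made for the example.

```python
import numpy as np

class HashContextFFN:
    """Illustrative sketch: a feed-forward network whose 'context' comes from
    a hash table of previously seen hidden activations, standing in for the
    recurrent context nodes of an RNN. All details are hypothetical."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # The input is concatenated with the retrieved context vector,
        # so the first weight matrix takes n_in + n_hidden inputs.
        self.W1 = rng.normal(0.0, 0.1, (n_in + n_hidden, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.n_hidden = n_hidden
        self.context = {}  # the single hash store replacing recurrent nodes

    def _key(self, x):
        # Hash a rounded copy of the input so near-identical inputs
        # fall into the same bucket (an assumed keying scheme).
        return hash(tuple(np.round(x, 3)))

    def forward(self, x):
        x = np.asarray(x, dtype=float)
        # Retrieve stored experience for this input, or a zero context
        # if the input has never been seen before.
        ctx = self.context.get(self._key(x), np.zeros(self.n_hidden))
        h = np.tanh(np.concatenate([x, ctx]) @ self.W1)
        y = h @ self.W2
        # Save the hidden state as this input's history, so later
        # presentations of the same input reuse the stored experience.
        self.context[self._key(x)] = h
        return y
```

A single dictionary lookup per step replaces the unrolled recurrent state, which is where the claimed time and storage savings would come from: lookup and insertion are expected O(1), and only inputs actually encountered consume storage.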