Feature Construction for Controlling Swarms by Visual Demonstration

K. K. Budhraja, J. Winder, T. Oates
Journal: ACM Transactions on Autonomous and Adaptive Systems (TAAS)
DOI: 10.1145/3084541
Published: 2017-05-25 (Journal Article)
Citations: 5

Abstract

Agent-based modeling is a paradigm for modeling dynamic systems of interacting agents, each governed by specified behavioral rules. From a demonstration perspective, it is easier to train such a model by specifying the emergent (swarm-level) behavior than by specifying agent-level behavior. While many approaches rely on manual behavior specification via code or on a defined taxonomy of possible behaviors, the meta-modeling framework of Miner [2010] generates mapping functions between agent-level and swarm-level parameters that are reusable once generated. This work builds on that framework by integrating demonstration by image or video: the demonstrator specifies the spatial motion of the agents over time and retrieves the agent-level parameters required to execute that motion. At its core, the framework uses computationally cheap image-processing algorithms. We test our work with a combination of primitive visual feature extraction methods (contour area and shape) and features generated by a pre-trained deep neural network at different stages of image featurization, and we also evaluate the framework's potential using complex visual features at all featurization stages. Experimental results show significant coherence between the demonstrated behavior and the behavior predicted from the estimated agent-level parameters specific to the spatial arrangement of agents.
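To make the agent-level vs. swarm-level parameter distinction concrete, here is a minimal sketch (not the authors' code; all names and the `cohesion` parameter are hypothetical). Agents follow a simple local rule parameterized at the agent level, and we measure a swarm-level feature, dispersion, of the kind that a meta-modeling framework like Miner [2010] would map agent-level parameters onto.

```python
# Toy illustration of agent-level parameters producing a measurable
# swarm-level feature. `cohesion` is a hypothetical agent-level parameter;
# `dispersion` is a swarm-level feature (mean distance to the centroid).
import math
import random

def simulate(cohesion, n_agents=30, steps=50, seed=0):
    """Run a toy swarm: each agent drifts randomly and is pulled toward
    the swarm centroid with strength `cohesion`. Returns final positions."""
    rng = random.Random(seed)
    pos = [(rng.uniform(-10, 10), rng.uniform(-10, 10)) for _ in range(n_agents)]
    for _ in range(steps):
        cx = sum(p[0] for p in pos) / n_agents
        cy = sum(p[1] for p in pos) / n_agents
        pos = [
            (x + cohesion * (cx - x) + rng.uniform(-0.5, 0.5),
             y + cohesion * (cy - y) + rng.uniform(-0.5, 0.5))
            for x, y in pos
        ]
    return pos

def dispersion(pos):
    """Swarm-level feature: mean distance of agents from their centroid."""
    cx = sum(p[0] for p in pos) / len(pos)
    cy = sum(p[1] for p in pos) / len(pos)
    return sum(math.hypot(x - cx, y - cy) for x, y in pos) / len(pos)

# A higher agent-level cohesion yields a tighter swarm (lower dispersion),
# which is the kind of agent-to-swarm mapping the framework learns to invert.
loose = dispersion(simulate(cohesion=0.0))
tight = dispersion(simulate(cohesion=0.2))
```

In the paper's setting, such swarm-level features are instead extracted from demonstration images (e.g., contour area and shape, or pre-trained network activations), and the learned mapping is run in reverse to recover the agent-level parameters.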