Title: Agent auto-generation system: Interact with your favorite things
Authors: S. Sawada, Taichi Sono, M. Imai
DOI: 10.1109/ROMAN.2017.8172302
Published in: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
Publication date: 2017-12-08
Citations: 0
Abstract
This paper proposes a framework for an Agent Auto-Generation System (AAGS) that obtains information from sensor devices and improvises a conversational agent from arbitrary things. AAGS uses agent-types to configure the perception of an improvised agent according to the shape of the agentization target. It also provides a virtual-input module, which generates knowledge representations from information extracted from the networked sensor devices. The virtual-input adopts a viewpoint based on the agent-type to generate each knowledge representation and passes it to the improvised agent. Experiments evaluating AAGS revealed that the perception of improvised agents must be designed with the agent-types in mind, and that the viewpoint an agent takes for its perception affects how people recognize the improvised agent.
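To make the pipeline in the abstract concrete, the following is a minimal sketch of how an agent-type's viewpoint might shape the knowledge representations produced by the virtual-input. All class names, fields, and phrasings here are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class AgentType:
    # Hypothetical agent-type: pairs a target's category with the
    # viewpoint used to frame its percepts (an assumption for this sketch).
    name: str           # e.g. the kind of agentization target
    viewpoint: str      # "first-person" or "third-person" framing

    def frame(self, reading: dict) -> str:
        # Turn one raw sensor reading into a knowledge representation
        # expressed from this agent-type's viewpoint.
        if self.viewpoint == "first-person":
            return f"I sense {reading['event']} at my {reading['location']}."
        return f"{reading['event']} was detected near the {reading['location']}."

def virtual_input(agent_type: AgentType, sensor_readings: list) -> list:
    # Sketch of the virtual-input module: maps each networked sensor
    # reading to a knowledge representation shaped by the agent-type.
    return [agent_type.frame(r) for r in sensor_readings]

# Example: a mug agentized with a first-person viewpoint.
readings = [{"event": "a touch", "location": "handle"}]
agent = AgentType(name="mug", viewpoint="first-person")
print(virtual_input(agent, readings))
```

Under this sketch, changing only the `viewpoint` field alters how the same sensor event is expressed, which mirrors the paper's finding that the viewpoint used for perception affects how people recognize the improvised agent.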