SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence: Latest Publications

A neurobehavioural framework for autonomous animation of virtual human faces
SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence · Pub Date: 2014-11-24 · DOI: 10.1145/2668956.2668960
Mark Sagar, D. Bullivant, Paul Robertson, Oleg Efimov, K. Jawed, R. Kalarot, Tim Wu
{"title":"A neurobehavioural framework for autonomous animation of virtual human faces","authors":"Mark Sagar, D. Bullivant, Paul Robertson, Oleg Efimov, K. Jawed, R. Kalarot, Tim Wu","doi":"10.1145/2668956.2668960","DOIUrl":"https://doi.org/10.1145/2668956.2668960","url":null,"abstract":"We describe a neurobehavioural modeling and visual computing framework for the integration of realistic interactive computer graphics with neural systems modelling, allowing real-time autonomous facial animation and interactive visualization of the underlying neural network models. The system has been designed to integrate and interconnect a wide range of computational neuroscience models to construct embodied interactive psychobiological models of behaviour. An example application of the framework combines models of the facial motor system, physiologically based emotional systems, and basic neural systems involved in early interactive behaviour and learning and embodies them in a virtual infant rendered with realistic computer graphics. The model reacts in real time to visual and auditory input and its own evolving internal processes as a dynamic system. The live state of the model which generates the resulting facial behaviour can be visualized through graphs and schematics or by exploring the activity mapped to the underlying neuroanatomy.","PeriodicalId":220010,"journal":{"name":"SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132026549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
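The framework's defining loop is a dynamic system: internal neural and physiological state evolves continuously and drives the face in real time. As a rough sketch of that pattern only, here is a leaky-integrator activation unit mapped to blendshape weights each frame; the class, time constants, and blendshape names are hypothetical, not the authors' model.

```python
class LeakyEmotionUnit:
    """Hypothetical leaky-integrator activation; a stand-in for the paper's
    physiologically based emotional systems, not their actual model."""

    def __init__(self, tau):
        self.tau = tau        # decay time constant in seconds
        self.level = 0.0      # current activation, clamped to [0, 1]

    def step(self, stimulus, dt):
        # Forward-Euler integration of dx/dt = (stimulus - x) / tau.
        self.level += dt * (stimulus - self.level) / self.tau
        self.level = min(max(self.level, 0.0), 1.0)
        return self.level


def blendshape_weights(joy, surprise):
    # Map scalar activations to illustrative facial blendshape weights.
    return {"smile": joy, "brow_raise": 0.7 * surprise, "jaw_open": 0.3 * surprise}


# 60 Hz loop: a stimulus appears (e.g., a face is detected), internal state
# evolves as a dynamic system, and the rendered face follows every frame.
joy, surprise = LeakyEmotionUnit(tau=0.8), LeakyEmotionUnit(tau=0.3)
for frame in range(180):
    stimulus = 1.0 if 30 <= frame < 90 else 0.0
    weights = blendshape_weights(joy.step(stimulus, 1 / 60),
                                 surprise.step(stimulus, 1 / 60))
```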
On designing migrating agents: from autonomous virtual agents to intelligent robotic systems
SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence · Pub Date: 2014-11-24 · DOI: 10.1145/2668956.2668963
Kaveh Hassani, Won-sook Lee
{"title":"On designing migrating agents: from autonomous virtual agents to intelligent robotic systems","authors":"Kaveh Hassani, Won-sook Lee","doi":"10.1145/2668956.2668963","DOIUrl":"https://doi.org/10.1145/2668956.2668963","url":null,"abstract":"In the realm of multi-agent systems, migration refers to the ability of an agent to transfer itself from one embodiment such as a graphical avatar into different embodiments such as a robotic android. Embodied agents usually function in a dynamic, uncertain, and uncontrolled environment, and exploiting them is a chaotic and error-prone task which demands high-level behavioral controllers to be able to adapt to failure at lower levels of the system. The conditions in which space robotic systems such as spacecraft and rovers operate, inspire by necessity, the development of robust and adaptive control software. In this paper, we propose a generic architecture for migrating and autonomous agents inspired by onboard autonomy which enables the developers to tailor the agent's embodiment by defining a set of feasible actions and perceptions associated with the new body. Evaluation results suggest that the architecture supports migration by performing consistent deliberative and reactive behaviors.","PeriodicalId":220010,"journal":{"name":"SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131831501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
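The architecture's key idea is that the deliberative layer stays fixed while each embodiment is described by its set of feasible actions and perceptions. A minimal sketch of that interface, using hypothetical names (Embodiment, MigratingAgent) rather than the paper's actual components:

```python
from dataclasses import dataclass, field


@dataclass
class Embodiment:
    """Hypothetical embodiment descriptor: the body declares what the agent
    may do and sense, in the spirit of the paper's tailorable action sets."""
    name: str
    actions: set = field(default_factory=set)
    percepts: set = field(default_factory=set)


class MigratingAgent:
    def __init__(self):
        self.body = None
        self.goals = ["greet_user"]

    def migrate(self, body: Embodiment):
        # Rebinding keeps the deliberative layer; only the grounding changes.
        self.body = body

    def act(self, intended_action: str) -> str:
        # Degrade gracefully when the new body lacks an action.
        if intended_action in self.body.actions:
            return f"{self.body.name} executes {intended_action}"
        return f"{self.body.name} cannot {intended_action}; replanning"


avatar = Embodiment("avatar", {"speak", "gesture", "emote"}, {"camera"})
robot = Embodiment("robot", {"speak", "drive"}, {"camera", "sonar"})
agent = MigratingAgent()
agent.migrate(avatar); print(agent.act("gesture"))
agent.migrate(robot);  print(agent.act("gesture"))  # falls back to replanning
```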
Coexistent space: toward seamless integration of real, virtual, and remote worlds for 4D+ interpersonal interaction and collaboration
SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence · Pub Date: 2014-11-24 · DOI: 10.1145/2668956.2668957
Bum-Jae You, J. R. Kwon, S. Nam, Jung-Jea Lee, Kwang-Kyu Lee, Ki-Won Yeom
{"title":"Coexistent space: toward seamless integration of real, virtual, and remote worlds for 4D+ interpersonal interaction and collaboration","authors":"Bum-Jae You, J. R. Kwon, S. Nam, Jung-Jea Lee, Kwang-Kyu Lee, Ki-Won Yeom","doi":"10.1145/2668956.2668957","DOIUrl":"https://doi.org/10.1145/2668956.2668957","url":null,"abstract":"Three worlds are integral to our daily life: the real world, virtual world, and remote world. In the paper, there is proposed coexistent space where networked users can communicate, interact, and collaborate together by exchanging 4D+ sensation, human intension, and emotion. The 4D+ sensation includes 3D vision, 3D sound, force and torque, touch, and movements. The coexistent space is generated by seamless integration of real, virtual, and remote worlds while networked users experience the feeling of coexistence through 4D+ bi-directional interaction. Initial software framework and experimental results for interaction between multiple remote users are shown successfully.","PeriodicalId":220010,"journal":{"name":"SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126514875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
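Exchanging 4D+ sensation between networked users implies some per-tick multimodal message. A minimal sketch of what such a wire format could look like; the field layout is an assumption made for illustration, not the paper's protocol:

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class SensationFrame:
    """Hypothetical wire format for one tick of 4D+ sensation exchange;
    the fields are illustrative, not the authors' actual message schema."""
    user_id: str
    timestamp: float
    pose: list            # head/hand pose, e.g. [x, y, z, qx, qy, qz, qw]
    force_torque: list    # 6-DoF haptic feedback [fx, fy, fz, tx, ty, tz]
    touch: list           # contact points as [x, y, z] triples
    audio_chunk_id: int   # reference into a separate 3D-audio stream


frame = SensationFrame("alice", time.time(),
                       pose=[0, 1.6, 0, 0, 0, 0, 1],
                       force_torque=[0, 0, -2.5, 0, 0, 0],
                       touch=[[0.1, 1.2, 0.4]],
                       audio_chunk_id=42)
payload = json.dumps(asdict(frame))   # broadcast to the other worlds
```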
Activity recognition in unconstrained RGB-D video using 3D trajectories
SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence · Pub Date: 2014-11-24 · DOI: 10.1145/2668956.2668961
Yang Xiao, Gangqiang Zhao, Junsong Yuan, D. Thalmann
{"title":"Activity recognition in unconstrained RGB-D video using 3D trajectories","authors":"Yang Xiao, Gangqiang Zhao, Junsong Yuan, D. Thalmann","doi":"10.1145/2668956.2668961","DOIUrl":"https://doi.org/10.1145/2668956.2668961","url":null,"abstract":"Human activity recognition in unconstrained RGB--D videos has extensive applications in surveillance, multimedia data analytics, human-computer interaction, etc, but remains a challenging problem due to the background clutter, camera motion, viewpoint changes, etc. We develop a novel RGB--D activity recognition approach that leverages the dense trajectory feature in RGB videos. By mapping the 2D positions of the dense trajectories from RGB video to the corresponding positions in the depth video, we can recover the 3D trajectory of the tracked interest points, which captures important motion information along the depth direction. To characterize the 3D trajectories, we apply motion boundary histogram (MBH) to depth direction and propose 3D trajectory shape descriptors. Our proposed 3D trajectory feature is a good complementary to dense trajectory feature extracted from RGB video only. The performance evaluation on a challenging unconstrained RGB--D activity recognition dataset, i.e., Hollywood 3D, shows that our proposed method outperforms the baseline methods (STIP-based) significantly, and achieves the state-of-the-art performance.","PeriodicalId":220010,"journal":{"name":"SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117236130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
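The central step, back-projecting a tracked 2D trajectory point into 3D using the aligned depth frame, follows the standard pinhole camera model. A sketch under that assumption (fx, fy, cx, cy are intrinsics; the shape descriptor here is a simplified analogue of trajectory-shape features, not the paper's exact MBH-based descriptor):

```python
import numpy as np


def lift_trajectory(points_2d, depth_frames, fx, fy, cx, cy):
    """Back-project a tracked 2D trajectory into 3D using aligned depth maps.
    A sketch of the general idea: pinhole intrinsics, depth in meters."""
    traj_3d = []
    for (u, v), depth in zip(points_2d, depth_frames):
        z = depth[int(round(v)), int(round(u))]
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        traj_3d.append((x, y, z))
    return np.array(traj_3d)


def shape_descriptor(traj_3d):
    # Normalized displacement vectors: a simple 3D trajectory-shape cue,
    # analogous in spirit to descriptors built on dense trajectories.
    d = np.diff(traj_3d, axis=0)
    return (d / (np.linalg.norm(d, axis=1, keepdims=True) + 1e-8)).ravel()


# Synthetic 15-frame track over a flat depth plane (Kinect-like intrinsics).
depths = [np.full((480, 640), 2.0) for _ in range(15)]
pts = [(320 + t, 240) for t in range(15)]
desc = shape_descriptor(lift_trajectory(pts, depths, 525.0, 525.0, 319.5, 239.5))
```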
From fiber to fabric: interactive clothing for virtual humans
SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence · Pub Date: 2014-11-24 · DOI: 10.1145/2668956.2668959
G. Baciu, Wingo Sai-Keung Wong
{"title":"From fiber to fabric: interactive clothing for virtual humans","authors":"G. Baciu, Wingo Sai-Keung Wong","doi":"10.1145/2668956.2668959","DOIUrl":"https://doi.org/10.1145/2668956.2668959","url":null,"abstract":"The virtual world is currently limited in the graphic representation and visualization of material designs. The limitation is in part due to the limited range and the relative simplicity of spectral properties of the materials simulated in a virtual environment. Our work starts with a fast classification and retrieval for handling the large numbers of multi-scale texture samples of complex deformable materials, such as, woven and knitted fabrics. We have developed a general system that can also serve as a unified platform for other material analysis and classification and a 3D panel design system for 3D clothing modeling and draping. Multi-scale color theme indexing for image acquisition and retrieval can be more intuitively supported by a multi-touch gesture interface that is now the preferred mode of interacting with tablets, screens, and other visual communication devices. These also add to the collaborative modes of input and retrieval in fabric and fashion design for virtual agents. In this paper we describe the process of texture analysis of real fabric materials and preliminary models for the 3D texture generation for virtual clothing, color theme design, and ultimately 3D draping based on robust collision detection methods.","PeriodicalId":220010,"journal":{"name":"SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence","volume":"26 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133686244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
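For the classification-and-retrieval front end, one common pattern is a multi-scale descriptor compared by nearest neighbor. The sketch below illustrates that pattern with a downsampling-pyramid histogram; the authors' actual fabric features are not specified here, so everything in it is a stand-in:

```python
import numpy as np


def multiscale_descriptor(img, levels=3, bins=16):
    """Hypothetical multi-scale texture signature: per-level intensity
    histograms over a downsampling pyramid (a stand-in for the paper's
    fabric features, which are not detailed in the abstract)."""
    feats = []
    for _ in range(levels):
        hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
        feats.append(hist)
        img = img[::2, ::2]                  # halve resolution each level
    return np.concatenate(feats)


def retrieve(query, gallery):
    # Nearest neighbor by L2 distance over descriptors.
    dists = [np.linalg.norm(query - g) for g in gallery]
    return int(np.argmin(dists))


rng = np.random.default_rng(0)
swatches = [rng.random((64, 64)) for _ in range(5)]     # fake fabric scans
gallery = [multiscale_descriptor(s) for s in swatches]
print(retrieve(multiscale_descriptor(swatches[2]), gallery))  # -> 2
```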
Polynormal Fisher vector for activity recognition from depth sequences
SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence · Pub Date: 2014-11-24 · DOI: 10.1145/2668956.2668962
Xiaodong Yang, Yingli Tian
{"title":"Polynormal Fisher vector for activity recognition from depth sequences","authors":"Xiaodong Yang, Yingli Tian","doi":"10.1145/2668956.2668962","DOIUrl":"https://doi.org/10.1145/2668956.2668962","url":null,"abstract":"The advent of depth sensors has facilitated a variety of visual recognition tasks including human activity understanding. This paper presents a novel feature representation to recognize human activities from video sequences captured by a depth camera. We assemble local neighboring hypersurface normals from a depth sequence to form the polynormal which jointly encodes local motion and shape cues. Fisher vector is employed to aggregate the low-level polynormals into the Polynormal Fisher Vector. In order to capture the global spatial layout and temporal order, we employ a spatio-temporal pyramid to subdivide a depth sequence into a set of space-time cells. Polynormal Fisher Vectors from these cells are combined as the final representation of a depth video. Experimental results demonstrate that our method achieves the state-of-the-art results on the two public benchmark datasets, i.e., MSRAction3D and MSRGesture3D.","PeriodicalId":220010,"journal":{"name":"SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132977758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
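A rough sketch of the low-level stage: per-pixel normals from depth, stacked over local neighborhoods into "polynormal" vectors that a Fisher vector (e.g., over a GMM codebook) would then aggregate. This is a single-frame simplification; the paper's hypersurface normals span space-time:

```python
import numpy as np


def depth_normals(depth):
    """Per-pixel surface normals from a depth map via finite differences.
    A simplified stand-in for the paper's hypersurface normals."""
    dzdx = np.gradient(depth, axis=1)
    dzdy = np.gradient(depth, axis=0)
    n = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)


def polynormals(depth, k=2):
    # Stack each k x k neighborhood of normals into one vector (a
    # "polynormal"); these low-level vectors would then be aggregated
    # into a Fisher vector, e.g. against a GMM codebook.
    n = depth_normals(depth)
    h, w, _ = n.shape
    out = []
    for y in range(0, h - k, k):
        for x in range(0, w - k, k):
            out.append(n[y:y + k, x:x + k].ravel())
    return np.array(out)


local = polynormals(np.random.default_rng(1).random((32, 32)))
print(local.shape)  # (num_cells, k*k*3)
```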
Tracking and fusion for multiparty interaction with a virtual character and a social robot
SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence · Pub Date: 2014-11-24 · DOI: 10.1145/2668956.2668958
Zerrin Yumak, Jianfeng Ren, N. Magnenat-Thalmann, Junsong Yuan
{"title":"Tracking and fusion for multiparty interaction with a virtual character and a social robot","authors":"Zerrin Yumak, Jianfeng Ren, N. Magnenat-Thalmann, Junsong Yuan","doi":"10.1145/2668956.2668958","DOIUrl":"https://doi.org/10.1145/2668956.2668958","url":null,"abstract":"To give human-like capabilities to artificial characters, we should equip them with the ability of inferring user states. These artificial characters should understand the users' behaviors through various sensors and respond back using multimodal output. Besides natural multimodal interaction, they should also be able to communicate with multiple users and among each other in multiparty interactions. Previous work on interactive virtual humans and social robots mainly focuses on one-to-one interactions. In this paper, we study tracking and fusion aspects of multiparty interactions. We first give a general overview of our proposed multiparty interaction system and mention how it is different from previous work. Then, we provide the details of the tracking and fusion component including speaker identification, addressee detection and a dynamic user entrance/leave mechanism based on user re-identification using a Kinect sensor. Finally, we present a case study with the system and provide a discussion on the current capabilities, limitations and future work.","PeriodicalId":220010,"journal":{"name":"SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130909203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
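One plausible shape for the speaker-identification part of the fusion component is matching the Kinect's sound-source bearing against the bearings of tracked users. The matching rule below is our assumption for illustration, not the paper's algorithm:

```python
import math
from dataclasses import dataclass


@dataclass
class TrackedUser:
    user_id: str
    angle: float      # bearing from the sensor, in radians
    is_new: bool      # set by a (re-)identification step on entrance


def identify_speaker(users, sound_angle, tol=math.radians(15)):
    """Hypothetical fusion step: attribute speech to the tracked user whose
    bearing best matches the audio source angle. The Kinect supplies both
    body tracking and sound-source localization; this rule is ours."""
    best, best_err = None, tol
    for u in users:
        err = abs(u.angle - sound_angle)
        if err < best_err:
            best, best_err = u, err
    return best


users = [TrackedUser("u1", math.radians(-20), False),
         TrackedUser("u2", math.radians(25), True)]
speaker = identify_speaker(users, sound_angle=math.radians(22))
print(speaker.user_id if speaker else "unattributed speech")  # -> u2
```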
SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence
{"title":"SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence","authors":"","doi":"10.1145/2668956","DOIUrl":"https://doi.org/10.1145/2668956","url":null,"abstract":"","PeriodicalId":220010,"journal":{"name":"SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence","volume":"440 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123583150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3