A neurobehavioural framework for autonomous animation of virtual human faces

Mark Sagar, D. Bullivant, Paul Robertson, Oleg Efimov, K. Jawed, R. Kalarot, Tim Wu
{"title":"A neurobehavioural framework for autonomous animation of virtual human faces","authors":"Mark Sagar, D. Bullivant, Paul Robertson, Oleg Efimov, K. Jawed, R. Kalarot, Tim Wu","doi":"10.1145/2668956.2668960","DOIUrl":null,"url":null,"abstract":"We describe a neurobehavioural modeling and visual computing framework for the integration of realistic interactive computer graphics with neural systems modelling, allowing real-time autonomous facial animation and interactive visualization of the underlying neural network models. The system has been designed to integrate and interconnect a wide range of computational neuroscience models to construct embodied interactive psychobiological models of behaviour. An example application of the framework combines models of the facial motor system, physiologically based emotional systems, and basic neural systems involved in early interactive behaviour and learning and embodies them in a virtual infant rendered with realistic computer graphics. The model reacts in real time to visual and auditory input and its own evolving internal processes as a dynamic system. The live state of the model which generates the resulting facial behaviour can be visualized through graphs and schematics or by exploring the activity mapped to the underlying neuroanatomy.","PeriodicalId":220010,"journal":{"name":"SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"26","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2668956.2668960","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 26

Abstract

We describe a neurobehavioural modelling and visual computing framework for integrating realistic interactive computer graphics with neural systems modelling, allowing real-time autonomous facial animation and interactive visualization of the underlying neural network models. The system is designed to integrate and interconnect a wide range of computational neuroscience models to construct embodied, interactive psychobiological models of behaviour. An example application of the framework combines models of the facial motor system, physiologically based emotional systems, and basic neural systems involved in early interactive behaviour and learning, and embodies them in a virtual infant rendered with realistic computer graphics. The model reacts in real time, as a dynamic system, to visual and auditory input and to its own evolving internal processes. The live state of the model, which generates the resulting facial behaviour, can be visualized through graphs and schematics, or by exploring the activity mapped onto the underlying neuroanatomy.
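To make the idea of interconnected neural models driving facial animation concrete, the sketch below shows one plausible way such a coupled update loop could be organised: model components are stepped each frame against a shared state, and their outputs feed the facial motor layer. This is a minimal illustration only; the class and signal names (NeuralModule, "arousal", "smile_zygomaticus", emotion_step, motor_step) are invented here and do not correspond to the authors' actual framework or API.

```python
# Hypothetical sketch of a per-frame module-coupling loop, in the spirit of the
# framework described in the abstract. All names and dynamics are illustrative.

class NeuralModule:
    """A generic model component exposing named output signals."""

    def __init__(self, name, step_fn, outputs):
        self.name = name
        self.step_fn = step_fn                      # (inputs, dt) -> dict of outputs
        self.outputs = dict.fromkeys(outputs, 0.0)

    def step(self, inputs, dt):
        self.outputs.update(self.step_fn(inputs, dt))


def emotion_step(inputs, dt):
    # Toy first-order dynamics: arousal relaxes toward current sensory salience.
    arousal = inputs.get("arousal", 0.0)
    salience = inputs.get("visual_salience", 0.0)
    return {"arousal": arousal + dt * (salience - arousal)}


def motor_step(inputs, dt):
    # Map internal emotional state to a facial muscle activation weight.
    return {"smile_zygomaticus": max(0.0, min(1.0, inputs.get("arousal", 0.0)))}


def run_frame(modules, state, dt):
    """Advance all modules one time step, merging outputs into the shared state."""
    for module in modules:
        module.step(state, dt)
        state.update(module.outputs)
    return state


if __name__ == "__main__":
    modules = [
        NeuralModule("emotion", emotion_step, ["arousal"]),
        NeuralModule("facial_motor", motor_step, ["smile_zygomaticus"]),
    ]
    state = {"visual_salience": 0.8, "arousal": 0.0}
    for _ in range(5):                              # a few frames at ~30 Hz
        state = run_frame(modules, state, dt=1.0 / 30.0)
        print(round(state["smile_zygomaticus"], 3))
```

In such a design the shared state dictionary doubles as the data source for live visualization: the same signals that drive the face can be plotted or mapped onto the underlying neuroanatomy, as the abstract describes.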