PSM: Parametric Saliency Maps for Autonomous Pedestrians
Melissa Kremer, Peter Caruana, M. B. Haworth, Mubbasir Kapadia, P. Faloutsos
Proceedings of the 14th ACM SIGGRAPH Conference on Motion, Interaction and Games, 2021-11-10. DOI: 10.1145/3487983.3488299
Citations: 4
Abstract
Modeling visual attention is an important aspect of simulating realistic virtual humans. This work proposes a parametric model and method for generating, in real time, saliency maps from the perspective of virtual agents that approximate those produced by vision-based saliency approaches. The model aggregates a saliency score from user-defined parameters for the objects and characters in an agent’s view and uses it to output a 2D saliency map, which can be modulated by an attention field to incorporate 3D information as well as a character’s state of attentiveness. The aggregate and parameterized structure of the method allows the user to model a range of diverse agents, and the model can be extended with additional layers and parameters. The proposed method can be combined with normative and pathological models of the human visual field and with gaze controllers, such as the recently proposed model of egocentric distractions for casual pedestrians that we use in our results.
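The sketch below illustrates the general idea described in the abstract, not the authors' implementation: a scalar saliency score is aggregated from user-defined per-object parameters, splatted into a 2D map over the agent's view, and then modulated by an attention field scaled by the agent's attentiveness. The specific parameters (size, speed, distance), the weighted-sum aggregation, and the Gaussian splatting/attention model are all illustrative assumptions.

```python
# Minimal sketch of a parametric saliency map, assuming a weighted-sum score
# per object and Gaussian splatting; parameter choices are hypothetical.
import numpy as np

def object_saliency(size, speed, distance, weights=(1.0, 1.0, 1.0)):
    """Aggregate a scalar saliency score from per-object parameters (assumed set)."""
    w_size, w_speed, w_dist = weights
    return w_size * size + w_speed * speed + w_dist / (1.0 + distance)

def saliency_map(objects, shape=(64, 64), sigma=4.0, attentiveness=1.0):
    """Render a 2D saliency map: one Gaussian splat per object, scaled by its
    score, then modulated by a radial attention field centred on the view."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    smap = np.zeros(shape, dtype=np.float32)
    for obj in objects:
        score = object_saliency(obj["size"], obj["speed"], obj["distance"])
        cx, cy = obj["view_pos"]  # projected position in view/image coordinates
        smap += score * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    # Attention field: stronger near the view centre, scaled by the agent's
    # state of attentiveness (a crude stand-in for the paper's attention field).
    att = attentiveness * np.exp(-((xs - w / 2) ** 2 + (ys - h / 2) ** 2)
                                 / (2 * (0.4 * w) ** 2))
    return smap * att

if __name__ == "__main__":
    objs = [{"size": 1.0, "speed": 0.5, "distance": 3.0, "view_pos": (20, 32)},
            {"size": 0.3, "speed": 2.0, "distance": 8.0, "view_pos": (50, 10)}]
    m = saliency_map(objs)
    print(m.shape, float(m.max()))
```

Because the score is a simple aggregate of named parameters, different agent "personalities" (e.g. more distractible or more goal-focused pedestrians) could be obtained by changing the weights or adding further terms, which mirrors the extensibility the abstract claims for the model.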