Dynamic context driven human detection and tracking in meeting scenarios
Peng Dai, L. Tao, Guangyou Xu
International Conference on Computer Vision Theory and Applications
DOI: 10.5220/0002070200310038
Citations: 6
Abstract
As a significant part of context-aware systems, human-centered visual processing must be adaptive and interactive within a dynamic context in real-life situations. In this paper, a novel integrated bottom-up and top-down approach is proposed to solve the problem of dynamic context driven visual processing in meeting scenarios. A set of visual detection, tracking, and verification modules is organized to extract rough-level visual information, on which a bottom-up context analysis is performed through a Bayesian network. In reverse, the results of scene analysis are applied as top-down guidance to control refined-level visual processing. The system has been tested in a real-life meeting environment covering three typical scenarios: speech, discussion, and meeting break. The experiments show the effectiveness and robustness of our approach under continuously changing meeting scenarios and dynamic context.
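The bottom-up step described above, inferring the current meeting scenario from rough-level visual cues via a Bayesian network, can be illustrated with a minimal sketch. This is not the paper's implementation: the cues, conditional probabilities, and a naive-Bayes factorization are all illustrative assumptions standing in for the network structure the authors actually learned or specified.

```python
# Minimal sketch (assumptions, not the paper's model): infer the meeting
# scenario from a few hypothetical binary cues that rough-level detection
# and tracking modules might report, using naive-Bayes inference.

SCENARIOS = ["speech", "discussion", "break"]

# Assumed uniform prior over scenarios.
prior = {s: 1.0 / len(SCENARIOS) for s in SCENARIOS}

# Assumed P(cue present | scenario) for each hypothetical cue.
likelihood = {
    "one_person_standing": {"speech": 0.9, "discussion": 0.2, "break": 0.1},
    "many_people_moving":  {"speech": 0.1, "discussion": 0.3, "break": 0.8},
    "people_seated":       {"speech": 0.8, "discussion": 0.9, "break": 0.2},
}

def infer_scenario(observed_cues):
    """Return the posterior over scenarios given cue -> bool observations."""
    post = dict(prior)
    for cue, present in observed_cues.items():
        for s in SCENARIOS:
            p = likelihood[cue][s]
            post[s] *= p if present else (1.0 - p)
    total = sum(post.values())          # normalize to a distribution
    return {s: v / total for s, v in post.items()}

posterior = infer_scenario({
    "one_person_standing": True,
    "many_people_moving": False,
    "people_seated": True,
})
best = max(posterior, key=posterior.get)   # most probable scenario
```

The inferred scenario would then feed back as the top-down guidance the abstract mentions, e.g. switching the tracker to a single-speaker mode when the posterior favors "speech".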