{"title":"说话头:介绍三维运动场在动作研究中的工具","authors":"J. Neumann, Y. Aloimonos","doi":"10.1109/HUMO.2000.897367","DOIUrl":null,"url":null,"abstract":"We demonstrate a method to complete three-dimensional (3D) motion fields on a face to serve as an intermediate representation for the study of actions. Twelve synchronized and calibrated cameras are positioned all around a talking person and observe its head in motion. We represent the head as a deformable mesh, which is fitted in a global optimization step to silhouette-contour and multi-camera stereo data derived from all images. The non-rigid displacement of the mesh from frame to frame, the 3D motion field, is determined from the normal flow information in all the images. We integrate these cues over time, thus producing a spatio-temporal representation of the talking head. Our ability to estimate 3D motion fields points to a new framework for the study of action. Using multicamera configurations we can estimate a sequence of evolving 3D motion fields representing specific actions. Then, by performing a geometric and statistical analysis on these structures, we can achieve dimensionality reduction and thus come up with powerful representations of generic human action.","PeriodicalId":384462,"journal":{"name":"Proceedings Workshop on Human Motion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2000-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Talking heads: introducing the tool of 3D motion fields in the study of action\",\"authors\":\"J. Neumann, Y. Aloimonos\",\"doi\":\"10.1109/HUMO.2000.897367\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We demonstrate a method to complete three-dimensional (3D) motion fields on a face to serve as an intermediate representation for the study of actions. Twelve synchronized and calibrated cameras are positioned all around a talking person and observe its head in motion. We represent the head as a deformable mesh, which is fitted in a global optimization step to silhouette-contour and multi-camera stereo data derived from all images. The non-rigid displacement of the mesh from frame to frame, the 3D motion field, is determined from the normal flow information in all the images. We integrate these cues over time, thus producing a spatio-temporal representation of the talking head. Our ability to estimate 3D motion fields points to a new framework for the study of action. Using multicamera configurations we can estimate a sequence of evolving 3D motion fields representing specific actions. 
Then, by performing a geometric and statistical analysis on these structures, we can achieve dimensionality reduction and thus come up with powerful representations of generic human action.\",\"PeriodicalId\":384462,\"journal\":{\"name\":\"Proceedings Workshop on Human Motion\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2000-12-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings Workshop on Human Motion\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HUMO.2000.897367\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Workshop on Human Motion","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HUMO.2000.897367","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Talking heads: introducing the tool of 3D motion fields in the study of action
We demonstrate a method to compute three-dimensional (3D) motion fields on a face that serve as an intermediate representation for the study of action. Twelve synchronized and calibrated cameras are positioned around a talking person and observe the person's head in motion. We represent the head as a deformable mesh, which is fitted in a global optimization step to silhouette-contour and multi-camera stereo data derived from all images. The non-rigid displacement of the mesh from frame to frame, the 3D motion field, is determined from the normal flow information in all the images. We integrate these cues over time, producing a spatio-temporal representation of the talking head. Our ability to estimate 3D motion fields points to a new framework for the study of action. Using multi-camera configurations, we can estimate a sequence of evolving 3D motion fields representing specific actions. Then, by performing geometric and statistical analysis on these structures, we can achieve dimensionality reduction and thus arrive at powerful representations of generic human action.
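The abstract does not spell out the estimation equations; as a rough sketch of the kind of constraint that normal flow provides (assuming perspective projection and brightness constancy, which are standard assumptions but not stated here), each camera contributes one linear equation on the 3D velocity of a visible surface point. Brightness constancy gives

$$
\nabla I \cdot \mathbf{u} + I_t = 0,
\qquad
u_\perp \;=\; \mathbf{u} \cdot \frac{\nabla I}{\|\nabla I\|} \;=\; -\frac{I_t}{\|\nabla I\|},
$$

so only the normal component of the image velocity is directly measurable. Writing the image velocity of a surface point \(\mathbf{X}\) with 3D velocity \(\dot{\mathbf{X}}\) in camera \(c\) as \(\mathbf{u}_c = J_c(\mathbf{X})\,\dot{\mathbf{X}}\), where \(J_c\) is the Jacobian of that camera's projection, the normal-flow measurement constrains the 3D motion field by

$$
\frac{\nabla I_c^{\top}}{\|\nabla I_c\|}\, J_c(\mathbf{X})\,\dot{\mathbf{X}}
\;=\;
-\frac{1}{\|\nabla I_c\|}\,\frac{\partial I_c}{\partial t}.
$$

With twelve cameras, a point visible in several views is typically overconstrained, so per-vertex velocities (and hence the non-rigid mesh displacement) can be recovered, for example, by least squares. The notation \(J_c\) and the least-squares step are our illustration of the general idea, not details taken from the paper.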