{"title":"Phase in model-free perception of gait","authors":"J. Boyd, J. Little","doi":"10.1109/HUMO.2000.897363","DOIUrl":"https://doi.org/10.1109/HUMO.2000.897363","url":null,"abstract":"Variations in human gaits are manifest in the timing of the many combined motions in the gait. In periodic systems, such as a gait, timing reduces to phase. Therefore, in order to capture the important information in the timing patterns in a gait, one must consider phase. Gaits vary for several reasons, including different builds, moods of individuals, fatigue and injury. We investigate the relationship between the model-free shape-of-motion phase analysis and a subjective description of gait, such as a normal gait versus a tired gait or a shuffle, by analyzing several gait image sequences that differ subjectively. A simple model based on a phasor representation of gait motion relates the pendulum-like motion of limbs to shape-of-motion features. Our ultimate goal is to develop a gait feature space that can be partitioned according to subjective perception of gait. Gait features that vary with subjective changes in gait lead in this direction.","PeriodicalId":384462,"journal":{"name":"Proceedings Workshop on Human Motion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128725825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ray carving with gradients and motion","authors":"B. Stuart, Y. Aloimonos","doi":"10.1109/HUMO.2000.897375","DOIUrl":"https://doi.org/10.1109/HUMO.2000.897375","url":null,"abstract":"Recent developments in camera and computer technology have made multiple-camera systems less expensive and more usable. Using such systems, we can generate 3D models of human activity for use in surveillance, as avatars, or for 3D effects generation. Some approaches to model generation are voxel coloring, space carving, silhouette intersection, and the combination of multiple stereo reconstructions. Our attempt to overcome various shortcomings of the above approaches has led to the use of image derivatives and motion to determine the shape and motion of the activity in view. Direct computations of the gradient directions and the image motion normal to the gradient provide the information to generate a 3D+motion model that is consistent with all the image data. Data structures encode visibility information from each of the cameras surrounding the scene, allowing efficient determination of the subsets of measurements to be combined in a modified space-carving system. The main contributions of this paper are: the development of a system for combining multiple image gradient measurements to determine the 3D iso-brightness direction and its consistency, a system for combining multiple normal flow measurements to determine the motion normal to the iso-brightness direction, and a data structure based on the rays passing through the centers of projection and the image pixels, forming an unbounded projective grid through the space of the scene and allowing efficient determination and updating of scene point visibility. Reconstructions of human motion using 20 cameras are presented.","PeriodicalId":384462,"journal":{"name":"Proceedings Workshop on Human Motion","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121681111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling the constraints of human hand motion","authors":"John Y. Lin, Ying Wu, Thomas S. Huang","doi":"10.1109/HUMO.2000.897381","DOIUrl":"https://doi.org/10.1109/HUMO.2000.897381","url":null,"abstract":"Hand motion capture is one of the most important parts of gesture interfaces. Many current approaches to this task generally involve a formidable nonlinear optimization problem in a large search space. Motion capture can be achieved more cost-efficiently when considering the motion constraints of a hand. Although some constraints can be represented as equalities or inequalities, there exist many constraints which cannot be explicitly represented. In this paper, we propose a learning approach to model the hand configuration space directly. The redundancy of the configuration space can be eliminated by finding a lower-dimensional subspace of the original space. Finger motion is modeled in this subspace based on the linear behavior observed in the real motion data collected by a CyberGlove. Employing the constrained motion model, we are able to efficiently capture finger motion from video inputs. Several experiments show that our proposed model is helpful for capturing articulated motion.","PeriodicalId":384462,"journal":{"name":"Proceedings Workshop on Human Motion","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115065907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Realistic synthesis of novel human movements from a database of motion capture examples","authors":"L. Molina-Tanco, A. Hilton","doi":"10.1109/HUMO.2000.897383","DOIUrl":"https://doi.org/10.1109/HUMO.2000.897383","url":null,"abstract":"Presents a system that can synthesize novel motion sequences from a database of motion capture examples. This is achieved through learning a statistical model from the captured data which enables the realistic synthesis of new movements by sampling the original captured sequences. New movements are synthesized by specifying the start and end keyframes. The statistical model identifies segments of the original motion capture data to generate novel motion sequences between the keyframes. The advantage of this approach is that it combines the flexibility of keyframe animation with the realism of motion capture data.","PeriodicalId":384462,"journal":{"name":"Proceedings Workshop on Human Motion","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117066681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Activity monitoring and summarization for an intelligent meeting room","authors":"I. Mikic, Kohsia S. Huang, M. Trivedi","doi":"10.1109/HUMO.2000.897379","DOIUrl":"https://doi.org/10.1109/HUMO.2000.897379","url":null,"abstract":"Intelligent meeting rooms should support efficient and effective interactions among their occupants. In this paper, we present our efforts toward building intelligent environments using a multimodal sensor network of static cameras, active (pan/tilt/zoom) cameras and microphone arrays. Active cameras are used to capture details associated with interesting events. The goal is not only to make a system that supports multi-person interactions in the environment in real time, but also to have the system remember the past, enabling reviews of past events in an intuitive and efficient manner. In this paper, we present the system specifications and major components, integration framework, active network control procedures and experimental studies involving multi-person interactions in an intelligent meeting room environment.","PeriodicalId":384462,"journal":{"name":"Proceedings Workshop on Human Motion","volume":"15 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127652331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Individual recognition from periodic activity using hidden Markov models","authors":"Q. He, C. Debrunner","doi":"10.1109/HUMO.2000.897370","DOIUrl":"https://doi.org/10.1109/HUMO.2000.897370","url":null,"abstract":"We present a method for recognizing individuals from their walking and running gait. The method is based on Hu moments of the motion segmentation in each frame. Periodicity is detected in such a sequence of feature vectors by minimizing the sum of squared differences, and the individual is recognized from the feature vector sequence using hidden Markov models. Comparisons are made to earlier periodicity detection approaches and to earlier individual recognition approaches. Experiments show the successful recognition of individuals (and their gait) in frontoparallel sequences.","PeriodicalId":384462,"journal":{"name":"Proceedings Workshop on Human Motion","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124293611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust head motion computation by taking advantage of physical properties","authors":"Zicheng Liu, Zhengyou Zhang","doi":"10.1109/HUMO.2000.897374","DOIUrl":"https://doi.org/10.1109/HUMO.2000.897374","url":null,"abstract":"Head motion determination is an important problem for many applications including face modeling and tracking. We present a new algorithm to compute the head motion between two views from the correspondences of five feature points (eye corners, mouth corners and nose tip), and zero or more additional image point matches. The algorithm takes advantage of the physical properties of the feature points, such as symmetry, and it significantly improves the robustness of head motion estimation. This is achieved by reducing the number of unknowns to be estimated, thus increasing information redundancy. This idea can be easily extended to any number of feature point correspondences.","PeriodicalId":384462,"journal":{"name":"Proceedings Workshop on Human Motion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130320894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Person counting using stereo","authors":"D. Beymer","doi":"10.1109/HUMO.2000.897382","DOIUrl":"https://doi.org/10.1109/HUMO.2000.897382","url":null,"abstract":"Stores and shopping malls would like to keep track of shopper volume by employing automatic techniques for counting shoppers. Existing approaches instrument doors with infrared beams and count beam interruptions, but this approach cannot resolve groups of people well. We are applying a vision-based approach that detects and tracks people from a stereo camera mounted above a door and pointing down. After applying real-time stereo vision and 3D image reconstruction, the system segments the scene by selecting stereo pixels falling inside a 3D volume of interest, which is placed to capture the heads and torsos of adult shoppers. The main novelties of our approach include (1) remapping the stereo disparities to an orthographic \"occupancy map\", which simplifies person modeling, and (2) tracking people using a Gaussian mixture model. On a test set of 900 enter/exit events in four hours of video, our system has achieved a net counting error rate of just 1.4%.","PeriodicalId":384462,"journal":{"name":"Proceedings Workshop on Human Motion","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124960822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An incremental approach towards automatic model acquisition for human gesture recognition","authors":"M. Walter, A. Psarrou, S. Gong","doi":"10.1109/HUMO.2000.897369","DOIUrl":"https://doi.org/10.1109/HUMO.2000.897369","url":null,"abstract":"The recognition of natural gestures typically involves: the collection of training examples; the generation of models; and the determination of a model that is most likely to have generated an observation sequence. The first step however, the collection of training examples, typically involves manual segmentation and hand labelling of image sequences. This is a time consuming and labour intensive process and is only feasible for a limited set of gestures. To overcome this problem we suggest that gestures can be viewed as a repetitive sequence of atomic movements, similar to phonemes in speech. We present an approach: to automatically segment an arbitrary observation sequence of a natural gesture, using only contextual information derived from the observation sequence itself; and to incrementally extract a set of atomic movements for the automatic model acquisition of natural gestures. Atomic components are modelled as semi-continuous hidden Markov models and the search for repetitive sequences is done using a discrete version of CONDENSATION that is no longer based on factored sampling.","PeriodicalId":384462,"journal":{"name":"Proceedings Workshop on Human Motion","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127920284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human motion tracking system based on skeleton and surface integration model using pressure sensors distribution bed","authors":"T. Harada, Tomomasa Sato, Taketoshi Mori","doi":"10.1109/HUMO.2000.897378","DOIUrl":"https://doi.org/10.1109/HUMO.2000.897378","url":null,"abstract":"Proposes a human motion tracking system based on a full-body model and a pressure-sensor distribution bed. The full-body model consists of a skeleton and a surface model. BVH files are used as the skeleton model that describes a hierarchy of joints and links. Wavefront object files are used as the surface model that describes the geometry of the surface. The bed has 210 pressure sensors that are under the mattress. It can measure the pressure distribution image of a lying person. The lying person's motion is tracked by considering potential energy, momentum and the difference between the measured pressure distribution image and the pressure distribution image that is calculated by the full-body model. Experimental results reveal that the realized system can track not only horizontal motions such as opening and closing legs but also vertical motions such as raising the upper body.","PeriodicalId":384462,"journal":{"name":"Proceedings Workshop on Human Motion","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124713719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}