{"title":"利用黎曼流形对医疗保健视频中的日常活动分类","authors":"Y. Yun, I. Gu","doi":"10.1109/HealthCom.2016.7749487","DOIUrl":null,"url":null,"abstract":"This paper addresses the problem of classifying activities of daily living in video. The proposed method uses a tree structure of two layers, where in each node of the tree there resides a Riemannian manifold that corresponds to different part-based covariance features. In the first layer, activities are classified according to the dynamics of upper body parts. In the second layer, activities are further classified according to the appearance of local image patches at hands in key frames, where the interacting objects are likely to be attached. The novelties of this paper include: (i) characterizing the motion of upper body parts by a covariance matrix of distances between each pair of key points and the orientations of lines that connect them; (ii) describing human-object interaction by the appearance of local regions around hands in key frames that are selected based on the proximity of hands to other key points; (iii) formulating a pairwise geodesics-based kernel for activity classification on Riemannian manifolds under the log-Euclidean metric. Experiments were conducted on a video dataset containing a total number of 426 video events (activities) from 4 classes. 
The proposed method is shown to be effective by achieving high classification accuracy (93.79% on average) and small false alarms (1.99% on average) overall, as well as for each individual class.","PeriodicalId":167022,"journal":{"name":"2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Exploiting riemannian manifolds for daily activity classification in video towards health care\",\"authors\":\"Y. Yun, I. Gu\",\"doi\":\"10.1109/HealthCom.2016.7749487\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper addresses the problem of classifying activities of daily living in video. The proposed method uses a tree structure of two layers, where in each node of the tree there resides a Riemannian manifold that corresponds to different part-based covariance features. In the first layer, activities are classified according to the dynamics of upper body parts. In the second layer, activities are further classified according to the appearance of local image patches at hands in key frames, where the interacting objects are likely to be attached. The novelties of this paper include: (i) characterizing the motion of upper body parts by a covariance matrix of distances between each pair of key points and the orientations of lines that connect them; (ii) describing human-object interaction by the appearance of local regions around hands in key frames that are selected based on the proximity of hands to other key points; (iii) formulating a pairwise geodesics-based kernel for activity classification on Riemannian manifolds under the log-Euclidean metric. Experiments were conducted on a video dataset containing a total number of 426 video events (activities) from 4 classes. 
The proposed method is shown to be effective by achieving high classification accuracy (93.79% on average) and small false alarms (1.99% on average) overall, as well as for each individual class.\",\"PeriodicalId\":167022,\"journal\":{\"name\":\"2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom)\",\"volume\":\"74 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HealthCom.2016.7749487\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HealthCom.2016.7749487","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Exploiting Riemannian Manifolds for Daily Activity Classification in Video towards Health Care
This paper addresses the problem of classifying activities of daily living in video. The proposed method uses a two-layer tree structure, where each node of the tree holds a Riemannian manifold corresponding to a different part-based covariance feature. In the first layer, activities are classified according to the dynamics of upper body parts. In the second layer, activities are further classified according to the appearance of local image patches around the hands in key frames, where interacting objects are likely to be attached. The novelties of this paper include: (i) characterizing the motion of upper body parts by a covariance matrix of the distances between each pair of key points and the orientations of the lines that connect them; (ii) describing human-object interaction by the appearance of local regions around the hands in key frames, selected based on the proximity of the hands to other key points; (iii) formulating a pairwise geodesics-based kernel for activity classification on Riemannian manifolds under the log-Euclidean metric. Experiments were conducted on a video dataset containing a total of 426 video events (activities) from 4 classes. The proposed method is shown to be effective, achieving high classification accuracy (93.79% on average) and a low false-alarm rate (1.99% on average) overall, as well as for each individual class.
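The abstract's core machinery can be illustrated with a short sketch: covariance descriptors are symmetric positive-definite (SPD) matrices, which live on a Riemannian manifold; under the log-Euclidean metric, the geodesic distance between two SPD matrices is the Frobenius norm of the difference of their matrix logarithms, and a Gaussian kernel on that distance yields a valid kernel for classification. This is a minimal, hypothetical illustration of the general technique, not the paper's actual implementation — the feature construction, regularization constant `eps`, and bandwidth `sigma` are assumptions.

```python
import numpy as np

def covariance_descriptor(features, eps=1e-6):
    # Covariance matrix of per-frame feature vectors (rows = observations),
    # e.g. pairwise key-point distances and line orientations as in the paper.
    # A small ridge (eps, an assumed constant) keeps the matrix strictly SPD.
    C = np.cov(features, rowvar=False)
    return C + eps * np.eye(C.shape[0])

def spd_log(M):
    # Matrix logarithm of an SPD matrix via eigendecomposition:
    # log(M) = V diag(log w) V^T, valid because all eigenvalues w are positive.
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    # Geodesic distance under the log-Euclidean metric:
    # d(A, B) = || log(A) - log(B) ||_F
    return np.linalg.norm(spd_log(A) - spd_log(B), "fro")

def geodesic_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel on the pairwise geodesic distance;
    # sigma is a hypothetical bandwidth parameter to be tuned.
    d = log_euclidean_distance(A, B)
    return np.exp(-d ** 2 / (2 * sigma ** 2))
```

Such a kernel can be plugged into any kernel classifier (e.g. an SVM with a precomputed Gram matrix); the log-Euclidean formulation is popular because, unlike the affine-invariant metric, the log-maps can be computed once per descriptor and then compared with ordinary Euclidean operations.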