Facial motion analysis using template matching
Noureddine Cherabit, A. Djeradi, F. Chelali
DOI: 10.1109/CCSSP49278.2020.9151797
2020 1st International Conference on Communications, Control Systems and Signal Processing (CCSSP), May 2020
Citations: 0
Abstract
Motion is a difficult issue in video sequence analysis, since it describes an object in three dimensions while the images are defined by the projection of a 3D scene onto a 2D plane. An important problem in tracking a point between two successive images is that a single pixel cannot be tracked on its own: its intensity value can change due to noise and can be confused with that of adjacent pixels. As a result, it is often impossible to determine the pixel's location in the next frame from local information alone. In this article, we are interested in facial motion tracking for talking faces. When a person talks, points on the face move and their surrounding intensities change in a complex way. We therefore present a comparative study of three methods for facial motion tracking, which estimate the trajectory of each facial point on a talking face by analyzing template intensity levels in two successive images. The first method is based on block matching with the normalized sum of squared differences (NSSD), the second on normalized cross-correlation (NCC), and the third on the Kanade–Lucas–Tomasi (KLT) tracker. The results are compared on the basis of the tracking error along the trajectory in a video, measured as the root-mean-square intensity difference between the current and the last template.
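To make the two block-matching scores concrete, the following is a minimal sketch (not the authors' implementation) of NSSD and NCC template matching over a local search window, as one might use to track a facial point between two frames. All function names, the search radius, and the zero-mean/unit-variance normalization are illustrative assumptions.

```python
import numpy as np

def nssd(patch, template):
    """Normalized sum of squared differences: 0 means a perfect match."""
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    t = (template - template.mean()) / (template.std() + 1e-8)
    return float(np.sum((p - t) ** 2))

def ncc(patch, template):
    """Normalized cross-correlation: 1 means a perfect match."""
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    t = (template - template.mean()) / (template.std() + 1e-8)
    return float(np.sum(p * t) / p.size)

def track_point(frame, template, prev_xy, search=8):
    """Scan a (2*search+1)^2 window around prev_xy in the next frame
    and return the top-left corner (x, y) of the best NSSD match."""
    h, w = template.shape
    px, py = prev_xy
    best_score, best_xy = np.inf, prev_xy
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = py + dy, px + dx
            # skip candidate windows that fall outside the frame
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            score = nssd(frame[y:y + h, x:x + w], template)
            if score < best_score:
                best_score, best_xy = score, (x, y)
    return best_xy
```

NCC could be substituted by maximizing `ncc` instead of minimizing `nssd`; the KLT tracker differs in that it solves for the displacement analytically from image gradients rather than exhaustively scanning a window.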