{"title":"Pose Overcomplete Automatic Registration Method for Video Based Robust Face Recognition","authors":"Yikui Zhai, Hui Ma, Ying Xu","doi":"10.1109/ICMIP.2017.42","DOIUrl":"https://doi.org/10.1109/ICMIP.2017.42","url":null,"abstract":"In ideal conditions with good illumination, pose variation, resolution, and registration, the performance of recognition systems has reached a relatively high level. However, the impact of pose variation and registration factors has not been well solved. In practical systems, a single face with manual registration is often adopted in the registration process. But a single registered face image has limitations, and manual registration is not convenient for users. Recognition performance based on single face image matching is also inevitably disturbed and restricted. In practical situations, more than one face image can be collected. Multiple images of the same person can capture more intra-class variation information than a single image. Feature information from multiple images is introduced into recognition matching, which helps to improve the accuracy of face recognition. In this paper, a pose overcomplete automatic registration method is proposed to solve this problem in the registration process. In the proposed method, we estimate the pose automatically in real time by utilizing the detected landmark information, with pose-variation face images stored as matching templates. Experimental results show that the proposed method can not only overcome the influence of pose variation in recognition, but can also solve the problem of non-ideal pose registration, thus improving the recognition accuracy in practical face recognition systems.","PeriodicalId":227455,"journal":{"name":"2017 2nd International Conference on Multimedia and Image Processing (ICMIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124158824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Framing Foundational Taxonomy of Multimedia Tangibility in Education Setting for Children","authors":"Chau Kien Tsong, Z. Samsudin, W. Jaafar, W. Yahaya","doi":"10.1109/ICMIP.2017.21","DOIUrl":"https://doi.org/10.1109/ICMIP.2017.21","url":null,"abstract":"The paper describes a new foundational taxonomy of tangible multimedia, a genre of multimedia which deploys the use of tangible artefacts, based on Piaget's theory of cognitive development. A taxonomy for setting tangibility in the multimedia realm is required as a grounding guideline stipulating a consistent way of using tangible artefacts within a multimedia context. This paper begins the discussion by examining facets of the tangibility dimension by way of a literature review. The taxonomy suggests that multimedia tangibility can be achieved on at least six facets, namely the abdication, loose, naturalness, link semantics, cardinality, and comprehensive mapping facets. The discussion is followed by an elaboration of each facet's associated functions. The paper concludes with a brief description of research results, which revealed the efficacy of tangible multimedia compliant with some of the facets of the taxonomy in elevating learners' learning performance.","PeriodicalId":227455,"journal":{"name":"2017 2nd International Conference on Multimedia and Image Processing (ICMIP)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114714714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Selection of the Best Despeckle Filter of Ultrasound Images","authors":"Ghada Nady Hussien Abd El-Gwad, Yasser M. K. Omar","doi":"10.1109/ICMIP.2017.46","DOIUrl":"https://doi.org/10.1109/ICMIP.2017.46","url":null,"abstract":"Ultrasound imaging is considered one of the most widely used medical imaging modalities, even though it suffers from speckle noise. While there are different despeckling techniques to remove noise, none of them is efficient on all images. In addition, the physician is not able to select the best technique manually. The four despeckling techniques are the linear filter, non-linear filter, diffusion filter, and wavelet filter. This paper applies these techniques to a specific dataset. The results are evaluated based on expert opinion. Moreover, a comparison is conducted between the expert opinion and the features extracted from both the original and despeckled images. We apply parallel coordinates to visualize the extracted features before and after applying the best despeckling technique, in order to identify the dominant features that lead to choosing the suitable technique. The results show that there are dominant features such as contrast, correlation, entropy, mean, and variance.","PeriodicalId":227455,"journal":{"name":"2017 2nd International Conference on Multimedia and Image Processing (ICMIP)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123647095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Approach Based on Multi-feature for Cooperative Target Detection","authors":"Guo-qiang Sun, X. Hao, Xiaodong Zhang, Zhenjie Zhang","doi":"10.1109/ICMIP.2017.22","DOIUrl":"https://doi.org/10.1109/ICMIP.2017.22","url":null,"abstract":"Cooperative targets have been widely applied in vision-based navigation for unmanned aerial vehicles. To overcome the target's susceptibility to the surroundings, a fast and accurate detection approach based on multiple features is put forward. Firstly, a pyramid image preprocessing method is utilized to eliminate some noise. Then image features consisting of image contours, Hu moment invariants, and FAST corners are selected to judge and identify the cooperative object. Moreover, the algorithm is improved and accelerated in order to meet the real-time requirement. Experiments show that the proposed approach has better adaptability and robustness for cooperative target recognition under different scales, different angles, and environmental disturbance.","PeriodicalId":227455,"journal":{"name":"2017 2nd International Conference on Multimedia and Image Processing (ICMIP)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124497095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Image Quality Assessment Metric Based on Contourlet and SVD","authors":"Shuang Liang, Lei Sun","doi":"10.1109/ICMIP.2017.1","DOIUrl":"https://doi.org/10.1109/ICMIP.2017.1","url":null,"abstract":"The purpose of research on image quality assessment (IQA) is to find proper methods to measure the quality of images. The subjective evaluation score given by the human visual system (HVS) is usually used as the standard for comparison. So, the more similar the measuring process is to the HVS, the better the result should be. Motivated by this idea, the contourlet transform and singular value decomposition are used in this article to establish an IQA metric, because the contourlet transform has characteristics similar to the HVS. Our full-reference IQA metric is called CT-SVD. The new metric is tested on the image database TID2013 and its performance is compared with those of existing metrics. It is shown that our CT-SVD metric achieves greater consistency with subjective image quality assessment.","PeriodicalId":227455,"journal":{"name":"2017 2nd International Conference on Multimedia and Image Processing (ICMIP)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127024904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scale-Adaptive Regression Position Prediction Tracking","authors":"Xiancai Zhang, Zhuang Miao, Yang Li, Yulong Xu, Jiabao Wang, Bo Zhou, Gang Tao","doi":"10.1109/ICMIP.2017.19","DOIUrl":"https://doi.org/10.1109/ICMIP.2017.19","url":null,"abstract":"Traditional kernelized correlation filter tracking methods use the target position in the current frame to estimate the moving target's initial position in the next frame. For a fast-moving target, these methods lose the target easily. To cope with this problem, a novel scale-adaptive regression position prediction tracking approach is proposed. This algorithm employs a regression prediction method to predict the initial position in the next frame. Then the kernelized correlation filter method is utilized to obtain the final target position. To further improve accuracy and robustness, we exploit a scale pyramid model to estimate the target scale. Experimental results over 10 benchmark sequences demonstrate that the proposed approach performs favorably against state-of-the-art tracking methods.","PeriodicalId":227455,"journal":{"name":"2017 2nd International Conference on Multimedia and Image Processing (ICMIP)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127393424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face Recognition Based on Circularly Symmetrical Gabor Transforms and Collaborative Representation","authors":"Y. Sun, Huiyuan Wang","doi":"10.1109/ICMIP.2017.32","DOIUrl":"https://doi.org/10.1109/ICMIP.2017.32","url":null,"abstract":"Compared to the traditional Gabor transform, the circularly symmetrical Gabor transform (CSGT) not only retains the characteristics of local and multi-resolution analysis, but also has the remarkable advantages of less redundancy and rotational invariance. Simultaneously, the collaborative representation-based classification with regularized least square (CRC-RLS) overcomes the shortcoming of the high computational complexity in the sparse representation-based classification (SRC). However, both classification algorithms still use the global features of the image, ignoring the importance of local features in the face images. In this paper, the face images are first mapped onto the CSGT domain, and then the amplitude images are chosen as the sample images. Finally, CRC is used to classify different faces. The experimental results on AR, FERET and Extended Yale B face databases show that the proposed algorithm achieves higher recognition rates and better robustness.","PeriodicalId":227455,"journal":{"name":"2017 2nd International Conference on Multimedia and Image Processing (ICMIP)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132165574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Light Weight Solution for Stem and Leaf Classification in Tea Industry, Hybrid Color Space for Black Tea Classification","authors":"A. Y. Dissanayake, A. Priyadarshana, B. Jayawardhana, L.A.T.D. Chathurika, N. Karunasinghe","doi":"10.1109/ICMIP.2017.67","DOIUrl":"https://doi.org/10.1109/ICMIP.2017.67","url":null,"abstract":"This research proposes a new approach for stem and leaf classification in the tea industry by deriving new color components; it is simpler to implement, higher in accuracy, and lower in cost than multilayer neural network approaches. A set of 270 tea stem and leaf samples was used to achieve 95% accuracy, and the images were captured using a DSLR Nikon D3100 camera under controlled lighting conditions. This paper includes an algorithm to pre-process images using image processing algorithms such as the Otsu algorithm for threshold detection and the Moore-Neighbor tracing algorithm for contour detection. Furthermore, a solution is proposed to select the color components from existing color spaces that have the highest discriminating power, derive new color components by applying feature selection algorithms, and calculate a classification threshold and accuracy for each feature. The threshold values of the classification points are used to differentiate stems and leaves as a single-layer neural network, which is more lightweight than a multi-layer neural network and also gives higher accuracy.","PeriodicalId":227455,"journal":{"name":"2017 2nd International Conference on Multimedia and Image Processing (ICMIP)","volume":"395 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122996821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relative Radiometric Correction of Imagery Based on the Side-Slither Method","authors":"Yan Li, Bingxian Zhang, H. He","doi":"10.1109/ICMIP.2017.50","DOIUrl":"https://doi.org/10.1109/ICMIP.2017.50","url":null,"abstract":"Relative radiometric calibration is essential to obtain high-quality remote sensing images. An efficient way to bypass the quest for uniformity is to use the satellite's agility to align the ground projection of the scanline with the ground velocity. This unusual viewing principle (side-slither) allows all the detectors to view the same landscape. A relative radiometric calibration model based on side-slither data was established. Thus, non-linear normalization coefficients can be computed by a histogram matching method and the laboratory radiometric model. Finally, the validity of this method was verified with GF-1 side-slither calibration data.","PeriodicalId":227455,"journal":{"name":"2017 2nd International Conference on Multimedia and Image Processing (ICMIP)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127591217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Saliency Detection Based on Disperse Degree of Color","authors":"Jianshe Ma, Libo Guo, Ping Su","doi":"10.1109/ICMIP.2017.60","DOIUrl":"https://doi.org/10.1109/ICMIP.2017.60","url":null,"abstract":"Saliency detection aims to focus attention on the important parts of a scene, which is an excellent ability of the human visual system. In this paper, we present a saliency detection model based on the principle that pixels belonging to the background are more dispersed than those of the target area. Color contrast in different channels is employed to classify the pixels. Our method outperformed five state-of-the-art saliency detection algorithms in a ground-truth evaluation experiment, achieving a good precision-recall curve, which indicates that the disperse degree of color is an important feature in saliency detection.","PeriodicalId":227455,"journal":{"name":"2017 2nd International Conference on Multimedia and Image Processing (ICMIP)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127670594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}