On a method for Rock Classification using Textural Features and Genetic Optimization
M. B. Valentín, C. Bom, M. Albuquerque, M. Albuquerque, E. Faria, M. Correia, R. Surmas
arXiv: Computer Vision and Pattern Recognition, 2017-04-01. DOI: 10.7437/NT2236-7640/2017.01.003

Abstract: In this work we present a method to classify a set of rock textures based on spectral analysis and the extraction of textural features from the resulting images. Up to 520 features were tested using 4 different filters, and all 31 different filter combinations were verified. The classification process relies on a Naive Bayes classifier. We performed two kinds of optimization: a statistical optimization with covariance-based Principal Component Analysis (PCA) and a genetic optimization, for 10,000 randomly defined samples, achieving a final maximum classification success of 91% against the original 70% success rate (without any optimization or filters). After the optimization, 9 types of features emerged as most relevant.

Kinematic-Layout-aware Random Forests for Depth-based Action Recognition
Seungryul Baek, Zhiyuan Shi, M. Kawade, Tae-Kyun Kim
arXiv: Computer Vision and Pattern Recognition, 2016-07-23. DOI: 10.5244/C.31.13

Abstract: In this paper, we tackle the problem of 24-hour monitoring of patient actions in a ward, such as "stretching an arm out of the bed" or "falling out of the bed", where temporal movements may be subtle or significant. In these scenarios, the relations between scene layouts and body kinematics (skeletons) become important cues for recognizing actions; however, they are hard to obtain at test time. To address this problem, we propose a kinematic-layout-aware random forest that takes the kinematic layout (i.e., layout and skeletons) into account to maximize the discriminative power of depth-image appearance. We integrate the kinematic layout into the split criteria of random forests to guide the learning process by 1) determining the switch to either the depth appearance or the kinematic-layout information, and 2) implicitly closing the gap between the two distributions obtained from the kinematic layout and the appearance when the kinematic layout proves useful. The kinematic-layout information is not required for the test data, and is thus called a "privileged information prior". The proposed method has also been tested in cross-view settings through the use of view-invariant features and by enforcing consistency among synthetic-view data. Experimental evaluations on our new dataset PATIENT, on CAD-60, and on UWA3D (multiview) demonstrate that our method outperforms various state-of-the-art methods.
{"title":"Supervised Incremental Hashing","authors":"B. Ozdemir, Mahyar Najibi, L. Davis","doi":"10.5244/C.30.50","DOIUrl":"https://doi.org/10.5244/C.30.50","url":null,"abstract":"We propose an incremental strategy for learning hash functions with kernels for large-scale image search. Our method is based on a two-stage classification framework that treats binary codes as intermediate variables between the feature space and the semantic space. In the first stage of classification, binary codes are considered as class labels by a set of binary SVMs; each corresponds to one bit. In the second stage, binary codes become the input space of a multi-class SVM. Hash functions are learned by an efficient algorithm where the NP-hard problem of finding optimal binary codes is solved via cyclic coordinate descent and SVMs are trained in a parallelized incremental manner. For modifications like adding images from a previously unseen class, we describe an incremental procedure for effective and efficient updates to the previous hash functions. Experiments on three large-scale image datasets demonstrate the effectiveness of the proposed hashing method, Supervised Incremental Hashing (SIH), over the state-of-the-art supervised hashing methods.","PeriodicalId":185904,"journal":{"name":"arXiv: Computer Vision and Pattern Recognition","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121973203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Oracle Performance for Visual Captioning
L. Yao, Nicolas Ballas, Kyunghyun Cho, John R. Smith, Yoshua Bengio
arXiv: Computer Vision and Pattern Recognition, 2015-11-14. DOI: 10.5244/C.30.141

Abstract: The task of associating images and videos with a natural language description has recently attracted a great deal of attention. Rapid progress has been made in terms of both developing novel algorithms and releasing new datasets. Indeed, state-of-the-art results on some of the standard datasets have been pushed into the regime where it has become more and more difficult to make significant improvements. Instead of proposing new models, this work investigates the possibility of empirically establishing performance upper bounds on various visual captioning datasets without extra data labelling effort or human evaluation. In particular, it is assumed that visual captioning decomposes into two steps: from visual inputs to visual concepts, and from visual concepts to natural language descriptions. An upper bound can then be obtained by assuming the first step is perfect and training only a conditional language model for the second step. We demonstrate the construction of such bounds on MS-COCO, YouTube2Text and LSMDC (a combination of M-VAD and MPII-MD). Surprisingly, despite the imperfect process we used for visual concept extraction in the first step and the simplicity of the language model in the second step, we show that current state-of-the-art models fall short when compared with the learned upper bounds. Furthermore, with such a bound, we quantify several important factors concerning image and video captioning: the number of visual concepts captured by different models, the trade-off between the number of visual elements captured and their accuracy, and the intrinsic difficulty and blessing of different datasets.
{"title":"Face Detection with a 3D Model","authors":"James Picker, Adrian Barbu","doi":"10.1016/B978-0-12-811889-4.00006-3","DOIUrl":"https://doi.org/10.1016/B978-0-12-811889-4.00006-3","url":null,"abstract":"","PeriodicalId":185904,"journal":{"name":"arXiv: Computer Vision and Pattern Recognition","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133788911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}