I3GO+ at RICATIM 2017: A semi-supervised approach to determine the relevance between images and text-annotations
José Ortiz-Bejar, Eric Sadit Tellez, Mario Graff, Sabino Miranda-Jiménez, Jesus Ortiz-Bejar, Daniela Moctezuma, Claudia N. Sánchez
2017 IEEE International Autumn Meeting on Power, Electronics and Computing (ROPEC), November 2017. DOI: 10.1109/ROPEC.2017.8261691
Abstract
In this manuscript, we describe our solution for the RedICA Text-Image Matching (RICATIM) challenge. The challenge frames image-text matching as a binary classification problem: given an image-text pair, a valid solution must determine whether the relation between the image and the text is valid. The RICATIM dataset contains a large number of examples, which we use to train an algorithm that effectively learns the underlying relations. Vision and language classifiers must deal with high-dimensional data; consequently, traditional classification methods require longer training times and tend to perform poorly. To tackle the RICATIM challenge, we introduce a novel approach that improves classification based on the k-nearest neighbor (KNN) classifier. Our proposal relies on solving the k-centers problem with the Farthest First Traversal algorithm, combined with a kernel function. These techniques effectively reduce the dimensionality of the data while improving the performance of the KNN classifier. We provide an experimental comparison showing that our approach significantly improves over the state of the art.
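The pipeline described in the abstract (Farthest First Traversal to select k centers, a kernel function mapping each sample to its similarities to those centers, and a KNN classifier on the reduced representation) can be sketched as below. This is a minimal illustration of the general technique under stated assumptions, not the authors' implementation: the Gaussian kernel, the parameter choices (k, gamma, n_neighbors), and the placeholder data loader are assumptions made for the example.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def farthest_first_traversal(X, k, seed=None):
    """Greedy 2-approximation to the k-centers problem: repeatedly pick
    the point farthest from the centers selected so far."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]            # arbitrary starting center
    dists = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))                  # farthest point from current centers
        centers.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return X[centers]

def kernel_features(X, centers, gamma=1.0):
    """Map each sample to its Gaussian-kernel similarity to every center,
    turning high-dimensional inputs into k-dimensional feature vectors."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Hypothetical usage on precomputed image-text pair features X and binary labels y:
# X, y = load_ricatim_features()                     # placeholder, not a real loader
# centers = farthest_first_traversal(X, k=64)
# knn = KNeighborsClassifier(n_neighbors=5).fit(kernel_features(X, centers), y)
```

The design intuition is that distances to a small set of well-spread centers summarize the geometry of the data, so KNN operates on a k-dimensional representation instead of the original high-dimensional vision and language features.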