MAED '14 | Pub Date: 2014-11-07 | DOI: 10.1145/2661821.2661824
M. Torres, G. Qiu
{"title":"Crowd-sourcing Applied to Photograph-Based Automatic Habitat Classification","authors":"M. Torres, G. Qiu","doi":"10.1145/2661821.2661824","DOIUrl":"https://doi.org/10.1145/2661821.2661824","url":null,"abstract":"Habitat classification is a crucial activity for monitoring environmental biodiversity. To date, manual methods, which are laborious, time-consuming and expensive, remain the most successful alternative. Most automatic methods use remotely sensed imagery, which lacks the necessary level of detail. Previous studies have treated automatic habitat classification as an image-annotation problem and developed a framework that uses ground-taken photographs, feature extraction and a random-forest-based classifier to automatically annotate unseen photographs with their habitats. This paper builds on that framework with two new contributions that explore the benefits of applying crowd-sourcing methodologies to automatically collect, annotate and classify habitats. First, we use Geograph, a crowd-sourcing photograph website, to collect a larger geo-referenced database of ground-taken photographs, with over 3,000 photographs and 11,000 habitats. We tested the original framework on this much larger database and show that it maintains its success rate. Second, we use a crowd-sourcing mechanism to obtain higher-level semantic features, designed to overcome the limitations of visual features in Fine-Grained Visual Categorization (FGVC) problems such as habitat classification. 
Results show that the inclusion of these features improves the performance of a previous framework, particularly in terms of precision.","PeriodicalId":250753,"journal":{"name":"MAED '14","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122146568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
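The annotation pipeline this abstract describes (visual features fed to a random-forest classifier that assigns habitat labels to unseen photographs) can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: feature extraction is stubbed with random vectors, and scikit-learn's `RandomForestClassifier` stands in for whatever implementation the framework actually uses.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_photos, n_features, n_habitats = 300, 64, 5

# Stand-in for visual features extracted from ground-taken photographs
X = rng.normal(size=(n_photos, n_features))
# One habitat label per photograph (the real task may be multi-label)
y = rng.integers(0, n_habitats, size=n_photos)

# Random-forest classifier, as in the framework the paper builds on
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:200], y[:200])
pred = clf.predict(X[200:])  # annotate "unseen" photographs
```

On real data the feature vectors would come from the framework's visual (and, in this paper, crowd-sourced semantic) feature extraction rather than a random generator.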
MAED '14 | Pub Date: 2014-11-07 | DOI: 10.1145/2661821.2661825
Roman Fedorov, P. Fraternali, M. Tagliasacchi
{"title":"Mountain Peak Identification in Visual Content Based on Coarse Digital Elevation Models","authors":"Roman Fedorov, P. Fraternali, M. Tagliasacchi","doi":"10.1145/2661821.2661825","DOIUrl":"https://doi.org/10.1145/2661821.2661825","url":null,"abstract":"We present a method for the identification of mountain peaks in geo-tagged photos. The key tenet is to perform an edge-based matching between the visual content of each photo and a terrain view synthesized from a Digital Elevation Model (DEM). The latter is generated as if a virtual observer were located at the coordinates indicated by the geo-tag. The key property of the method is its ability to estimate the position of mountain peaks highly accurately even when only a coarse-resolution DEM, sampled at a spatial resolution between 30m and 90m, is available for the corresponding geographical area. This is the case for the publicly available DEMs that cover almost the entire surface of the Earth (such as SRTM CGIAR and ASTER GDEM). The method is fully unsupervised, so it can be applied to the analysis of the massive amounts of user-generated content available, e.g., on Flickr and Panoramio. We evaluated our method on a dataset of manually annotated images of mountain landscapes containing peaks of the Italian and Swiss Alps. Our results show that it is possible to accurately identify the peaks in 75.0% of the cases. 
This result increases to 81.6% when considering only photos with mountain slopes far from the observer.","PeriodicalId":250753,"journal":{"name":"MAED '14","volume":"03 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129271662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
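A toy version of the core matching idea, synthesizing a panoramic horizon profile from a DEM around a virtual observer and aligning a photo-derived skyline to it by circular cross-correlation, might look like the sketch below. The DEM is synthetic, the function names are illustrative assumptions, and the real method matches full edge maps rather than a 1-D skyline.

```python
import numpy as np

def horizon_profile(dem, obs, cell=90.0, n_az=360):
    """Maximum elevation angle seen from cell `obs` (row, col) in each azimuth bin."""
    h0 = dem[obs]
    rows, cols = np.indices(dem.shape)
    dy, dx = (rows - obs[0]) * cell, (cols - obs[1]) * cell
    dist = np.hypot(dx, dy)
    mask = dist > 0  # exclude the observer's own cell
    # Bin each cell by azimuth (north = decreasing row index) and keep the
    # maximum elevation angle per bin
    az = (np.degrees(np.arctan2(dx[mask], -dy[mask])) % 360).astype(int) % n_az
    elev = np.degrees(np.arctan2(dem[mask] - h0, dist[mask]))
    prof = np.full(n_az, -90.0)
    np.maximum.at(prof, az, elev)
    return prof

# Synthetic coarse DEM (90 m cells) with a single prominent "peak"
dem = np.zeros((101, 101))
dem[20, 50] = 3000.0
prof = horizon_profile(dem, (50, 50))

# A "photo" skyline, here simply the synthetic profile rotated by 7 degrees;
# circular cross-correlation recovers the azimuth shift that best aligns them
photo = np.roll(prof, 7)
scores = [np.dot(prof, np.roll(photo, -s)) for s in range(360)]
best_shift = int(np.argmax(scores))
```

Once the best alignment is found, peak positions in the rendered view can be transferred onto the photo, which is the step the paper evaluates.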
MAED '14 | Pub Date: 2014-11-07 | DOI: 10.1145/2661821.2661827
K. Blanc, D. Lingrand, F. Precioso
{"title":"Fish Species Recognition from Video using SVM Classifier","authors":"K. Blanc, D. Lingrand, F. Precioso","doi":"10.1145/2661821.2661827","DOIUrl":"https://doi.org/10.1145/2661821.2661827","url":null,"abstract":"Building detailed knowledge of biodiversity, the geographical distribution and the evolution of living species, is essential for sustainable development and for the preservation of that biodiversity. Massive databases of underwater surveillance video have recently been made available to support the design of algorithms for fish identification. However, these video datasets are rather poor in terms of resolution and quite challenging, both because of natural phenomena such as murky water and seaweed moving with the current, and because of the huge amount of data to be processed. We have designed a processing chain based on background segmentation, keypoint selection at an adaptive scale, description with OpponentSIFT, and learning of each species with a binary linear Support Vector Machine (SVM) classifier. Our algorithm has been evaluated in the context of our participation in the Fish task of the LifeCLEF 2014 challenge. Compared to the baseline designed by the LifeCLEF challenge organizers, our approach reaches a better precision but a worse recall. Our performance in terms of species recognition (based only on the correctly detected bounding boxes) is comparable to the baseline, but our bounding boxes are often too large and our score is penalized as a result. 
Our results are nonetheless encouraging.","PeriodicalId":250753,"journal":{"name":"MAED '14","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132619649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
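The final stage of the chain, one binary linear SVM per species over region descriptors, can be sketched as below. The OpponentSIFT bag-of-features descriptors are stubbed with synthetic Gaussian vectors, and scikit-learn's `LinearSVC` (which trains one-vs-rest binary linear SVMs) is an assumed stand-in for the authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_species, per_class, dim = 4, 50, 128

# Stand-in for OpponentSIFT-based descriptors of detected fish regions,
# one well-separated Gaussian cluster per species
X = np.vstack([rng.normal(loc=k, size=(per_class, dim)) for k in range(n_species)])
y = np.repeat(np.arange(n_species), per_class)

# One binary linear SVM per species (one-vs-rest), as in the processing chain
clf = LinearSVC(dual=False).fit(X, y)
acc = clf.score(X, y)
```

In the real pipeline the descriptors come from background-segmented video frames, and recognition quality also depends on how tight the detected bounding boxes are, which is exactly where the abstract reports losing score.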
MAED '14 | Pub Date: 2014-11-07 | DOI: 10.1145/2661821.2661822
S. Palazzo, Francesca Murabito
{"title":"Fish Species Identification in Real-Life Underwater Images","authors":"S. Palazzo, Francesca Murabito","doi":"10.1145/2661821.2661822","DOIUrl":"https://doi.org/10.1145/2661821.2661822","url":null,"abstract":"Kernel descriptors are finite-dimensional vectors extracted from image patches and designed so that their dot product approximates a nonlinear kernel whose projection feature space would be high-dimensional. Recently, they have been successfully used for fine-grained object recognition. In this work we study the application of two such descriptors, EMK and KDES (designed as kernelized generalizations of the common bag-of-words and histogram-of-gradients approaches, respectively), to the MAED 2014 Fish Classification task, consisting of about 50,000 underwater images from 10 fish species.","PeriodicalId":250753,"journal":{"name":"MAED '14","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134484022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
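EMK and KDES are specific constructions, but the underlying principle, a finite-dimensional feature map whose dot product approximates a nonlinear kernel, can be illustrated with random Fourier features for the RBF kernel. This standard technique is used here purely as an analogy for the "dot product approximates a kernel" idea, not as the descriptors from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d, D, gamma = 16, 4000, 0.05  # input dim, map dim, RBF bandwidth

# Random Fourier features: with w ~ N(0, 2*gamma*I) and b ~ U[0, 2*pi],
# E[phi(x) . phi(y)] = exp(-gamma * ||x - y||^2)
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)

def phi(x):
    # Finite-dimensional map whose dot product approximates the RBF kernel
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
approx = phi(x) @ phi(y)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
```

The appeal, for kernel descriptors as for this sketch, is that a linear classifier on the explicit features behaves like a kernelized classifier while remaining cheap to train on tens of thousands of images.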
MAED '14 | Pub Date: 2014-11-07 | DOI: 10.1145/2661821.2661823
L. Fortuna, Silvia Nunnari, A. Gallo
{"title":"A Typical Day Based Approach To Detrend Solar Radiation Time Series","authors":"L. Fortuna, Silvia Nunnari, A. Gallo","doi":"10.1145/2661821.2661823","DOIUrl":"https://doi.org/10.1145/2661821.2661823","url":null,"abstract":"In this paper we propose a technique for identifying the deterministic hourly-average component of a solar radiation time series over a whole year, based on data measured at a given site of interest. The technique rests on the identification of a so-called typical day model and of how its parameters vary throughout the year. It is illustrated step by step through an appropriate case study consisting of the identification of the solar radiation model at the Aberdeen (Ohio, USA) recording station. The quality of the identified model is objectively assessed using a set of global performance indexes including Bias, MAE, RMSE, the index of agreement and the true-predicted correlation coefficient. Furthermore, the possibility of using the identified model as a prediction model is considered, and its performance is assessed with an appropriate set of indices measuring its ability to correctly predict solar radiation episodes that exceed a predefined threshold. The results of the reported case study show the effectiveness of the proposed approach.","PeriodicalId":250753,"journal":{"name":"MAED '14","volume":"357 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133810745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
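A minimal sketch of the detrending idea: estimate a "typical day" as the hour-by-hour average over the year, subtract it as the deterministic component, and score the residual with Bias, MAE and RMSE. This toy uses synthetic data and deliberately ignores the seasonal variation of the typical-day parameters that the paper actually models.

```python
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(24)

# Toy hourly solar radiation: a daytime sine bump (W/m^2) plus Gaussian noise,
# repeated over 365 days
clear = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None) * 800
obs = np.tile(clear, 365) + rng.normal(scale=50, size=24 * 365)

# "Typical day": hour-by-hour mean over the year, taken as the deterministic
# component; the residual is the detrended series
typical = obs.reshape(365, 24).mean(axis=0)
pred = np.tile(typical, 365)
resid = obs - pred

# Global performance indexes from the abstract (a subset)
bias = resid.mean()
mae = np.mean(np.abs(resid))
rmse = np.sqrt(np.mean(resid ** 2))
```

With the deterministic component removed, what remains is the stochastic part of the series, which is what threshold-exceedance prediction would then have to deal with.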