MAED '12 Latest Publications

An environmental search engine based on interactive visual classification
MAED '12 Pub Date : 2012-11-02 DOI: 10.1145/2390832.2390845
S. Vrochidis, H. Bosch, A. Moumtzidou, Florian Heimerl, T. Ertl, Y. Kompatsiaris
{"title":"An environmental search engine based on interactive visual classification","authors":"S. Vrochidis, H. Bosch, A. Moumtzidou, Florian Heimerl, T. Ertl, Y. Kompatsiaris","doi":"10.1145/2390832.2390845","DOIUrl":"https://doi.org/10.1145/2390832.2390845","url":null,"abstract":"Environmental conditions play a very important role in human life. Nowadays, environmental data and measurements are freely made available through dedicated web sites, services and portals. This work deals with the problem of discovering such web resources by proposing an interactive domain-specific search engine, which is built on top of a general purpose search engine, employing supervised machine learning and advanced interactive visualization techniques. Our experiments and the evaluation show that interactive classification based on visualization improves the performance of the system.","PeriodicalId":173175,"journal":{"name":"MAED '12","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114455194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
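
As a rough, hypothetical illustration of the interactive supervised classification described in the abstract above (a classifier over search-engine results that is retrained from user feedback), the Python sketch below uses TF-IDF features and logistic regression. The class name, feature choice and model are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of interactive relevance classification over search results.
# Not the authors' system: it only illustrates retraining a text classifier from
# user-provided labels, which is what the abstract describes at a high level.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

class InteractiveResultClassifier:
    def __init__(self):
        self.vectorizer = TfidfVectorizer(stop_words="english")
        self.model = LogisticRegression(max_iter=1000)
        self.texts, self.labels = [], []   # accumulated user feedback

    def add_feedback(self, snippet, is_environmental):
        """Store one user judgement (True = relevant environmental resource)."""
        self.texts.append(snippet)
        self.labels.append(int(is_environmental))

    def retrain(self):
        """Refit the classifier; needs at least one positive and one negative label."""
        X = self.vectorizer.fit_transform(self.texts)
        self.model.fit(X, self.labels)

    def rank(self, result_snippets):
        """Re-rank raw search results by predicted probability of relevance."""
        X = self.vectorizer.transform(result_snippets)
        scores = self.model.predict_proba(X)[:, 1]
        return sorted(zip(result_snippets, scores), key=lambda p: -p[1])
```

In use, a few labelled snippets per feedback round are accumulated via add_feedback, retrain is called, and the new ranking is shown to the user for the next round.
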
Plant leaves morphological categorization with shared nearest neighbours clustering
MAED '12 Pub Date : 2012-11-02 DOI: 10.1145/2390832.2390842
Amel Hamzaoui, A. Joly, H. Goëau
{"title":"Plant leaves morphological categorization with shared nearest neighbours clustering","authors":"Amel Hamzaoui, A. Joly, H. Goëau","doi":"10.1145/2390832.2390842","DOIUrl":"https://doi.org/10.1145/2390832.2390842","url":null,"abstract":"This paper presents an original experiment aimed at evaluating if state-of-the-art visual clustering techniques are able to automatically recover morphological classifications built by the botanists themselves. The clustering phase is based on a recent Shared-Nearest Neighbours (SNN) clustering algorithm, which allows to combine effectively heterogeneous visual information sources at the category level. Each resulting cluster is associated with an optimal selection of visual similarities, allowing to discover diverse and meaningful morphological categories even if we use a blind set of visual sources as input. Experiments are performed on ImageCLEF 2011 plant identification dataset, that was specifically enriched in this work with morphological attributes tags (annotated by expert botanists). The results are very promising, since all clusters discovered automatically can be easily matched to one node of the morphological tree built by the botanists.","PeriodicalId":173175,"journal":{"name":"MAED '12","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128644056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
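
The core idea behind shared-nearest-neighbours clustering is that two items are similar when their k-nearest-neighbour lists overlap strongly. The sketch below computes such an SNN similarity matrix for a single feature source; it is a generic illustration and does not reproduce the paper's multi-source selection of visual similarities.

```python
# Generic SNN similarity sketch: count how many of the k nearest neighbours two
# items have in common. Clusters can then be formed by thresholding this matrix,
# e.g. keeping pairs that share more than k/2 neighbours and taking connected
# components. Parameter values here are illustrative assumptions.
import numpy as np

def snn_similarity_matrix(features, k=10):
    """Return an n x n matrix counting shared k-NN between every pair of items."""
    features = np.asarray(features, dtype=float)
    n = len(features)
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    # k nearest neighbours of each item (index 0 is the item itself, so skip it)
    knn = np.argsort(dists, axis=1)[:, 1:k + 1]
    neighbour_sets = [set(row) for row in knn]
    sim = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            sim[i, j] = sim[j, i] = len(neighbour_sets[i] & neighbour_sets[j])
    return sim
```
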
Event detection in underwater domain by exploiting fish trajectory clustering
MAED '12 Pub Date : 2012-11-02 DOI: 10.1145/2390832.2390840
S. Palazzo, C. Spampinato, Cigdem Beyan
{"title":"Event detection in underwater domain by exploiting fish trajectory clustering","authors":"S. Palazzo, C. Spampinato, Cigdem Beyan","doi":"10.1145/2390832.2390840","DOIUrl":"https://doi.org/10.1145/2390832.2390840","url":null,"abstract":"In this paper we propose a clustering-based approach for the analysis of fish trajectories in real-life unconstrained underwater videos, with the purpose of detecting behavioural events; in such a context, both video quality limitations and the motion properties of the targets make the trajectory analysis task for event detection extremely difficult. Our approach is based on the k-means clustering algorithm and allows to group similar trajectories together, thus providing a simple way to detect the most used paths and the most visited areas, and, by contrast, to identify trajectories which do not fall into any common clusters, therefore representing unusual behaviours. Our results show that the proposed approach is able to separate trajectory patterns and to identify those matching predefined behaviours or which are more likely to be associated to new/anomalous behaviours.","PeriodicalId":173175,"journal":{"name":"MAED '12","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127292970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
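
A minimal sketch of the general pipeline the abstract outlines, assuming trajectories are resampled to fixed-length vectors before k-means: trajectories far from every cluster centroid are flagged as candidates for unusual behaviour. The feature encoding, cluster count and outlier threshold are assumptions, not the paper's exact choices.

```python
# Illustrative k-means trajectory clustering with distance-based outlier flagging.
# Not the paper's pipeline: feature encoding and thresholds are assumed.
import numpy as np
from sklearn.cluster import KMeans

def resample(traj, n_points=20):
    """Linearly resample a trajectory (sequence of (x, y) points) to n_points."""
    traj = np.asarray(traj, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.concatenate([np.interp(t_new, t_old, traj[:, d]) for d in range(2)])

def cluster_and_flag(trajectories, n_clusters=5, outlier_quantile=0.95):
    """Cluster trajectories; return cluster labels and indices of unusual ones."""
    X = np.stack([resample(t) for t in trajectories])
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    threshold = np.quantile(dists, outlier_quantile)
    unusual = np.where(dists > threshold)[0]   # candidate anomalous behaviours
    return km.labels_, unusual
```
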
Texture recognition for frog identification
MAED '12 Pub Date : 2012-11-02 DOI: 10.1145/2390832.2390839
F. Cannavò, G. Nunnari, I. Kale, F. Tek
{"title":"Texture recognition for frog identification","authors":"F. Cannavò, G. Nunnari, I. Kale, F. Tek","doi":"10.1145/2390832.2390839","DOIUrl":"https://doi.org/10.1145/2390832.2390839","url":null,"abstract":"This paper describes a visual processing technique for automatic frog (Xenopus Laevis sp.) localization and identification. The problem of frog identification is to process and classify an unknown frog image to determine the identity which is recorded previously on an image database. The frog skin pattern (i.e. texture) provides a unique feature for identification. Hence, the study investigates three different kind of features (i.e. Gabor filters, granulometry, threshold set compactness) to extract texture information. The classifier is built on nearest neighbor principle; it assigns the query feature to the database feature which has the minimum distance. Hence, the study investigates different distance measures and compares their performance. The detailed results show that the most successful feature and distance measure is granulometry and weighted L1 norm for the frog identification using skin texture features.","PeriodicalId":173175,"journal":{"name":"MAED '12","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121851981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
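
The matching step reported as most successful (granulometry features compared with a weighted L1 norm) reduces to nearest-neighbour search under a weighted distance. A minimal sketch follows, with the feature extraction omitted and the weight vector assumed to be given.

```python
# Nearest-neighbour identification under a weighted L1 distance.
# Feature extraction (granulometry) and the weight vector are not specified in
# the abstract, so they are treated here as given inputs.
import numpy as np

def weighted_l1(a, b, w):
    """Weighted L1 distance between two feature vectors."""
    return np.sum(w * np.abs(np.asarray(a) - np.asarray(b)))

def identify(query_feat, db_feats, db_ids, weights):
    """Return the identity whose stored feature is closest to the query."""
    dists = [weighted_l1(query_feat, f, weights) for f in db_feats]
    return db_ids[int(np.argmin(dists))]
```
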
Quantitative performance analysis of object detection algorithms on underwater video footage
MAED '12 Pub Date : 2012-11-02 DOI: 10.1145/2390832.2390847
I. Kavasidis, S. Palazzo
{"title":"Quantitative performance analysis of object detection algorithms on underwater video footage","authors":"I. Kavasidis, S. Palazzo","doi":"10.1145/2390832.2390847","DOIUrl":"https://doi.org/10.1145/2390832.2390847","url":null,"abstract":"Object detection in underwater unconstrained environments is useful in domains like marine biology and geology, where the scientists need to study fish populations, underwater geological events etc. However, in literature, very little can be found regarding fish detection in unconstrained underwater videos. Nevertheless, the unconstrained underwater video domain constitutes a perfect soil for bringing state-of-the-art object detection algorithms to their limits because of the nature of the scenes, which often present with a number of intrinsic difficulties (e.g. multi-modal backgrounds, complex textures and color patterns, ever-changing illumination etc..).\u0000 In this paper, we evaluated the performance of six state-of-the-art object detection algorithms in the task of fish detection in unconstrained, underwater video footage, discussing the properties of each of them and giving a detailed report of the achieved performance.","PeriodicalId":173175,"journal":{"name":"MAED '12","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127540989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
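
Evaluating detectors against ground truth typically involves matching predicted boxes to annotated ones and counting true and false positives; one common recipe is greedy IoU matching per frame. The sketch below shows that recipe under assumed conventions (boxes as (x1, y1, x2, y2), IoU threshold 0.5); the paper's exact protocol may differ.

```python
# Generic per-frame detection scoring via greedy IoU matching.
# Conventions (box format, 0.5 threshold) are assumptions, not the paper's setup.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def frame_scores(detections, ground_truth, thr=0.5):
    """Return (true positives, false positives, false negatives) for one frame."""
    unmatched = list(ground_truth)
    tp = 0
    for det in detections:
        best = max(unmatched, key=lambda g: iou(det, g), default=None)
        if best is not None and iou(det, best) >= thr:
            tp += 1
            unmatched.remove(best)   # each ground-truth box is matched at most once
    return tp, len(detections) - tp, len(unmatched)
```

Precision and recall then follow from the totals accumulated over all frames.
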
Semantic based retrieval system of arctic animal images
MAED '12 Pub Date : 2012-11-02 DOI: 10.1145/2390832.2390844
Giuseppe Santoro, C. Pino, D. Giordano
{"title":"Semantic based retrieval system of arctic animal images","authors":"Giuseppe Santoro, C. Pino, D. Giordano","doi":"10.1145/2390832.2390844","DOIUrl":"https://doi.org/10.1145/2390832.2390844","url":null,"abstract":"In this paper we propose a semantic based image retrieval system in the domain of arctic animals. The proposed system exploits a semantic engine capable of adapting the processing steps both to the users' need and to the arctic image domain. This exibility has been achieved by three main steps: 1) arctic domain ontology modeling, 2) identification of features peculiar of the images we are dealing with and, 3)interface composition to support user interaction and search customization. The performance of the proposed system was tested using a set of 200 images depicting wild animals living in polar environment while users performed different search tasks specifying different constraints through the user interface. The results show both the accuracy of the retrieved results and the exibility to the users' constraints of the proposed system.","PeriodicalId":173175,"journal":{"name":"MAED '12","volume":"384 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122778020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
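
A toy sketch of the ontology-driven retrieval idea: expand a query concept through a small taxonomy and return the images whose annotations intersect the expanded concept set. The taxonomy and annotations below are invented placeholders, not the paper's arctic-animal ontology.

```python
# Toy ontology-driven query expansion and retrieval. The taxonomy, labels and
# image annotations are invented examples for illustration only.
TAXONOMY = {
    "arctic_animal": ["polar_bear", "arctic_fox", "walrus"],
    "polar_bear": [], "arctic_fox": [], "walrus": [],
}

def expand(concept):
    """Return the concept plus all of its descendants in the taxonomy."""
    result = {concept}
    for child in TAXONOMY.get(concept, []):
        result |= expand(child)
    return result

def retrieve(concept, annotated_images):
    """annotated_images: dict of image id -> set of concept labels."""
    wanted = expand(concept)
    return [img for img, labels in annotated_images.items() if labels & wanted]

# Example: retrieve("arctic_animal", {"img1": {"polar_bear"}, "img2": {"ship"}})
# returns ["img1"].
```
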
A visual sensing platform for creating a smarter multi-modal marine monitoring network
MAED '12 Pub Date : 2012-11-02 DOI: 10.1145/2390832.2390846
Dian Zhang, Edel O'Connor, Kevin McGuinness, N. O’Connor, F. Regan, A. Smeaton
{"title":"A visual sensing platform for creating a smarter multi-modal marine monitoring network","authors":"Dian Zhang, Edel O'Connor, Kevin McGuinness, N. O’Connor, F. Regan, A. Smeaton","doi":"10.1145/2390832.2390846","DOIUrl":"https://doi.org/10.1145/2390832.2390846","url":null,"abstract":"Demands from various scientific and management communities along with legislative requirements at national and international levels have led to a need for innovative research into large-scale, low-cost, reliable monitoring of our marine and freshwater environments. In this paper we demonstrate the benefits of a multi-modal approach to monitoring and how an in-situ sensor network can be enhanced with the use of contextual image data. We provide an outline of the deployment of a visual sensing system at a busy port and the need for monitoring shipping traffic at the port. Subsequently we present an approach for detecting ships in a challenging image dataset and discuss how this can help to create an intelligent marine monitoring network.","PeriodicalId":173175,"journal":{"name":"MAED '12","volume":"42 28","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120865377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
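
One generic way to flag moving vessels from a fixed camera is background subtraction followed by contour filtering. The OpenCV-based sketch below illustrates that baseline only and is not the detection approach presented in the paper; the area threshold and morphology step are arbitrary choices.

```python
# Baseline moving-object detection for a fixed-camera port view.
# Generic illustration, not the authors' ship detector.
import cv2

def detect_moving_objects(video_path, min_area=500):
    """Return, per frame, bounding boxes (x, y, w, h) of large moving blobs."""
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    boxes_per_frame = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle noise
        # OpenCV 4.x returns (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]
        boxes_per_frame.append(boxes)
    cap.release()
    return boxes_per_frame
```
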
Environmental data extraction from multimedia resources
MAED '12 Pub Date : 2012-11-02 DOI: 10.1145/2390832.2390836
A. Moumtzidou, Victor Epitropou, S. Vrochidis, Sascha Voth, Anastasios Bassoukos, K. Karatzas, J. Moßgraber, Y. Kompatsiaris, A. Karppinen, J. Kukkonen
{"title":"Environmental data extraction from multimedia resources","authors":"A. Moumtzidou, Victor Epitropou, S. Vrochidis, Sascha Voth, Anastasios Bassoukos, K. Karatzas, J. Moßgraber, Y. Kompatsiaris, A. Karppinen, J. Kukkonen","doi":"10.1145/2390832.2390836","DOIUrl":"https://doi.org/10.1145/2390832.2390836","url":null,"abstract":"Extraction and analysis of environmental information is very important, since it strongly affects everyday life. Nowadays there are already many free services providing environmental information in several formats including multimedia (e.g. map images). Although such presentation formats might be very informative for humans, they complicate the automatic extraction and processing of the underlying data. A characteristic example is the air quality and pollen forecasts, which are usually encoded in image maps, while the initial (numerical) pollutant concentrations remain unavailable. This work proposes a framework for the semi-automatic extraction of such information based on a template configuration tool, on Optical Character Recognition (OCR) techniques and on methodologies for data reconstruction from images. The system is tested with a different air quality and pollen forecast heatmaps demonstrating promising results.","PeriodicalId":173175,"journal":{"name":"MAED '12","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129915495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
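
The "data reconstruction from images" step can be pictured as mapping each heatmap pixel to the nearest colour of a known legend and hence back to an approximate concentration value. The sketch below assumes an invented legend and a pre-cropped map area; the template configuration and OCR stages of the actual framework are not shown.

```python
# Sketch of value reconstruction from a heatmap via nearest-legend-colour lookup.
# The legend entries are made up for illustration.
import numpy as np

# hypothetical legend: RGB colour -> pollutant concentration (e.g. ug/m3)
LEGEND = {(0, 0, 255): 10.0, (0, 255, 0): 30.0,
          (255, 255, 0): 60.0, (255, 0, 0): 100.0}

def reconstruct(image_rgb):
    """image_rgb: H x W x 3 uint8 array of the map area (axes/labels cropped off)."""
    colours = np.array(list(LEGEND.keys()), dtype=float)    # (L, 3)
    values = np.array(list(LEGEND.values()))                 # (L,)
    pixels = image_rgb.reshape(-1, 3).astype(float)          # (H*W, 3)
    d = np.linalg.norm(pixels[:, None, :] - colours[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)                            # closest legend colour
    return values[nearest].reshape(image_rgb.shape[:2])       # per-pixel concentration
```
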
Multi-organ plant identification
MAED '12 Pub Date : 2012-11-02 DOI: 10.1145/2390832.2390843
H. Goëau, P. Bonnet, Julien Barbe, V. Bakic, A. Joly, J. Molino, D. Barthélémy, N. Boujemaa
{"title":"Multi-organ plant identification","authors":"H. Goëau, P. Bonnet, Julien Barbe, V. Bakic, A. Joly, J. Molino, D. Barthélémy, N. Boujemaa","doi":"10.1145/2390832.2390843","DOIUrl":"https://doi.org/10.1145/2390832.2390843","url":null,"abstract":"This paper presents a new interactive web application for the visual identification of plants based on collaborative pictures. Contrary to previous content-based identification methods and systems developed for plants that mainly relied on leaves, or in few other cases on flowers, it makes use of five different organs and plant's views including habit, flowers, fruits, leaves and bark. Thanks to an interactive and visual query widget, the tagging process of the different organs and views is as simple as drag-and-drop operations and does not require any expertise in botany. All training pictures used by the system were continuously collected during one year through a crowdsourcing application that was set up in the scope of a citizen sciences initiative. System-oriented and human-centered evaluations of the application show that the results are already satisfactory and therefore very promising in the long term to identify a richer flora.","PeriodicalId":173175,"journal":{"name":"MAED '12","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133159913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
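
Combining evidence from several organs can be pictured as late fusion of per-organ retrieval scores into a single species ranking. The averaging rule in the sketch below is an assumption for illustration, not the system's actual fusion strategy.

```python
# Hypothetical late fusion of per-organ similarity scores into a species ranking.
# The mean-of-available-scores rule is an assumption for illustration.
from collections import defaultdict

def fuse(organ_scores):
    """organ_scores: dict organ -> dict species -> similarity score."""
    totals, counts = defaultdict(float), defaultdict(int)
    for scores in organ_scores.values():
        for species, s in scores.items():
            totals[species] += s
            counts[species] += 1
    ranking = {sp: totals[sp] / counts[sp] for sp in totals}
    return sorted(ranking.items(), key=lambda kv: -kv[1])

# Example: fuse({"leaf": {"Quercus robur": 0.8, "Acer campestre": 0.4},
#                "bark": {"Quercus robur": 0.6}})
# ranks Quercus robur first.
```
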
Grass, scrub, trees and random forest
MAED '12 Pub Date : 2012-11-02 DOI: 10.1145/2390832.2390834
M. Torres, G. Qiu
{"title":"Grass, scrub, trees and random forest","authors":"M. Torres, G. Qiu","doi":"10.1145/2390832.2390834","DOIUrl":"https://doi.org/10.1145/2390832.2390834","url":null,"abstract":"Habitat classification is important for monitoring the environment and biodiversity. Currently, this is done manually by human surveyors, a laborious, expensive and subjective process. We have developed a new computer habitat classification method based on automatically tagging geo-referenced ground photographs. In this paper, we present a geo-referenced habitat image database containing over 400 high-resolution ground photographs that have been manually annotated by experts based on a hierarchical habitat classification scheme widely used by ecologists. This will be the first publicly available image database specifically designed for the development of multimedia analysis techniques for ecological (habitat classification) applications. We formulate photograph-based habitat classification as an automatic image tagging problem and we have developed a novel random-forest based method for annotating an image with the habitat categories it contains. We have also developed an efficient and fast random-projection based technique for constructing the random forest. We present experimental results to show that ground-taken photographs are a potential source of information that can be exploited in automatic habitat classification and that our approach is able to classify with a reasonable degree of confidence three of the main habitat classes: Woodland and Scrub, Grassland and Marsh and Miscellaneous.","PeriodicalId":173175,"journal":{"name":"MAED '12","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133306212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
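
The random-projection idea mentioned in the abstract can be pictured as splitting tree nodes on a random linear projection of the feature vector rather than on a single feature. The sketch below evaluates a few candidate projections by Gini impurity; it is a generic illustration, not the paper's forest construction.

```python
# Generic random-projection node split for a decision tree: project features onto
# random directions and keep the candidate split with the lowest weighted Gini
# impurity. Candidate count and threshold rule are illustrative assumptions.
import numpy as np

def random_projection_split(X, y, n_candidates=10, rng=None):
    """X: (n, d) feature array, y: (n,) labels. Returns (impurity, direction, threshold)."""
    rng = rng or np.random.default_rng()

    def gini(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    best = None
    for _ in range(n_candidates):
        w = rng.normal(size=X.shape[1])          # random projection direction
        proj = X @ w
        thr = np.median(proj)                    # simple threshold choice
        left, right = y[proj <= thr], y[proj > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if best is None or score < best[0]:
            best = (score, w, thr)
    return best                                   # None if no valid split was found
```
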