{"title":"Landmark recognition in VISITO: VIsual Support to Interactive TOurism in Tuscany","authors":"Giuseppe Amato, Paolo Bolettieri, F. Falchi","doi":"10.1145/1991996.1992057","DOIUrl":"https://doi.org/10.1145/1991996.1992057","url":null,"abstract":"We present the VIsual Support to Interactive TOurism in Tuscany (VISITO Tuscany) project, which offers an interactive guide, accessible via smartphones, for tourists visiting cities of art. The peculiarity of the system is that user interaction is mainly obtained through images: to receive information on a particular monument, users just have to take a picture of it. Using techniques of image analysis and content recognition, VISITO Tuscany automatically recognizes the photographed monument and displays pertinent information to the user. In this paper we illustrate how landmark recognition from mobile devices can provide the tourist with relevant and customized information about various types of objects in cities of art.","PeriodicalId":390933,"journal":{"name":"Proceedings of the 1st ACM International Conference on Multimedia Retrieval","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114427197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Folksonomy-boosted social media search and ranking","authors":"Majdi Rawashdeh, Heung-Nam Kim, Abdulmotaleb El Saddik","doi":"10.1145/1991996.1992023","DOIUrl":"https://doi.org/10.1145/1991996.1992023","url":null,"abstract":"With the rapid proliferation of social media services, users on the social Web are overwhelmed by the huge amount of social media available. In this paper, we look into the potential of social tagging in social media services to help users in retrieving social media. By leveraging social tagging, we propose a new personalized search method to enhance not only retrieval accuracy but also retrieval coverage. Our approach first determines the similarities between resources and between tags. Thereafter, we build two models: a user-tag relation model that reflects how a certain user has assigned tags similar to a given tag, and a tag-item relation model that captures how a certain tag has been applied to resources similar to a given resource. We then seamlessly map the tags on the items depending on a particular user's query in order to find the most attractive media content relevant to the user's needs. The experimental evaluations have shown that the proposed method achieves better search results than state-of-the-art algorithms in terms of accuracy and coverage.","PeriodicalId":390933,"journal":{"name":"Proceedings of the 1st ACM International Conference on Multimedia Retrieval","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114452864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic combination of textual and visual information in multimedia retrieval","authors":"S. Clinchant, Julien Ah-Pine, G. Csurka","doi":"10.1145/1991996.1992040","DOIUrl":"https://doi.org/10.1145/1991996.1992040","url":null,"abstract":"The goal of this paper is to introduce a set of techniques we call semantic combination in order to efficiently fuse text and image retrieval systems in the context of multimedia information access. These techniques emerge from the observation that image and textual queries are expressed at different semantic levels and that a single image query is often ambiguous. Overall, the semantic combination techniques overcome a conceptual barrier rather than a technical one: these methods can be seen as a combination of late fusion and image reranking. Albeit simple, this approach has not been used yet. We assess the proposed techniques against late and cross-media fusion using 4 different ImageCLEF datasets. Compared to late fusion, performance increases significantly on two datasets and remains similar on the other two.","PeriodicalId":390933,"journal":{"name":"Proceedings of the 1st ACM International Conference on Multimedia Retrieval","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127400656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive schematic summaries for exploration of surveillance video","authors":"Markus Höferlin, Benjamin Höferlin, D. Weiskopf, G. Heidemann","doi":"10.1145/1991996.1992005","DOIUrl":"https://doi.org/10.1145/1991996.1992005","url":null,"abstract":"We present a new and scalable technique to explore surveillance videos by scatter/gather browsing of trajectories of moving objects. The proposed approach facilitates interactive clustering of trajectories by an effective way of cluster visualization that we term schematic summaries. This novel visualization illustrates cluster summaries in a schematic, non-photorealistic style. To reduce visual clutter, we introduce the trajectory bundling technique. The fusion of schematic summaries and user interaction leads to efficient hierarchical exploration of video data. Examples of different browsing scenarios demonstrate the effectiveness of the proposed method.","PeriodicalId":390933,"journal":{"name":"Proceedings of the 1st ACM International Conference on Multimedia Retrieval","volume":"120 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131486923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Considerations for a touchscreen visual lifelog","authors":"Niamh Caprani, N. O’Connor, C. Gurrin","doi":"10.1145/1991996.1992063","DOIUrl":"https://doi.org/10.1145/1991996.1992063","url":null,"abstract":"In this paper we describe the design considerations for a touchscreen visual lifelog browser. Visual lifelogs are large collections of photographs which represent a person's experiences. Lifelogging devices, such as the wearable camera known as SenseCam, can record thousands of images per day. Utilizing the approach of event segmentation to organize and present these images, we have designed an interface to present lifelog collections for touchscreen interaction, thus increasing accessibility for users.","PeriodicalId":390933,"journal":{"name":"Proceedings of the 1st ACM International Conference on Multimedia Retrieval","volume":"68 3-4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116720340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Photo summary: automated selection of representative photos from a digital collection","authors":"S. Nowak, Ronny Paduschek, Uwe Kühhirt","doi":"10.1145/1991996.1992071","DOIUrl":"https://doi.org/10.1145/1991996.1992071","url":null,"abstract":"This work presents a showcase on automated selection of photos from a digital photo collection. The Photo Summary technology considers content-based information and photo metadata to determine the most relevant photos in a given collection. The key contribution is the rating scheme for relevance which is based on criteria such as the diversity of photos, the importance of the photo motifs, the technical quality and aesthetics of photos and the interdependence of photos concerning the represented events. The summarization system further considers user preferences and visualizes the selected photos as event staples.","PeriodicalId":390933,"journal":{"name":"Proceedings of the 1st ACM International Conference on Multimedia Retrieval","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115781285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient clustering and quantisation of SIFT features: exploiting characteristics of the SIFT descriptor and interest region detectors under image inversion","authors":"Jonathon S. Hare, Sina Samangooei, P. Lewis","doi":"10.1145/1991996.1991998","DOIUrl":"https://doi.org/10.1145/1991996.1991998","url":null,"abstract":"The SIFT keypoint descriptor is a powerful approach to encoding local image description using edge orientation histograms. Through codebook construction via k-means clustering and quantisation of SIFT features, we can achieve image retrieval treating images as bags-of-words. Intensity inversion of an image results in distinct SIFT features for a single local image patch across the two images. Intensity inversions notwithstanding, these two patches are structurally identical. Through careful reordering of the SIFT feature vectors, we can construct the SIFT feature that would have been generated from a non-inverted image patch, starting with the feature extracted from an inverted image patch. Furthermore, through examination of the local feature detection stage, we can estimate whether a given SIFT feature belongs in the space of inverted features or of non-inverted features. Therefore we can consistently separate the space of SIFT features into two distinct subspaces.\n\nWith this knowledge, we can demonstrate reduced time complexity of codebook construction via clustering by up to a factor of four, and also reduce the memory consumption of the clustering algorithms, while producing equivalent retrieval results.","PeriodicalId":390933,"journal":{"name":"Proceedings of the 1st ACM International Conference on Multimedia Retrieval","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124794128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image and video browsing with a cylindrical 3D storyboard","authors":"Klaus Schöffmann, L. Böszörményi","doi":"10.1145/1991996.1992059","DOIUrl":"https://doi.org/10.1145/1991996.1992059","url":null,"abstract":"We demonstrate an interactive 3D storyboard that takes advantage of 3D graphics in order to overcome certain limitations of conventional 2D storyboards when used for the task of image and video browsing.","PeriodicalId":390933,"journal":{"name":"Proceedings of the 1st ACM International Conference on Multimedia Retrieval","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122413298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scalable triangulation-based logo recognition","authors":"Yannis Kalantidis, Lluis Garcia Pueyo, Michele Trevisiol, R. V. Zwol, Yannis Avrithis","doi":"10.1145/1991996.1992016","DOIUrl":"https://doi.org/10.1145/1991996.1992016","url":null,"abstract":"We propose a scalable logo recognition approach that extends the common bag-of-words model and incorporates local geometry in the indexing process. Given a query image and a large logo database, the goal is to recognize the logo contained in the query, if any. We locally group features in triples using multi-scale Delaunay triangulation and represent triangles by signatures capturing both visual appearance and local geometry. Each class is represented by the union of such signatures over all instances in the class. We see large scale recognition as a sub-linear search problem where signatures of the query image are looked up in an inverted index structure of the class models. We evaluate our approach on a large-scale logo recognition dataset with more than four thousand classes.","PeriodicalId":390933,"journal":{"name":"Proceedings of the 1st ACM International Conference on Multimedia Retrieval","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127225721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MuMIe: a new system for multimedia metadata interoperability","authors":"Samir Amir, Y. Benabbas, Ioan Marius Bilasco, C. Djeraba","doi":"10.1145/1991996.1991997","DOIUrl":"https://doi.org/10.1145/1991996.1991997","url":null,"abstract":"The recent growth of multimedia requires extensive use of metadata for its management. However, uniform access to metadata is necessary in order to take advantage of it. In this context, several techniques for achieving metadata interoperability have been developed. Most of these techniques focus on matching schemas defined using a single schema description language. The few existing matching systems that support schemas from different languages present some limitations. In this paper we present a new integration system supporting schemas from different description languages. Moreover, the proposed matching process makes use of several types of information (linguistic, semantic and structural) in a manner that increases the matching accuracy.","PeriodicalId":390933,"journal":{"name":"Proceedings of the 1st ACM International Conference on Multimedia Retrieval","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127777494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}