A Class of R*-tree Indexes for Spatial-Visual Search of Geo-tagged Street Images
Abdullah Alfarrarjeh, S. H. Kim, V. Hegde, Akshansh, C. Shahabi, Q. Xie, S. Ravada
2020 IEEE 36th International Conference on Data Engineering (ICDE), pp. 1990-1993, April 2020
DOI: 10.1109/ICDE48307.2020.00221 (https://doi.org/10.1109/ICDE48307.2020.00221)
Citations: 8
Abstract
Due to the prevalence of GPS-equipped cameras (e.g., smartphones and surveillance cameras), massive amounts of geo-tagged images capturing urban streets are increasingly being collected. Consequently, many smart city applications have emerged that rely on efficient image search. Such searches include spatial-visual queries, in which spatial and visual properties are used in tandem to retrieve images similar to a given query image within a given geographical region. Towards this end, new index structures that organize images based on both spatial and visual properties are needed to execute such queries efficiently. Based on our observation that street images within the same spatial locality are typically visually similar, index structures for spatial-visual queries can be effectively built on a spatial index (i.e., an R*-tree). Therefore, we propose a class of R*-tree indexes, in particular by associating each node with two separate minimum bounding rectangles (MBRs), one for the spatial and the other for the (dimension-reduced) visual properties of the contained images, and by adapting the R*-tree optimization criteria to both property types.
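The core idea in the abstract, each index node carrying one MBR over spatial coordinates and a second MBR over dimension-reduced visual features, with the R*-tree optimization criteria applied to both, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the names (`SVNode`, `insertion_cost`) and the weighted area-enlargement criterion with parameter `alpha` are assumptions for the sake of the example.

```python
# Sketch of an R*-tree-style node holding two separate minimum bounding
# rectangles (MBRs): one over spatial coordinates (e.g., lat/lon) and one
# over dimension-reduced visual feature vectors. Illustrative only.
from dataclasses import dataclass
from typing import Tuple

Rect = Tuple[Tuple[float, ...], Tuple[float, ...]]  # (lower corner, upper corner)

def enlarge(mbr: Rect, point: Tuple[float, ...]) -> Rect:
    """Return the smallest rectangle covering both the MBR and the point."""
    lo, hi = mbr
    return (tuple(min(l, p) for l, p in zip(lo, point)),
            tuple(max(h, p) for h, p in zip(hi, point)))

def area(mbr: Rect) -> float:
    """Hyper-volume of the rectangle (the classic R*-tree area criterion)."""
    lo, hi = mbr
    prod = 1.0
    for l, h in zip(lo, hi):
        prod *= (h - l)
    return prod

@dataclass
class SVNode:
    spatial_mbr: Rect  # bounds the geo-coordinates of contained images
    visual_mbr: Rect   # bounds their (dimension-reduced) visual descriptors

def insertion_cost(node: SVNode, geo: Tuple[float, ...],
                   feat: Tuple[float, ...], alpha: float = 0.5) -> float:
    """Weighted area enlargement over BOTH MBRs: a hypothetical way to
    adapt the R*-tree subtree-choice criterion to both property types."""
    d_spatial = area(enlarge(node.spatial_mbr, geo)) - area(node.spatial_mbr)
    d_visual = area(enlarge(node.visual_mbr, feat)) - area(node.visual_mbr)
    return alpha * d_spatial + (1.0 - alpha) * d_visual
```

During insertion, the subtree whose combined (spatial + visual) enlargement is smallest would be chosen, so that nodes stay tight in both the geographic and the feature space; a query can then prune a subtree as soon as either MBR fails to intersect the query's region or feature range.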