A framework for automatically generating composite keywords for geo-tagged street images

IF 1.2 | CAS Zone 4, Multidisciplinary | JCR Q3 MULTIDISCIPLINARY SCIENCES
Abdullah Alfarrarjeh , Seon Ho Kim , Jungwon Yoon
DOI: 10.1016/j.kjs.2024.100333
Journal: Kuwait Journal of Science, vol. 52, no. 1, Article 100333
Published: 2024-10-16 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2307410824001585
Citations: 0

Abstract

Due to the ubiquity of sensor-equipped cameras such as smartphones, images are associated with spatial metadata, including the camera's geographical location and viewing orientation, which can be used to automatically generate better semantic keywords for geo-tagged urban street images, in addition to the visual keywords extracted from image analysis. This study introduces a novel framework for auto-tagging images that integrates both spatial and visual properties to generate comprehensive and accurate tags. The framework operates through four phases: extraction, abstraction, composition, and assessment. Our research highlights the benefits of combining visual and spatial analyses, demonstrated through a case study using geo-tagged urban street images from Orlando, Pittsburgh, and Manhattan. Experimental results show that the proposed framework significantly enhances the accuracy of keyword-based searches compared to conventional methods. In particular, based on our experiments, image search using the tags generated by our proposed framework, referred to as descriptive tags, achieved an average precision improvement factor of 0.9 compared to conventional tags. Additionally, our proposed ranking algorithm, which extends the term frequency-inverse document frequency (TF-IDF) algorithm, resulted in improvement factors of 0.86 for mean average precision (MAP) and 0.57 for mean reciprocal rank (MRR). Moreover, our framework's flexibility and robustness make it suitable for diverse applications, from smart cities to online shopping. The paper also includes a detailed evaluation and user study, confirming the precision and reliability of the generated tags.
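The abstract states that the proposed ranking algorithm extends TF-IDF and is evaluated with MAP and MRR. As background only, here is a minimal sketch of the plain TF-IDF baseline over tagged images and of the MRR metric; the function names and the image-to-tag-list data layout are illustrative assumptions, not the paper's actual implementation:

```python
from collections import Counter
from math import log

def tf_idf_rank(query_tag, image_tags):
    """Rank images for a query tag using plain TF-IDF (the baseline the
    paper's ranking algorithm extends).

    image_tags: dict mapping image id -> list of tags attached to it.
    """
    n = len(image_tags)
    # document frequency: in how many images the query tag appears
    df = sum(1 for tags in image_tags.values() if query_tag in tags)
    if df == 0:
        return []
    idf = log(n / df)
    scores = {}
    for img, tags in image_tags.items():
        # term frequency of the query tag within this image's tag list
        tf = Counter(tags)[query_tag] / len(tags)
        if tf > 0:
            scores[img] = tf * idf
    return sorted(scores, key=scores.get, reverse=True)

def mean_reciprocal_rank(rankings, relevant):
    """MRR over queries: rankings[q] is an ordered list of image ids,
    relevant[q] is the set of ids judged relevant for query q."""
    rr = []
    for q, ranked in rankings.items():
        rel = relevant[q]
        rr.append(next((1.0 / (i + 1) for i, img in enumerate(ranked)
                        if img in rel), 0.0))
    return sum(rr) / len(rr)
```

For example, an image whose tag list mentions "road" twice out of three tags ranks above one mentioning it once out of two; the paper's extension additionally incorporates the spatial (descriptive) tags, which this sketch omits.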
Source journal: Kuwait Journal of Science (MULTIDISCIPLINARY SCIENCES)
CiteScore: 1.60
Self-citation rate: 28.60%
Articles per year: 132
Journal description: Kuwait Journal of Science (KJS) is indexed and abstracted by major abstracting services such as Chemical Abstracts, Science Citation Index, Current Contents, Mathematics Abstracts, Microbiological Abstracts, etc. KJS publishes peer-reviewed articles in various fields of science, including Mathematics, Computer Science, Physics, Statistics, Biology, Chemistry, and Earth & Environmental Sciences. In addition, it also aims to bring the results of scientific research carried out under a variety of intellectual traditions and organizations to the attention of a specialized scholarly readership. As such, the publisher expects the submission of original manuscripts which contain analysis and solutions of important theoretical, empirical, and normative issues.