{"title":"Semantic and Visual Cues for Humanitarian Computing of Natural Disaster Damage Images","authors":"H. Jomaa, Yara Rizk, M. Awad","doi":"10.1109/SITIS.2016.70","DOIUrl":null,"url":null,"abstract":"Identifying different types of damage is very essential in times of natural disasters, where first responders are flooding the internet with often annotated images and texts, and rescue teams are overwhelmed to prioritize often scarce resources. While most of the efforts in such humanitarian situations rely heavily on human labor and input, we propose in this paper a novel hybrid approach to help automate more humanitarian computing. Our framework merges low-level visual features that extract color, shape and texture along with a semantic attribute that is obtained after comparing the picture annotation to some bag of words. These visual and textual features were trained and tested on a dataset gathered from the SUN database and some Google Images. The best accuracy obtained using low-level features alone is 91.3 %, while appending the semantic attributes to it raised the accuracy to 95.5% using linear SVM and 5-Fold cross-validation which motivates an updated folk statement \"an ANNOTATED image is worth a thousand word \".","PeriodicalId":403704,"journal":{"name":"2016 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"106 4","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SITIS.2016.70","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Identifying different types of damage is essential during natural disasters, when first responders flood the internet with often-annotated images and text, and rescue teams are overwhelmed trying to prioritize scarce resources. While most efforts in such humanitarian situations rely heavily on human labor and input, we propose in this paper a novel hybrid approach to help further automate humanitarian computing. Our framework merges low-level visual features capturing color, shape, and texture with a semantic attribute obtained by comparing the image annotation to a bag of words. These visual and textual features were trained and tested on a dataset gathered from the SUN database and Google Images. The best accuracy obtained using low-level features alone is 91.3%, while appending the semantic attribute raised the accuracy to 95.5% using a linear SVM and 5-fold cross-validation, which motivates an updated folk statement: "an ANNOTATED image is worth a thousand words".
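The abstract describes fusing low-level visual descriptors with a bag-of-words-based semantic attribute and evaluating a linear SVM with 5-fold cross-validation. Below is a minimal sketch of that pipeline, not the authors' implementation: the damage vocabulary, the placeholder visual features, and the helper names (semantic_attribute, fuse_features) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): fuse hand-crafted visual features with a
# binary semantic attribute, then evaluate a linear SVM with 5-fold cross-validation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Assumed damage-related vocabulary; the paper's actual bag of words is not given here.
DAMAGE_VOCABULARY = {"collapsed", "flood", "rubble", "debris", "destroyed"}

def semantic_attribute(annotation: str) -> float:
    """Return 1.0 if the image annotation shares any term with the damage vocabulary, else 0.0."""
    tokens = set(annotation.lower().split())
    return 1.0 if tokens & DAMAGE_VOCABULARY else 0.0

def fuse_features(visual_features: np.ndarray, annotations: list) -> np.ndarray:
    """Append the semantic attribute as one extra column to the visual feature matrix."""
    semantic = np.array([[semantic_attribute(a)] for a in annotations])
    return np.hstack([visual_features, semantic])

# Placeholder data: real visual features would come from color, shape, and texture descriptors.
rng = np.random.default_rng(0)
X_visual = rng.normal(size=(200, 128))
annotations = ["collapsed building"] * 100 + ["city skyline"] * 100
y = np.array([1] * 100 + [0] * 100)  # damage vs. no damage

X = fuse_features(X_visual, annotations)
scores = cross_val_score(LinearSVC(C=1.0), X, y, cv=5)
print("5-fold mean accuracy: %.3f" % scores.mean())
```

In this sketch the semantic attribute is a single appended dimension, so the linear SVM can weigh it alongside the visual descriptors; the reported gain from 91.3% to 95.5% suggests that even one well-chosen textual cue can complement low-level visual features.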