{"title":"文字与图片相结合的博物馆信息检索","authors":"Avanish Kumar, U. Tiwary, Tanveer J. Siddiqui","doi":"10.1109/IHCI.2012.6481847","DOIUrl":null,"url":null,"abstract":"In this paper we propose the use of multilevel classification techniques similar to concept of Bayesian belief networks for Combining Words and Pictures (Images) for Museum Information Retrieval. We have designed our own corpus on Allahabad Museum. This approach is static which allows one to compute the rank of documents of relevant words and pictures with respect to some query and a given corpus. In our case, we view combining words and pictures as a task in which a training dataset of tagged pictures is provided and we need to automatically combine the query relevant words and pictures. To do this, we first describe the picture using feature vector. We do static analysis over computed features to get distinguishing feature descriptors. Maximum similarity i.e. minimum distance allows us to find the query relevant combined pictures and associated relevant words. For textual part of the query we compute the concepts (keywords as well as synonyms of each keyword in the query and their categories). Using the concept of image hierarchy, we calculate the score of each labeled document and select top five documents with its associated pictures.","PeriodicalId":107245,"journal":{"name":"2012 4th International Conference on Intelligent Human Computer Interaction (IHCI)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Combining Words and Pictures for Museum Information Retrieval\",\"authors\":\"Avanish Kumar, U. Tiwary, Tanveer J. 
Siddiqui\",\"doi\":\"10.1109/IHCI.2012.6481847\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper we propose the use of multilevel classification techniques similar to concept of Bayesian belief networks for Combining Words and Pictures (Images) for Museum Information Retrieval. We have designed our own corpus on Allahabad Museum. This approach is static which allows one to compute the rank of documents of relevant words and pictures with respect to some query and a given corpus. In our case, we view combining words and pictures as a task in which a training dataset of tagged pictures is provided and we need to automatically combine the query relevant words and pictures. To do this, we first describe the picture using feature vector. We do static analysis over computed features to get distinguishing feature descriptors. Maximum similarity i.e. minimum distance allows us to find the query relevant combined pictures and associated relevant words. For textual part of the query we compute the concepts (keywords as well as synonyms of each keyword in the query and their categories). 
Using the concept of image hierarchy, we calculate the score of each labeled document and select top five documents with its associated pictures.\",\"PeriodicalId\":107245,\"journal\":{\"name\":\"2012 4th International Conference on Intelligent Human Computer Interaction (IHCI)\",\"volume\":\"59 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 4th International Conference on Intelligent Human Computer Interaction (IHCI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IHCI.2012.6481847\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 4th International Conference on Intelligent Human Computer Interaction (IHCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IHCI.2012.6481847","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Combining Words and Pictures for Museum Information Retrieval
In this paper we propose the use of multilevel classification techniques, similar in concept to Bayesian belief networks, for combining words and pictures (images) for museum information retrieval. We have designed our own corpus on the Allahabad Museum. The approach is static: it allows one to compute the rank of documents containing relevant words and pictures with respect to a given query and corpus. In our setting, combining words and pictures is a task in which a training dataset of tagged pictures is provided and the query-relevant words and pictures must be combined automatically. To do this, we first describe each picture by a feature vector and perform static analysis over the computed features to obtain distinguishing feature descriptors. Maximum similarity, i.e. minimum distance, then yields the query-relevant combined pictures and their associated words. For the textual part of the query we compute the concepts (the keywords, the synonyms of each keyword, and their categories). Using the concept of an image hierarchy, we calculate the score of each labeled document and select the top five documents with their associated pictures.
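The ranking procedure the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the scoring functions, the Euclidean metric, the `1/(1+d)` similarity transform, and the simple additive combination of text and image scores are all assumptions chosen for clarity; the paper's actual multilevel classification and image hierarchy are not reproduced here.

```python
import math

def euclidean(a, b):
    # Distance between two picture feature vectors; minimum
    # distance corresponds to maximum similarity.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def text_score(query_concepts, doc_keywords):
    # Fraction of query concepts (keywords plus their synonyms)
    # that appear among the document's labels.
    if not query_concepts:
        return 0.0
    hits = sum(1 for c in query_concepts if c in doc_keywords)
    return hits / len(query_concepts)

def rank_documents(query_concepts, query_features, docs, top_k=5):
    # Score each labeled document by combining a text-overlap score
    # with an image-similarity score, then return the top_k names.
    scored = []
    for doc in docs:
        # Closest picture in the document to the query features.
        dist = min(euclidean(query_features, f) for f in doc["features"])
        image_sim = 1.0 / (1.0 + dist)  # assumed similarity transform
        score = text_score(query_concepts, doc["keywords"]) + image_sim
        scored.append((score, doc["name"]))
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_k]]
```

For example, with two hypothetical documents, a query tagged "coin" whose feature vector lies near the coin picture would rank the coin document first:

```python
docs = [
    {"name": "coin", "keywords": {"coin", "gupta"}, "features": [[0.0, 0.0]]},
    {"name": "statue", "keywords": {"statue"}, "features": [[5.0, 5.0]]},
]
rank_documents({"coin"}, [0.1, 0.1], docs)  # "coin" ranked first
```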