{"title":"EXTENT:融合上下文、内容和语义本体进行照片注释","authors":"E. Chang","doi":"10.1145/1160939.1160945","DOIUrl":null,"url":null,"abstract":"This architecture paper presents EXTENT, a probabilistic framework that uses influence diagrams to fuse metadata of multiple modalities for photo annotation. EXTENT fuses contextual information (location, time, and camera parameters), photo content (perceptual features), and semantic ontology in a synergistic way. It uses causal strengths to encode causalities between variables, and between variables and semantic labels. Through a landmark-recognition case study, we show that EXTENT can provide high-quality annotation, substantially better than any traditional unimodal methods.","PeriodicalId":346313,"journal":{"name":"Computer Vision meets Databases","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"EXTENT: fusing context, content, and semantic ontology for photo annotation\",\"authors\":\"E. Chang\",\"doi\":\"10.1145/1160939.1160945\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This architecture paper presents EXTENT, a probabilistic framework that uses influence diagrams to fuse metadata of multiple modalities for photo annotation. EXTENT fuses contextual information (location, time, and camera parameters), photo content (perceptual features), and semantic ontology in a synergistic way. It uses causal strengths to encode causalities between variables, and between variables and semantic labels. 
Through a landmark-recognition case study, we show that EXTENT can provide high-quality annotation, substantially better than any traditional unimodal methods.\",\"PeriodicalId\":346313,\"journal\":{\"name\":\"Computer Vision meets Databases\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-06-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Vision meets Databases\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/1160939.1160945\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Vision meets Databases","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1160939.1160945","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
EXTENT: fusing context, content, and semantic ontology for photo annotation
This architecture paper presents EXTENT, a probabilistic framework that uses influence diagrams to fuse metadata of multiple modalities for photo annotation. EXTENT fuses contextual information (location, time, and camera parameters), photo content (perceptual features), and semantic ontology in a synergistic way. It uses causal strengths to encode causalities between variables, and between variables and semantic labels. Through a landmark-recognition case study, we show that EXTENT can provide high-quality annotation, substantially better than any traditional unimodal method.
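The abstract does not spell out the fusion mechanics, but the core idea of weighting evidence from multiple modalities (context, content, ontology) by causal strength can be illustrated with a toy sketch. All labels, scores, weights, and the naive log-linear fusion rule below are illustrative assumptions, not the paper's actual influence-diagram model:

```python
import math

# Hypothetical per-modality evidence for each candidate landmark label:
# context (location/time/camera), content (perceptual features),
# ontology (semantic prior). Scores are illustrative likelihoods in (0, 1].
evidence = {
    "Golden Gate Bridge": {"context": 0.9, "content": 0.7, "ontology": 0.8},
    "Bay Bridge":         {"context": 0.6, "content": 0.5, "ontology": 0.8},
    "Eiffel Tower":       {"context": 0.1, "content": 0.6, "ontology": 0.4},
}

# Assumed "causal strengths": how strongly each modality influences the label.
causal_strength = {"context": 1.0, "content": 0.8, "ontology": 0.5}

def fuse(scores):
    """Log-linear fusion: sum of causal-strength-weighted log-likelihoods."""
    return sum(causal_strength[m] * math.log(s) for m, s in scores.items())

def annotate(evidence):
    """Return candidate labels ranked by fused score, best first."""
    return sorted(evidence, key=lambda label: fuse(evidence[label]), reverse=True)

print(annotate(evidence))
# → ['Golden Gate Bridge', 'Bay Bridge', 'Eiffel Tower']
```

In this sketch the strongly weighted contextual evidence dominates, so the geographically consistent label ranks first even though the content scores alone are ambiguous, mirroring the paper's claim that fused multimodal annotation outperforms any single modality.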