{"title":"支持相似度检索的统一视频检索系统","authors":"M. Yoon, Yongik Yoon, Kio-Chung Kim","doi":"10.1109/DEXA.1999.795298","DOIUrl":null,"url":null,"abstract":"We present the unified video retrieval system (UVRS) which provides the content-based query integrating feature-based queries and annotation-based queries of indefinite formed and high-volume video data. It also supports approximate query results by using query reformulation in case the result of the query does not exist. The UVRS divides video into video documents, sequences, scenes and objects, and involves the three layered object-oriented metadata model (TOMM) to model metadata. TOMM is composed of a raw-data layer for a physical video stream, a metadata layer to support annotation-based retrieval, feature-based retrieval, and similarity retrieval and a semantic layer to reform the query. Based on this model, we present a video query language which makes possible annotation-based queries, feature-based queries based on color, spatial, temporal and spatio-temporal correlation and similar queries, and consider a video query processor (VQP). For similarity queries on a given scene or object, we present a formula expressing the degree of similarity based on color, spatial, and temporal order. If there is no query result, then it will be carry out a query reformulation process which finds possible attributes to relax the query and automatically reforms the query by using knowledge from the semantic layer. We carry out performance evaluation of similarity using recall and precision.","PeriodicalId":276867,"journal":{"name":"Proceedings. Tenth International Workshop on Database and Expert Systems Applications. DEXA 99","volume":"47 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1999-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Unified video retrieval system supporting similarity retrieval\",\"authors\":\"M. Yoon, Yongik Yoon, Kio-Chung Kim\",\"doi\":\"10.1109/DEXA.1999.795298\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present the unified video retrieval system (UVRS) which provides the content-based query integrating feature-based queries and annotation-based queries of indefinite formed and high-volume video data. It also supports approximate query results by using query reformulation in case the result of the query does not exist. The UVRS divides video into video documents, sequences, scenes and objects, and involves the three layered object-oriented metadata model (TOMM) to model metadata. TOMM is composed of a raw-data layer for a physical video stream, a metadata layer to support annotation-based retrieval, feature-based retrieval, and similarity retrieval and a semantic layer to reform the query. Based on this model, we present a video query language which makes possible annotation-based queries, feature-based queries based on color, spatial, temporal and spatio-temporal correlation and similar queries, and consider a video query processor (VQP). For similarity queries on a given scene or object, we present a formula expressing the degree of similarity based on color, spatial, and temporal order. If there is no query result, then it will be carry out a query reformulation process which finds possible attributes to relax the query and automatically reforms the query by using knowledge from the semantic layer. 
We carry out performance evaluation of similarity using recall and precision.\",\"PeriodicalId\":276867,\"journal\":{\"name\":\"Proceedings. Tenth International Workshop on Database and Expert Systems Applications. DEXA 99\",\"volume\":\"47 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1999-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. Tenth International Workshop on Database and Expert Systems Applications. DEXA 99\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DEXA.1999.795298\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. Tenth International Workshop on Database and Expert Systems Applications. DEXA 99","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEXA.1999.795298","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Unified video retrieval system supporting similarity retrieval
We present the unified video retrieval system (UVRS), which provides content-based querying that integrates feature-based and annotation-based queries over unstructured, high-volume video data. When a query returns no exact result, the system supports approximate results through query reformulation. The UVRS segments video into video documents, sequences, scenes, and objects, and uses the three-layered object-oriented metadata model (TOMM) to model metadata. TOMM consists of a raw-data layer for the physical video stream; a metadata layer supporting annotation-based, feature-based, and similarity retrieval; and a semantic layer used to reformulate queries. Based on this model, we present a video query language that supports annotation-based queries, feature-based queries over color, spatial, temporal, and spatio-temporal correlations, and similarity queries, and we describe a video query processor (VQP). For similarity queries on a given scene or object, we present a formula expressing the degree of similarity based on color, spatial layout, and temporal order. If a query yields no result, the VQP carries out a query reformulation process that identifies attributes that can be relaxed and automatically reformulates the query using knowledge from the semantic layer. We evaluate the performance of similarity retrieval using recall and precision.
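The abstract names TOMM's three layers but does not detail their internal structure. Below is a minimal Python sketch of one plausible reading; every field name here is invented for illustration and is not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RawDataLayer:
    # Physical stream, segmented into video documents, sequences, scenes, objects.
    stream_path: str
    scene_offsets: List[int] = field(default_factory=list)  # frame index of each scene start

@dataclass
class MetadataLayer:
    # Backs annotation-based, feature-based, and similarity retrieval.
    annotations: Dict[str, str] = field(default_factory=dict)           # scene id -> free text
    color_histograms: Dict[str, List[float]] = field(default_factory=dict)
    spatial_layout: Dict[str, List[str]] = field(default_factory=dict)  # scene id -> object labels
    temporal_order: Dict[str, List[str]] = field(default_factory=dict)  # scene id -> shot order

@dataclass
class SemanticLayer:
    # Knowledge the query processor consults when relaxing a failed query.
    relaxation_rules: Dict[str, List[str]] = field(default_factory=dict)  # attribute -> broader values
```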
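The similarity formula itself is not reproduced in the abstract, so the following sketch is an assumption: a weighted linear combination of color, spatial, and temporal-order scores, using histogram intersection, Jaccard overlap, and pairwise order agreement as stand-in component measures.

```python
def histogram_intersection(h1, h2):
    # Color score: normalized histogram intersection, 1.0 for identical histograms.
    return sum(min(a, b) for a, b in zip(h1, h2)) / (sum(h1) or 1.0)

def jaccard(labels1, labels2):
    # Spatial score (proxy): overlap of the object labels present in each scene.
    s1, s2 = set(labels1), set(labels2)
    return len(s1 & s2) / len(s1 | s2) if (s1 | s2) else 1.0

def order_agreement(seq1, seq2):
    # Temporal score: fraction of shared-item pairs that keep their relative order.
    common = [x for x in seq1 if x in seq2]
    pairs = [(a, b) for i, a in enumerate(common) for b in common[i + 1:]]
    if not pairs:
        return 1.0
    kept = sum(1 for a, b in pairs if seq2.index(a) < seq2.index(b))
    return kept / len(pairs)

def scene_similarity(q, c, weights=(0.4, 0.3, 0.3)):
    # Hypothetical weighted combination; the paper's actual weights are unknown.
    wc, ws, wt = weights
    return (wc * histogram_intersection(q["color"], c["color"])
            + ws * jaccard(q["objects"], c["objects"])
            + wt * order_agreement(q["order"], c["order"]))

query = {"color": [0.5, 0.3, 0.2], "objects": ["ball", "player"], "order": ["a", "b", "c"]}
scene = {"color": [0.4, 0.4, 0.2], "objects": ["ball", "goal"], "order": ["a", "c", "b"]}
print(scene_similarity(query, scene))  # 0.66: 0.4*0.9 + 0.3*(1/3) + 0.3*(2/3)
```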
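Query reformulation is likewise described only at a high level. A hedged sketch follows, assuming the semantic layer supplies per-attribute relaxation rules that the query processor tries one at a time until something matches; the toy catalogue and matcher stand in for the VQP's execution engine.

```python
def run_with_reformulation(query, execute, relaxation_rules):
    # Try the exact query first; on an empty result, relax one attribute at a
    # time using semantic-layer rules and retry.
    results = execute(query)
    if results:
        return results, query
    for attr, broaden in relaxation_rules.items():
        if attr in query:
            relaxed = {**query, attr: broaden(query[attr])}
            results = execute(relaxed)
            if results:
                return results, relaxed
    return [], query

scenes = [{"id": "s1", "color": "crimson", "object": "car"},
          {"id": "s2", "color": "blue", "object": "car"}]

def execute(q):
    # Exact-match semantics; the sentinel "any" matches every value.
    return [s for s in scenes
            if all(v == "any" or s.get(k) == v for k, v in q.items())]

rules = {"color": lambda v: "any"}  # relax an over-specific color constraint
print(run_with_reformulation({"color": "red", "object": "car"}, execute, rules))
# -> ([{'id': 's1', ...}, {'id': 's2', ...}], {'color': 'any', 'object': 'car'})
```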
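Recall and precision, in contrast, have standard definitions, so the evaluation step can be made concrete; the scene identifiers below are made up.

```python
def precision_recall(retrieved, relevant):
    # precision = |retrieved ∩ relevant| / |retrieved|
    # recall    = |retrieved ∩ relevant| / |relevant|
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

print(precision_recall({"s1", "s2", "s3"}, {"s1", "s4"}))  # (0.333..., 0.5)
```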