Celso A. S. Santos, Almerindo N. Rehem Neto, T. Tavares
{"title":"使用can框架生成基于视频的应用程序","authors":"Celso A. S. Santos, Almerindo N. Rehem Neto, T. Tavares","doi":"10.1109/LAWEB.2005.45","DOIUrl":null,"url":null,"abstract":"Content annotation is motivated by the enormous quantity of digital data produced daily. Because autonomously understanding video content is an open research problem, annotations usually complement video data with descriptors that provide a synthetic representation of their content. The annotation process generates high-level metadata that are the base for organizing video repositories and later enables content-oriented video access. This paper presents a framework, called CANNOT - Coyote annotation, for supporting video annotation process. Some real applications developed by using proposed framework are also presented.","PeriodicalId":286939,"journal":{"name":"Third Latin American Web Congress (LA-WEB'2005)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Using CANNOT framework to generate video based applications\",\"authors\":\"Celso A. S. Santos, Almerindo N. Rehem Neto, T. Tavares\",\"doi\":\"10.1109/LAWEB.2005.45\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Content annotation is motivated by the enormous quantity of digital data produced daily. Because autonomously understanding video content is an open research problem, annotations usually complement video data with descriptors that provide a synthetic representation of their content. The annotation process generates high-level metadata that are the base for organizing video repositories and later enables content-oriented video access. This paper presents a framework, called CANNOT - Coyote annotation, for supporting video annotation process. Some real applications developed by using proposed framework are also presented.\",\"PeriodicalId\":286939,\"journal\":{\"name\":\"Third Latin American Web Congress (LA-WEB'2005)\",\"volume\":\"62 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-10-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Third Latin American Web Congress (LA-WEB'2005)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/LAWEB.2005.45\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Third Latin American Web Congress (LA-WEB'2005)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/LAWEB.2005.45","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Using CANNOT framework to generate video based applications
Content annotation is motivated by the enormous quantity of digital data produced daily. Because autonomously understanding video content remains an open research problem, annotations usually complement video data with descriptors that provide a synthetic representation of their content. The annotation process generates high-level metadata that form the basis for organizing video repositories and later enable content-oriented video access. This paper presents a framework, called CANNOT (Coyote annotation), for supporting the video annotation process. Some real applications developed using the proposed framework are also presented.
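The abstract does not specify how the high-level metadata are structured. As an illustration only, the following Python sketch shows one plausible shape for such annotations: time-coded segments carrying descriptive keywords, queried for content-oriented access. All names here (Segment, VideoAnnotation, find_segments) are hypothetical and are not taken from the CANNOT framework itself.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    """A time-coded annotation over a portion of a video."""
    start_s: float          # segment start time, in seconds
    end_s: float            # segment end time, in seconds
    descriptors: List[str]  # high-level keywords describing the content

@dataclass
class VideoAnnotation:
    """Metadata record linking a video to its annotated segments."""
    video_uri: str
    segments: List[Segment] = field(default_factory=list)

    def find_segments(self, keyword: str) -> List[Segment]:
        """Content-oriented access: return segments whose descriptors match a keyword."""
        return [s for s in self.segments if keyword in s.descriptors]

# Usage example: annotate a video and retrieve all segments tagged "goal".
annotation = VideoAnnotation(
    video_uri="match_final.mpg",
    segments=[
        Segment(0.0, 45.0, ["kickoff", "first half"]),
        Segment(612.0, 640.0, ["goal", "replay"]),
    ],
)
print(annotation.find_segments("goal"))

Such a record, stored alongside the raw video, is the kind of high-level metadata that can be used to organize a repository and to answer content-oriented queries without reprocessing the video itself.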