Automated Information Extraction in Media Production: Latest Publications

Session details: Media content structuring and indexing
Automated Information Extraction in Media Production. Pub Date: 2011-12-01. DOI: 10.1145/3256249
Authors: A. Messina
Citations: 0
Produce. annotate. archive. repurpose --: accelerating the composition and metadata accumulation of tv content
Automated Information Extraction in Media Production. Pub Date: 2011-12-01. DOI: 10.1145/2072552.2072560
Authors: R. Knauf, Jens Kürsten, Albrecht Kurze, M. Ritter, Arne Berger, Stephan Heinich, Maximilian Eibl
Abstract: We developed a holistic framework that supports most aspects of a media provider's real workflows, such as production, distribution, content description, archiving, and re-use of video items. It addresses issues such as the lack of human resources, the necessity of parallel media distribution, and the retrieval of previously archived content by editors or consumers.
Citations: 14
Sequence-based kernels for online concept detection in video
Automated Information Extraction in Media Production. Pub Date: 2011-12-01. DOI: 10.1145/2072552.2072554
Authors: W. Bailer
Abstract: Kernel methods, e.g. Support Vector Machines, have been successfully applied to classification problems such as concept detection in video. In order to capture concepts and events with longer temporal extent, kernels for sequences of feature vectors have been proposed, e.g. based on temporal pyramid matching or sequence alignment. However, all these approaches need a temporal segmentation of the video, as the kernel is applied to the feature vectors of a segment. In (semi-)supervised training this is not a problem, as the ground truth is annotated on temporal segments. When performing online concept detection on a live video stream, (i) no segmentation exists and (ii) the latency must be kept as low as possible. Re-evaluating the kernel at each temporal position of a sliding window is prohibitive due to the computational effort. We thus propose variants of the temporal pyramid matching, all-subsequences, and longest common subsequence kernels that can be efficiently calculated for a temporal sliding window. An arbitrary kernel function can be plugged in to determine the similarity of feature vectors of individual samples. We evaluate the proposed kernels on the TRECVID 2007 High-level Feature Extraction data set and show that the sliding window variants for online detection perform equally well or better than the segment-based ones, while the runtime is reduced by at least 30%.
Citations: 4
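The abstract above hinges on evaluating a sequence kernel incrementally as the window slides instead of recomputing it from scratch. The following is a minimal sketch of that idea, assuming a much simpler summation ("mean match") kernel rather than the paper's temporal pyramid or subsequence kernels: per-frame base-kernel values against a training segment are cached, so advancing the window only adds one new frame's contribution and drops the oldest one.

```python
# Minimal sketch (not the paper's kernels): incremental sliding-window
# evaluation of a summation kernel between a live stream window and one
# fixed training segment.
from collections import deque
import numpy as np

def rbf(x, y, gamma=0.5):
    """Base kernel on individual frame feature vectors (assumed RBF)."""
    d = x - y
    return float(np.exp(-gamma * np.dot(d, d)))

class SlidingWindowMeanMatchKernel:
    def __init__(self, train_segment, window_size, base_kernel=rbf):
        self.train = list(train_segment)   # feature vectors of one training segment
        self.w = window_size
        self.k = base_kernel
        self.rows = deque()                # cached per-frame sums of base-kernel values
        self.total = 0.0

    def push(self, frame_vec):
        """Add one incoming frame; returns the current kernel value, or None until the window is full."""
        row_sum = sum(self.k(frame_vec, t) for t in self.train)
        self.rows.append(row_sum)
        self.total += row_sum
        if len(self.rows) > self.w:        # slide: drop the oldest frame's contribution
            self.total -= self.rows.popleft()
        if len(self.rows) < self.w:
            return None
        return self.total / (self.w * len(self.train))

# Usage: feed a stream of toy 16-D frame descriptors one by one.
rng = np.random.default_rng(0)
train_segment = rng.normal(size=(25, 16))
swk = SlidingWindowMeanMatchKernel(train_segment, window_size=30)
for frame in rng.normal(size=(200, 16)):
    value = swk.push(frame)
    if value is not None:
        pass  # value could feed a kernel classifier, e.g. an SVM decision function
```

The design point illustrated is the same as in the paper: the base kernel is evaluated once per incoming frame, so the per-window cost is independent of the window length.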
From audio recurrences to TV program structuring
Automated Information Extraction in Media Production. Pub Date: 2011-12-01. DOI: 10.1145/2072552.2072556
Authors: Alina Elma Abduraman, Sid-Ahmed Berrani, J. Rault, Olivier Le Blouch
Abstract: This paper addresses the problem of unsupervised detection of recurrent audio segments in TV programs for the purpose of program structuring. Recurrent segments are the key elements in the process of program structuring, allowing direct, non-linear access to the main parts of a program. This facilitates browsing within a recorded TV program or a program available on a TV-on-demand service. Our work focuses on programs such as entertainment shows, magazines, and news, and proposes an audio recurrence detection method that is applied either to a single episode or to a set of episodes of the same program. Different types of audio descriptors are proposed and evaluated on a 65-hour video dataset corresponding to 112 episodes of TV programs.
Citations: 4
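As a rough illustration of what "unsupervised detection of recurrent audio segments" involves, here is a minimal sketch that scans a frame-level descriptor sequence for off-diagonal runs of high self-similarity (repeated material such as jingles or separators). The descriptors, thresholds, and matching strategy are placeholders, not the ones evaluated in the paper.

```python
# Minimal sketch: detect repeated audio segments via the self-similarity
# matrix of a descriptor sequence.
import numpy as np

def find_recurrences(desc, sim_thresh=0.95, min_len=40, min_gap=200):
    """desc: (n_frames, dim) array of audio descriptors (e.g. MFCC-like vectors).
    Returns (start_a, start_b, length) frame triples where the segment starting
    at start_a recurs at start_b."""
    x = desc / (np.linalg.norm(desc, axis=1, keepdims=True) + 1e-9)
    sim = x @ x.T                                   # cosine self-similarity matrix
    n = len(desc)
    hits = []
    for lag in range(min_gap, n):                   # each diagonal = one repetition lag
        diag = np.diag(sim, k=lag) >= sim_thresh    # frames matching their lag-shifted copy
        start, run = None, 0
        for i, ok in enumerate(np.append(diag, False)):
            if ok:
                start = i if run == 0 else start
                run += 1
            else:
                if run >= min_len:
                    hits.append((start, start + lag, run))
                run = 0
    return hits

# Usage with synthetic descriptors containing one planted repetition.
rng = np.random.default_rng(1)
d = rng.normal(size=(1000, 20))
d[700:760] = d[100:160]                             # repeat a 60-frame segment
print(find_recurrences(d))                          # expected: [(100, 700, 60)]
```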
News story clustering from both what and how aspects: using bag of word model and affinity propagation
Automated Information Extraction in Media Production. Pub Date: 2011-12-01. DOI: 10.1145/2072552.2072555
Authors: W. Chu, Chao-Chin Huang, Wen-Fang Cheng
Abstract: 24-hour news TV channels repeat the same news stories again and again. In this paper we cluster hundreds of news stories broadcast in a day into dozens of clusters according to topic, facilitating efficient browsing and summarization. The proposed system automatically removes commercial breaks, detects anchorpersons, and then determines the boundaries of news stories. Semantic concepts, a bag-of-visual-words model, and a bag-of-trajectories model are used to describe what objects appear in news stories and how they are presented. After measuring similarity between stories with the earth mover's distance, the affinity propagation algorithm is used to cluster stories of the same topic together. The experimental results show that the proposed methods can effectively cluster sophisticated news stories.
Citations: 4
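The clustering stage described above (pairwise story similarities fed to affinity propagation) can be sketched as follows. For simplicity the earth mover's distance used in the paper is replaced here by histogram intersection on bag-of-words histograms; the AffinityPropagation call is standard scikit-learn API, and the toy data are invented.

```python
# Minimal sketch: cluster news stories with affinity propagation on a
# precomputed similarity matrix (histogram intersection stands in for EMD).
import numpy as np
from sklearn.cluster import AffinityPropagation

def histogram_intersection(h1, h2):
    """Similarity of two L1-normalized bag-of-words histograms (1.0 = identical)."""
    return float(np.minimum(h1, h2).sum())

def cluster_stories(histograms):
    """histograms: (n_stories, vocab_size) array of L1-normalized word counts."""
    n = len(histograms)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            sim[i, j] = histogram_intersection(histograms[i], histograms[j])
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    ap.fit(sim)
    return ap.labels_            # one cluster id per news story

# Usage: toy histograms for six stories on two topics.
rng = np.random.default_rng(2)
topic_a, topic_b = rng.dirichlet(np.ones(50)), rng.dirichlet(np.ones(50))
stories = np.array([topic_a, topic_a, topic_a, topic_b, topic_b, topic_b])
stories = stories + rng.uniform(0, 0.01, stories.shape)     # small per-story noise
stories = stories / stories.sum(axis=1, keepdims=True)
print(cluster_stories(stories))                              # e.g. [0 0 0 1 1 1]
```

Affinity propagation is a natural fit here because, unlike k-means, it works directly from a similarity matrix and does not require fixing the number of topics in advance.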
Picture-in-picture copy detection using spatial coding techniques
Automated Information Extraction in Media Production. Pub Date: 2011-12-01. DOI: 10.1145/2072552.2072559
Authors: S. Purushotham, Q. Tian, C.-C. Jay Kuo
Abstract: Picture-in-Picture (PiP) is a special video transformation in which one or more videos are scaled and spatially embedded in a host video. PiP is a very useful service for watching two or more videos simultaneously; however, it can be exploited to visually hide one video inside another. Today's copy detection techniques can be easily fooled by PiP, which is reflected in the poor results in the yearly TRECVID competitions. Inspired by the promise of spatial coding in partial image matching, we propose a generalized spatial coding representation in which both the relative position and the relative orientation are embedded in the spatial code. In this paper, we provide a novel formulation of the spatial verification problem and introduce polynomial and non-polynomial algorithms to address it efficiently. Our initial experimental results on the TRECVID and MSRA datasets show that the proposed spatial verification algorithms provide around 20% improvement over the classical hierarchical bag-of-words approach.
Citations: 4
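To make the idea of a spatial code concrete, here is a minimal sketch of spatial verification on a set of tentative feature matches: for every pair of matches the relative position (left/right, above/below) is encoded in each image, and the match that violates this code most often is discarded iteratively. The paper's generalized code additionally embeds relative orientation, which is omitted here, so treat this as an illustration of the general technique rather than the authors' algorithm.

```python
# Minimal sketch: spatial-coding verification of tentative keypoint matches.
import numpy as np

def spatial_code(points):
    """points: (n, 2) keypoint coordinates. Returns sign maps of shape (n, n)
    where x_map[i, j] = +1 if point j is to the right of point i, else -1 (0 if equal)."""
    dx = points[None, :, 0] - points[:, None, 0]
    dy = points[None, :, 1] - points[:, None, 1]
    return np.sign(dx), np.sign(dy)

def verify_matches(query_pts, ref_pts, max_violation=0):
    """Keep only matches whose relative placement agrees in query and reference image."""
    keep = np.arange(len(query_pts))
    while len(keep) > 1:
        qx, qy = spatial_code(query_pts[keep])
        rx, ry = spatial_code(ref_pts[keep])
        violations = (qx != rx).sum(axis=1) + (qy != ry).sum(axis=1)
        worst = int(np.argmax(violations))
        if violations[worst] <= max_violation:
            break
        keep = np.delete(keep, worst)       # drop the most inconsistent match
    return keep

# Usage: four geometrically consistent matches plus one outlier (index 4).
q = np.array([[10, 10], [50, 12], [30, 40], [70, 60], [20, 80]], float)
r = q + 100.0                                # same layout, translated
r[4] = [5, 5]                                # outlier breaks the spatial code
print(verify_matches(q, r))                  # -> [0 1 2 3]
```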
Session details: 2: Media production and retrieval systems and applications
Automated Information Extraction in Media Production. Pub Date: 2011-12-01. DOI: 10.1145/3256250
Authors: Sid-Ahmed Berrani
Citations: 0
Technologies for next-generation multi-media libraries: the contentus project
Automated Information Extraction in Media Production. Pub Date: 2010-10-29. DOI: 10.1145/1877850.1877852
Authors: A. Heß
Abstract: An ever-growing amount of digitized content, as well as "born-digital" content published online, forces libraries and archives to integrate new data sources and align them with their existing collections. This is a challenging task, since multimedia content is inherently diverse and integrating metadata is often complex due to the different structures, qualities, and reliabilities of the available sources. The CONTENTUS project investigates new solutions for libraries and multimedia archives.

In this talk, I will present our approach towards an integrated solution for libraries, archives, and other content holders that facilitates a seamless transition from raw digital data to a semantic multimedia search environment. We aim to provide a complete set of tools ranging from quality control for digitization and content and metadata integration to access through a semantic multimedia search and browsing interface.

The CONTENTUS multimedia search interface will offer integrated searches for texts, images, audio, and audiovisual content in a unified semantic user interface. Search queries can be narrowed and expanded in an exploratory fashion, search results can be refined by disambiguating entities and topics, and semantic relationships not only become apparent but can be navigated as well.

When the available metadata is not sufficient to describe the content for the purpose of semantic search, CONTENTUS can generate the relevant metadata through a variety of content analysis techniques. These largely automated processing steps identify, among other things, named entities such as persons, places, and organizations in texts, audio transcripts, and audiovisual media. This makes it possible to associate content with matching authority file entries and ultimately to insert it into a growing semantic knowledge network that can then be searched or explored as described above.
Citations: 1
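The named-entity step mentioned in the last paragraph (persons, places, and organizations extracted from texts or transcripts before linking to authority files) could look roughly like the sketch below. spaCy is used here purely as a stand-in for whatever analysis components CONTENTUS actually employs, and the "en_core_web_sm" model must be installed separately.

```python
# Minimal sketch: extract person/place/organization mentions for later
# linking to authority-file entries.
import spacy

LABEL_MAP = {"PERSON": "person", "GPE": "place", "LOC": "place", "ORG": "organization"}

def extract_entities(text):
    nlp = spacy.load("en_core_web_sm")
    doc = nlp(text)
    # Collect (surface form, mapped type) pairs for the entity types of interest.
    return [(ent.text, LABEL_MAP[ent.label_]) for ent in doc.ents if ent.label_ in LABEL_MAP]

transcript = "The German National Library in Frankfurt digitized recordings of Willy Brandt."
print(extract_entities(transcript))
# e.g. [('The German National Library', 'organization'), ('Frankfurt', 'place'), ('Willy Brandt', 'person')]
```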
Automatic news recommendations via profiling
Automated Information Extraction in Media Production. Pub Date: 2010-10-29. DOI: 10.1145/1877850.1877863
Authors: E. Mannens, Sam Coppens, Toon De Pessemier, Hendrik Dacquin, D. V. Deursen, R. Walle
Abstract: Today, people have only limited, valuable leisure time that they want to fill as well as possible according to their own interests, whereas broadcasters want to produce and distribute news items as quickly and in as targeted a way as possible. Developing news stories can be characterised as dynamic, chained, and distributed events, and it is important to aggregate, link, enrich, recommend, and distribute these news event items as precisely as possible to the individual, interested user. In this paper, we show how personalised recommendation and distribution of news events, described using an RDF/OWL representation of the NewsML-G2 standard, can be enabled by automatically categorising and enriching news event metadata via smart indexing and linked open datasets available on the web of data. The recommendations, based on a global, aggregated profile that also takes into account the (dis)likes of peer friends, are finally fed to the user via a personalised RSS feed. The ultimate goal is to provide an open, user-friendly recommendation platform that equips the end user with a tool to access useful news event information that goes beyond basic information retrieval. At the same time, we provide the (inter)national community with standardised mechanisms to describe and distribute news event and profile information.
Citations: 7
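To give a feel for the RDF-based event descriptions the abstract refers to, here is a minimal rdflib sketch of describing a news event and matching it against a user profile. The namespace URI and property names are invented for illustration; the paper's actual RDF/OWL representation of NewsML-G2 and its profiling logic are not reproduced.

```python
# Minimal sketch: describe a news event as RDF and do a naive profile match.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

NEWS = Namespace("http://example.org/newsml-g2#")           # hypothetical namespace

def describe_event(event_id, headline, topics):
    g = Graph()
    event = URIRef(f"http://example.org/events/{event_id}")
    g.add((event, RDF.type, NEWS.NewsEvent))
    g.add((event, NEWS.headline, Literal(headline, lang="en")))
    for topic in topics:
        g.add((event, NEWS.topic, Literal(topic)))           # an enrichment step would link to LOD URIs instead
    return g

def matches_profile(graph, interests):
    """Very naive profile match: does any topic literal appear in the user's interests?"""
    topics = {str(o) for _, _, o in graph.triples((None, NEWS.topic, None))}
    return bool(topics & set(interests))

g = describe_event("e42", "Election results announced", ["politics", "elections"])
print(g.serialize(format="turtle"))
print(matches_profile(g, interests={"sports", "politics"}))  # True
```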
Generic architecture for event detection in broadcast sports video
Automated Information Extraction in Media Production. Pub Date: 2010-10-29. DOI: 10.1145/1877850.1877865
Authors: C. Poppe, S. D. Bruyne, R. Walle
Abstract: An increasing amount of digital sports content is generated and made available through broadcast and the Internet. To deliver meaningful access for end users, summarizations or highlights of the content are necessary; hence, the automatic extraction of these summarizations is a prerequisite for efficient content delivery. In this paper, we present an architecture that allows automatic annotation of broadcast sports video. Sports videos are particularly popular with end users and have characteristics that can be exploited for automated analysis. However, the large variation in such content (e.g., different soccer matches or even different sports) requires a system that is generic or easily adaptable. As such, the focus of this paper is the creation of a generic architecture for automated event detection in sports video. The different aspects of the architecture are explained, and the system is evaluated on different sports sequences.
Citations: 10
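A minimal sketch of what "generic" can mean architecturally: sport-specific detectors share one interface and are registered with a pipeline, so supporting a new sport means adding a detector rather than changing the framework. The detectors and features below are toy placeholders, not the paper's analysis components.

```python
# Minimal sketch: plugin-style event-detection pipeline for sports video.
from dataclasses import dataclass
from typing import Callable, Dict, Iterable, List

@dataclass
class Event:
    kind: str
    frame: int

# A detector maps one frame's features (here a simple dict) to zero or more events.
Detector = Callable[[int, Dict[str, float]], List[Event]]

class EventDetectionPipeline:
    def __init__(self) -> None:
        self.detectors: List[Detector] = []

    def register(self, detector: Detector) -> None:
        self.detectors.append(detector)

    def run(self, frames: Iterable[Dict[str, float]]) -> List[Event]:
        events: List[Event] = []
        for i, features in enumerate(frames):
            for detect in self.detectors:
                events.extend(detect(i, features))
        return events

def soccer_goal_detector(i, f):
    # Toy rule: a sudden audio-energy peak together with a scoreboard change.
    return [Event("goal", i)] if f["audio_energy"] > 0.9 and f["score_changed"] else []

pipeline = EventDetectionPipeline()
pipeline.register(soccer_goal_detector)
stream = [{"audio_energy": 0.2, "score_changed": 0.0},
          {"audio_energy": 0.95, "score_changed": 1.0}]
print(pipeline.run(stream))   # [Event(kind='goal', frame=1)]
```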