Time-ART: a tool for segmenting and annotating multimedia data in early stages of exploratory analysis

Yasuhiro Yamamoto, Atsushi Aoki, K. Nakakoji
DOI: 10.1145/634067.634136
Venue: CHI '01 Extended Abstracts on Human Factors in Computing Systems
Published: 2001-03-31
Citations: 12

Abstract

Time-ART: a tool for segmenting and annotating multimedia data in early stages of exploratory analysis

Time-ART is a tool that helps a user conduct empirical multimedia (video/sound) data analysis as an exploratory, iterative process. Time-ART helps a user (1) identify seemingly interesting parts, (2) annotate them both textually and visually by positioning them in a 2D space, and (3) produce a summary report. The system consists of: Movie/SoundEditor, which segments a part of a movie/sound; ElementSpace, a free 2D space where a user can position segmented parts as objects; TrackListController, which synchronously plays multiple sound/video data; AnnotationEditor, with which a user can textually annotate each positioned object; DocumentViewer, which automatically compiles positioned parts and their annotations in the space; ViewFinder, which provides a 3D view of ElementSpace, allowing a user to use different "depths" as layers to classify positioned objects; and TimeChart, another 3D view of ElementSpace that helps a user understand the location of each segmented part in terms of the original movie/sound.
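The abstract's workflow (clip a segment, position it in a 2D space with a depth layer, annotate it, compile a report) can be pictured as a small data model. The sketch below is purely illustrative: the class and field names are assumptions inspired by the component names in the abstract, not the system's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """A part of a movie/sound, as clipped in Movie/SoundEditor (illustrative)."""
    source: str            # original media file the segment came from
    start: float           # start time in the source, seconds
    end: float             # end time in the source, seconds
    annotation: str = ""   # text attached via AnnotationEditor
    x: float = 0.0         # position in the 2D ElementSpace
    y: float = 0.0
    depth: int = 0         # ViewFinder layer used to classify the object

@dataclass
class ElementSpace:
    """Free 2D space holding positioned segments (illustrative)."""
    segments: list = field(default_factory=list)

    def place(self, seg: Segment, x: float, y: float, depth: int = 0) -> None:
        """Position a segmented part as an object in the space."""
        seg.x, seg.y, seg.depth = x, y, depth
        self.segments.append(seg)

    def summary(self) -> str:
        """DocumentViewer-style report: compile positioned parts and their
        annotations, ordered by layer and then by spatial position."""
        ordered = sorted(self.segments, key=lambda s: (s.depth, s.y, s.x))
        return "\n".join(
            f"[{s.source} {s.start:.1f}-{s.end:.1f}s] {s.annotation}"
            for s in ordered
        )

space = ElementSpace()
space.place(Segment("interview.mov", 12.0, 18.5, "subject hesitates"), x=10, y=20)
space.place(Segment("interview.mov", 40.0, 47.0, "repeats gesture"), x=30, y=5, depth=1)
print(space.summary())
```

Sorting by depth first mirrors the abstract's idea of using ViewFinder layers as classification bins before spatial arrangement within a layer.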