Intelligent Hypertext for Video Selection: A Design Approach

S. Purucker, Claus Atzenbeck, Daniel Roßner
DOI: 10.1145/3345509.3349279
Published in: Proceedings of the 2nd International Workshop on Human Factors in Hypertext
Publication date: 2019-09-12
Citations: 3

Abstract

In this paper, we describe our project DemoMedia, a software demonstrator that combines hypertext and recommender functionality in the context of video acquisition and use. DemoMedia fills a gap in today's video platforms, which offer recommender functionality but only trivial support for users to structure information. Users are therefore forced to write down notes from or about videos (needed for various reasons) on additional media, such as paper. This opens a media gap between the video platform and note-taking or communication with others. DemoMedia becomes a note-taking and communication tool for the user, as it offers a knowledge space on which users can freely arrange and associate information. Furthermore, its intelligent parsers compute relations that are implicitly expressed and query knowledge bases for relevant information or related videos. These results are positioned on the space in a semantically meaningful way. DemoMedia and the underlying component-based open hypermedia system Mother combine the machine's capability of extracting knowledge from huge amounts of data with the human capability of sensemaking, intuition, and creativity.
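The core idea of a spatial knowledge space whose implicit structure is computed by a parser can be sketched minimally. The class and parameter names below (`KnowledgeSpace`, `infer_relations`, the `proximity` threshold) are illustrative inventions, not the actual DemoMedia/Mother API; this is only a sketch of the technique of deriving associations from how a user spatially arranges items:

```python
from dataclasses import dataclass, field
from math import hypot

@dataclass
class Node:
    """A note or video reference the user has placed on the space."""
    label: str
    x: float
    y: float

@dataclass
class KnowledgeSpace:
    """Hypothetical 2D knowledge space; names are illustrative, not the paper's API."""
    nodes: list = field(default_factory=list)
    proximity: float = 50.0  # distance below which two items count as associated

    def add(self, label: str, x: float, y: float) -> None:
        self.nodes.append(Node(label, x, y))

    def infer_relations(self) -> list:
        """Derive implicit associations from spatial closeness,
        mimicking how a spatial parser might read user-arranged notes."""
        pairs = []
        for i, a in enumerate(self.nodes):
            for b in self.nodes[i + 1:]:
                if hypot(a.x - b.x, a.y - b.y) <= self.proximity:
                    pairs.append((a.label, b.label))
        return pairs

space = KnowledgeSpace()
space.add("video: hypertext lecture", 10, 10)
space.add("note: compare with recommender UIs", 30, 20)
space.add("video: unrelated clip", 400, 300)
print(space.infer_relations())
# → [('video: hypertext lecture', 'note: compare with recommender UIs')]
```

In a full system, each inferred pair would then seed a query against a knowledge base or recommender, and the returned items would be placed back onto the space near the nodes that triggered them.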