BNoteHelper: A Note-Based Outline Generation Tool for Structured Learning on Video Sharing Platforms

IF 2.6 · CAS Zone 4 (Computer Science) · JCR Q2 (Computer Science, Information Systems)
Fangyu Yu, Peng Zhang, Xianghua Ding, Tun Lu, Ning Gu
DOI: 10.1145/3638775
Journal: ACM Transactions on the Web
Publication date: 2023-12-27 (Journal Article)
Open access: No
Citations: 0

Abstract


Usually generated by ordinary users and rarely designed specifically for learning, the videos on video sharing platforms are mostly not structured enough to support learning, although they are increasingly used for that purpose. Most existing studies attempt to structure videos using video summarization techniques. However, these methods focus on extracting information from within the video itself. In this paper, we design and implement BNoteHelper, a note-based video outline prototype that generates outline titles from user-generated notes on Bilibili, using a BART model fine-tuned on a purpose-built dataset. As a browser plugin, BNoteHelper provides users with a video overview, navigation, and a note-taking template via two main features: an outline table and navigation markers. The model and prototype are evaluated through automatic and human evaluations. The automatic evaluation reveals that, both before and after fine-tuning, the BART model outperforms T5-Pegasus on the BLEU and Perplexity metrics. User feedback further reveals that users prefer the outline generated from notes over one generated from video captions, because it is more concise, clear, and accurate, although it is sometimes too general, with less detail and diversity. The two features of the video outline are also found to have complementary advantages, in holistic and fine-grained aspects respectively. Based on these results, we propose insights into designing video summaries from a user-generated-content perspective, customizing them by video type, and strengthening the advantages of their different visual styles on video sharing platforms.
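The automatic evaluation compares models on BLEU and Perplexity. As a reference for what those metrics measure, here is a minimal, self-contained sketch of unsmoothed sentence-level BLEU (clipped n-gram precision with a brevity penalty) and perplexity from per-token log-probabilities. This is an illustrative implementation, not the authors' evaluation script, which is not described in the abstract.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def sentence_bleu(reference, candidate, max_n=4):
    """Unsmoothed sentence-level BLEU with uniform n-gram weights.

    `reference` and `candidate` are token lists. Returns 0.0 if any
    n-gram order has no overlap (standard unsmoothed behaviour).
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clipped matches: a candidate n-gram counts at most as many
        # times as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(log_avg)


def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-likelihood per token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))
```

Higher BLEU against reference titles and lower perplexity both indicate better generations, which is the direction of the BART-vs-T5-Pegasus comparison reported above.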

Source journal: ACM Transactions on the Web (Engineering/Technology – Computer Science: Software Engineering)
CiteScore: 4.90
Self-citation rate: 0.00%
Articles per year: 26
Review time: 7.5 months
Journal description: Transactions on the Web (TWEB) is a journal publishing refereed articles reporting the results of research on Web content, applications, use, and related enabling technologies. Topics in the scope of TWEB include but are not limited to the following: Browsers and Web Interfaces; Electronic Commerce; Electronic Publishing; Hypertext and Hypermedia; Semantic Web; Web Engineering; Web Services; Service-Oriented Computing; and XML. In addition, papers addressing the intersection of the following broader technologies with the Web are also in scope: Accessibility; Business Services; Education; Knowledge Management and Representation; Mobility and Pervasive Computing; Performance and Scalability; Recommender Systems; Searching, Indexing, Classification, Retrieval, and Querying; Data Mining and Analysis; Security and Privacy; and User Interfaces. Papers discussing specific Web technologies, applications, and content generation, management, and use are within scope. Papers describing novel applications of the Web, as well as papers on the underlying technologies, are also welcome.