{"title":"NLP-Enriched Automatic Video Segmentation","authors":"Mohannad AlMousa, R. Benlamri, R. Khoury","doi":"10.1109/ICMCS.2018.8525880","DOIUrl":null,"url":null,"abstract":"E-learning environments are heavily dependent on videos as the main media to deliver lectures to learners. Despite the merits of video-based lectures, new challenges can paralyze the learning process. Challenges that deal with video content accessibility, such as searching, retrieving, explaining, matching, organizing, and even summarizing these contents, significantly limit the potential of video-based learning. In this paper, we propose a novel approach to segment video lectures and integrate Natural Language Processing (NLP) tasks to extract key linguistic features exist within the video. We exploit the benefits of visual, audio, and textual features in order to create comprehensive temporal feature vectors for the enhanced segmented video. Afterwards, we apply an NLP cosine similarity to the cluster and identify the various topics presented in the video. The final product would be an indexed, vector-based searchable video segments of a specific topic/subtopic","PeriodicalId":272255,"journal":{"name":"2018 6th International Conference on Multimedia Computing and Systems (ICMCS)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 6th International Conference on Multimedia Computing and Systems (ICMCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMCS.2018.8525880","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
E-learning environments are heavily dependent on videos as the main medium for delivering lectures to learners. Despite the merits of video-based lectures, new challenges can paralyze the learning process. Challenges related to video content accessibility, such as searching, retrieving, explaining, matching, organizing, and even summarizing this content, significantly limit the potential of video-based learning. In this paper, we propose a novel approach to segment video lectures and integrate Natural Language Processing (NLP) tasks to extract key linguistic features that exist within the video. We exploit the benefits of visual, audio, and textual features in order to create comprehensive temporal feature vectors for the enhanced segmented video. Afterwards, we apply NLP-based cosine similarity to cluster the segments and identify the various topics presented in the video. The final product is a set of indexed, vector-based, searchable video segments, each corresponding to a specific topic/subtopic.
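To make the clustering step concrete, the sketch below illustrates one way the cosine-similarity grouping described in the abstract could work. It is a minimal assumption-laden example, not the authors' implementation: it uses only transcript text per segment (the paper combines visual, audio, and textual features), represents each segment with TF-IDF vectors, and the similarity threshold is purely hypothetical.

```python
# Illustrative sketch only: groups adjacent video segments into topics by
# cosine similarity of their transcripts. The paper's actual feature vectors
# also include visual and audio cues; the 0.3 threshold is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def group_segments_by_topic(segment_transcripts, threshold=0.3):
    """Merge consecutive segments whose transcripts are sufficiently similar."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(segment_transcripts)
    sims = cosine_similarity(vectors)

    topics, current = [], [0]
    for i in range(1, len(segment_transcripts)):
        # Start a new topic when similarity to the previous segment drops.
        if sims[i, i - 1] >= threshold:
            current.append(i)
        else:
            topics.append(current)
            current = [i]
    topics.append(current)
    return topics  # list of topics, each a list of segment indices


# Toy usage: four transcript snippets yielding two topic clusters.
segments = [
    "introduction to machine learning and supervised models",
    "supervised learning uses labelled training data",
    "next we discuss neural network architectures",
    "deep neural networks stack many hidden layers",
]
print(group_segments_by_topic(segments))
```

In a full pipeline, the same grouping logic would operate on the combined temporal feature vectors rather than on TF-IDF of the transcript alone, and the resulting topic boundaries would then be indexed for search.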