An adaptive correlation based video data mining using machine learning

K. Lingam, V. Reddy
{"title":"An adaptive correlation based video data mining using machine learning","authors":"K. Lingam, V. Reddy","doi":"10.35940/ijrte.d5437.118419","DOIUrl":null,"url":null,"abstract":"With the immense growth in the multimedia contents for education and other purposes, the availability of the video contents has also increased Nevertheless, the retrieval of the content is always a challenge. The identification of two video contents based on internal content similarity highly depends on extraction of key frames and that makes the process highly time complex. In the recent time, many of research attempts have tried to approach this problem with the intention to reduce the time complexity using various methods such as video to text conversion and further analysing both extracted text similarity analysis. Regardless to mention, this strategy is again language dependent and criticised for various reasons like local language dependencies and language paraphrase dependencies. Henceforth, this work approaches the problem with a different dimension with reduction possibilities of the video key frames using adaptive similarity. The proposed method analyses the key frames extracted from the library content and from the search video data based on various parameters and reduces the key frames using adaptive similarity. Also, this work uses machine learning and parallel programming algorithms to reduce the time complexity to a greater extend. The final outcome of this work is a reduced time complex algorithm for video data-based search to video content retrieval. The work demonstrates a nearly 50% reduction in the key frame without losing information with nearly 70% reduction in time complexity and 100% accuracy on search results.","PeriodicalId":210048,"journal":{"name":"Int. J. Knowl. Based Intell. Eng. Syst.","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Knowl. Based Intell. Eng. Syst.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.35940/ijrte.d5437.118419","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

With the immense growth in multimedia content for education and other purposes, the availability of video content has also increased. Nevertheless, retrieval of that content remains a challenge. Identifying whether two videos match based on internal content similarity depends heavily on the extraction of key frames, which makes the process highly time-complex. In recent years, many research attempts have tried to reduce this time complexity using methods such as video-to-text conversion followed by similarity analysis of the extracted text. Needless to say, this strategy is language dependent and has been criticised for reasons such as local-language and paraphrase dependencies. Hence, this work approaches the problem from a different dimension: reducing the video key frames using adaptive similarity. The proposed method analyses the key frames extracted from the library content and from the search video based on various parameters and reduces them using adaptive similarity. In addition, the work uses machine learning and parallel programming to reduce the time complexity to a great extent. The final outcome is an algorithm with reduced time complexity for video-to-video content retrieval. The work demonstrates a nearly 50% reduction in key frames without loss of information, a nearly 70% reduction in time complexity, and 100% accuracy on the search results.
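The abstract does not spell out the concrete reduction rule, so the following Python sketch is an illustration only, not the authors' published method: it thins key frames with an adaptive correlation threshold. The grayscale-histogram signature, the 0.95 base threshold, and the running-mean adaptation are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of correlation-based key frame reduction: a frame is
# discarded when its histogram correlation with the last kept key frame
# exceeds an adaptive threshold. Not the paper's exact algorithm.
import cv2
import numpy as np

def frame_histogram(frame, bins=64):
    """Grayscale intensity histogram used as a cheap frame signature."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [bins], [0, 256])
    return cv2.normalize(hist, hist).flatten()

def reduce_key_frames(video_path, base_threshold=0.95):
    """Keep a frame only when it is sufficiently dissimilar (by histogram
    correlation) from the previously kept key frame; the cut-off adapts
    toward the running mean of observed similarities."""
    cap = cv2.VideoCapture(video_path)
    kept, last_hist, similarities = [], None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = frame_histogram(frame)
        if last_hist is None:
            kept.append(frame)
            last_hist = hist
            continue
        sim = cv2.compareHist(last_hist, hist, cv2.HISTCMP_CORREL)
        similarities.append(sim)
        # Adaptive cut-off: blend the fixed base threshold with the running mean.
        threshold = 0.5 * base_threshold + 0.5 * float(np.mean(similarities))
        if sim < threshold:  # dissimilar enough -> treat as a new key frame
            kept.append(frame)
            last_hist = hist
    cap.release()
    return kept
```

A caller would simply pass a video path, e.g. `reduce_key_frames("library_clip.mp4")`, and match the returned key frames against those of the query video; the per-frame work is independent once the signatures exist, which is where the parallel processing mentioned in the abstract could plausibly be applied.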