{"title":"位置感知快速神经网络(LFNN):一种基于位置感知的实时视频查询神经网络加速框架","authors":"Xiaotian Ma, Jiaqi Tang, Y. Bai","doi":"10.1109/ISQED57927.2023.10129395","DOIUrl":null,"url":null,"abstract":"As deep neural networks have continuously advanced computer version tasks, researchers from academia and industry focus on developing a powerful deep neural network model to process volumes of data. With the increasing size of DNN models, their inference process is computationally expensive and limits the employment of DNNs in real-time applications.In response, we present the proposed Locality-sensing Fast Neural Network (LFNN), a generalized framework for accelerating the querying videos process via locality sensing to reduce the cost of DNN in video evaluation by three times saving in inference time. The LFNN framework can automatically sense the similarity between two input frames via a defined locality from a given input video. The LFNN framework enables us to process the input videos within the specialized processing method that is far less computationally expensive than conventional DNN inference that conducts detection for each frame. Within the highlighted temporal locality information across frames, the Yolov5 algorithm can be accelerated by two to three times. Experimental results show that the proposed LFNN is easily implemented on the FPGA board with neglectable extra hardware costs.","PeriodicalId":315053,"journal":{"name":"2023 24th International Symposium on Quality Electronic Design (ISQED)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Locality-sensing Fast Neural Network (LFNN): An Efficient Neural Network Acceleration Framework via Locality Sensing for Real-time Videos Queries\",\"authors\":\"Xiaotian Ma, Jiaqi Tang, Y. Bai\",\"doi\":\"10.1109/ISQED57927.2023.10129395\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As deep neural networks have continuously advanced computer version tasks, researchers from academia and industry focus on developing a powerful deep neural network model to process volumes of data. With the increasing size of DNN models, their inference process is computationally expensive and limits the employment of DNNs in real-time applications.In response, we present the proposed Locality-sensing Fast Neural Network (LFNN), a generalized framework for accelerating the querying videos process via locality sensing to reduce the cost of DNN in video evaluation by three times saving in inference time. The LFNN framework can automatically sense the similarity between two input frames via a defined locality from a given input video. The LFNN framework enables us to process the input videos within the specialized processing method that is far less computationally expensive than conventional DNN inference that conducts detection for each frame. Within the highlighted temporal locality information across frames, the Yolov5 algorithm can be accelerated by two to three times. 
Experimental results show that the proposed LFNN is easily implemented on the FPGA board with neglectable extra hardware costs.\",\"PeriodicalId\":315053,\"journal\":{\"name\":\"2023 24th International Symposium on Quality Electronic Design (ISQED)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 24th International Symposium on Quality Electronic Design (ISQED)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISQED57927.2023.10129395\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 24th International Symposium on Quality Electronic Design (ISQED)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISQED57927.2023.10129395","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Locality-sensing Fast Neural Network (LFNN): An Efficient Neural Network Acceleration Framework via Locality Sensing for Real-time Videos Queries
As deep neural networks have continuously advanced computer vision tasks, researchers in academia and industry have focused on developing ever more powerful DNN models to process large volumes of data. With the growing size of these models, inference has become computationally expensive, which limits the use of DNNs in real-time applications. In response, we present the Locality-sensing Fast Neural Network (LFNN), a generalized framework that accelerates video query processing via locality sensing, reducing DNN inference time on video evaluation by roughly a factor of three. The LFNN framework automatically senses the similarity between two input frames of a given video according to a defined locality. This allows input videos to be processed with a specialized method that is far less computationally expensive than conventional DNN inference, which runs detection on every frame. By exploiting temporal locality across frames, the YOLOv5 algorithm can be accelerated by two to three times. Experimental results show that the proposed LFNN is easily implemented on an FPGA board with negligible extra hardware cost.
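The core idea, sensing frame-to-frame similarity and reusing detections whenever a new frame falls within the defined locality of the last detected frame, can be sketched as follows. This is a minimal illustration only: the mean-absolute-difference metric, the threshold value, and the run_detector callable (standing in for a YOLOv5 model) are assumptions for the sketch, not the paper's actual locality definition or FPGA implementation.

    # Minimal sketch of locality-sensing frame skipping.
    # Assumptions (not from the paper): a mean-absolute-difference locality
    # metric, an arbitrary threshold, and a generic run_detector(frame) callable.
    import numpy as np

    def frame_locality(prev_frame: np.ndarray, frame: np.ndarray) -> float:
        """Mean absolute pixel difference between two frames (lower = more similar)."""
        diff = frame.astype(np.int16) - prev_frame.astype(np.int16)
        return float(np.mean(np.abs(diff)))

    def query_video(frames, run_detector, locality_threshold=8.0):
        """Run the detector only when a frame leaves the locality of the last detected frame."""
        results = []
        ref_frame, ref_dets = None, None
        for frame in frames:
            if ref_frame is None or frame_locality(ref_frame, frame) > locality_threshold:
                ref_dets = run_detector(frame)   # expensive DNN inference
                ref_frame = frame                # new reference frame
            results.append(ref_dets)             # reuse detections for similar frames
        return results

On video streams where consecutive frames are often near-identical, such a scheme skips a large fraction of detector invocations, which is consistent with the two-to-three-times speedup the abstract reports for YOLOv5.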