{"title":"Hardware acceleration of Tiny YOLO deep neural networks for sign language recognition: A comprehensive performance analysis","authors":"Mohita Jaiswal, Abhishek Sharma, Sandeep Saini","doi":"10.1016/j.vlsi.2024.102287","DOIUrl":null,"url":null,"abstract":"<div><div>In this paper, we benchmark two automation frameworks, Vitis AI and FINN, for sign language recognition on a Field Programmable Gate Array (FPGA). We conducted an in-depth exploration of both frameworks using Tiny YOLOv2 networks by varying design parameters such as precision, parallelism ratio, etc. Further, a fair baseline comparison is made based on accuracy, speed, and hardware resources. Experimental findings demonstrate that the Vitis AI outperforms the FINN framework and traditional GPU and CPU platforms by achieving significant improvements of 1.08x, 1.7x, and 2.9x in terms of latency. Leveraging Vitis AI, our system achieved a detection speed of 32.7 frames per second (FPS) on the Kria KV260 FPGA with a power consumption rate of 5.6 W and an impressive mean Average Precision (mAP) score of 61.2% on the Hindi Indian Sign Language (ISL) dataset.</div></div>","PeriodicalId":54973,"journal":{"name":"Integration-The Vlsi Journal","volume":"100 ","pages":"Article 102287"},"PeriodicalIF":2.2000,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Integration-The Vlsi Journal","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167926024001512","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
In this paper, we benchmark two automation frameworks, Vitis AI and FINN, for sign language recognition on a Field Programmable Gate Array (FPGA). We conduct an in-depth exploration of both frameworks using Tiny YOLOv2 networks, varying design parameters such as precision and parallelism ratio, and then make a fair baseline comparison in terms of accuracy, speed, and hardware resources. Experimental findings demonstrate that Vitis AI outperforms the FINN framework and traditional GPU and CPU platforms, achieving latency improvements of 1.08x, 1.7x, and 2.9x, respectively. Leveraging Vitis AI, our system achieves a detection speed of 32.7 frames per second (FPS) on the Kria KV260 FPGA with a power consumption of 5.6 W and a mean Average Precision (mAP) of 61.2% on the Hindi Indian Sign Language (ISL) dataset.
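For context, the following is a minimal sketch of how a latency/FPS benchmark of this kind could be driven on a Vitis AI DPU target such as the Kria KV260, using the Vitis AI Runtime (VART) Python API. The model file name (tiny_yolov2.xmodel), the int8 buffer type, and the single-threaded timing loop are assumptions for illustration, not the authors' actual evaluation code.

# Hypothetical FPS/latency benchmark for a compiled Tiny YOLOv2 .xmodel on a Vitis AI DPU.
# Assumes the Vitis AI runtime packages (xir, vart) are installed on the target board and
# that "tiny_yolov2.xmodel" has already been quantized and compiled for the KV260 DPU.
import time
import numpy as np
import xir
import vart


def get_dpu_subgraph(graph):
    """Return the first subgraph of the compiled model that is mapped to the DPU."""
    root = graph.get_root_subgraph()
    return [
        s for s in root.toposort_child_subgraph()
        if s.has_attr("device") and s.get_attr("device").upper() == "DPU"
    ][0]


graph = xir.Graph.deserialize("tiny_yolov2.xmodel")   # hypothetical model path
runner = vart.Runner.create_runner(get_dpu_subgraph(graph), "run")

in_dims = tuple(runner.get_input_tensors()[0].dims)    # e.g. (1, 416, 416, 3)
out_dims = tuple(runner.get_output_tensors()[0].dims)

# Dummy int8 buffers; a real pipeline would fill these with quantized camera frames.
inp = [np.zeros(in_dims, dtype=np.int8, order="C")]
out = [np.zeros(out_dims, dtype=np.int8, order="C")]

n_frames = 100
start = time.time()
for _ in range(n_frames):
    job_id = runner.execute_async(inp, out)   # submit one inference job to the DPU
    runner.wait(job_id)                       # block until the job completes
elapsed = time.time() - start

print(f"Latency: {1000.0 * elapsed / n_frames:.2f} ms/frame, "
      f"FPS: {n_frames / elapsed:.1f}")

In an end-to-end deployment, CPU-side pre- and post-processing (frame resizing, quantization, YOLO decoding, non-maximum suppression) would run around this loop and typically accounts for much of the gap between raw DPU throughput and the reported end-to-end frame rate.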
About the journal:
Integration's aim is to cover every aspect of the VLSI area, with an emphasis on cross-fertilization between various fields of science, and the design, verification, test and applications of integrated circuits and systems, as well as closely related topics in process and device technologies. Individual issues will feature peer-reviewed tutorials and articles as well as reviews of recent publications. The intended coverage of the journal can be assessed by examining the following (non-exclusive) list of topics:
Specification methods and languages; Analog/Digital Integrated Circuits and Systems; VLSI architectures; Algorithms, methods and tools for modeling, simulation, synthesis and verification of integrated circuits and systems of any complexity; Embedded systems; High-level synthesis for VLSI systems; Logic synthesis and finite automata; Testing, design-for-test and test generation algorithms; Physical design; Formal verification; Algorithms implemented in VLSI systems; Systems engineering; Heterogeneous systems.