{"title":"高效next在NXP i.MX 8M Mini上的部署","authors":"Abhishek Deokar, Mohamed El-Sharkawy","doi":"10.1109/ICICT58900.2023.00035","DOIUrl":null,"url":null,"abstract":"Convolutional Neural Networks make tasks of computer vision like image classification and object tracking possible. The advances in accelerator hardware have made the progress in neural networks possible. Accelerator hardware is prevalent on desktops and high-end computing systems and may not always be available on low compute devices deployed on the Edge of Internet of Things. The capabilities of neural network need to be ported to hardware that can run without accelerators. Benchmark setting neural networks like EfficientNet are too heavy for deployment on systems with low compute capabilities and can benefit from reduction in their memory footprint and optimized to improve their inference times. To this end we propose the design of EfficientNeXt and demonstrate its inference capabilities with reduced memory footprint (by $\\sim$56%), increased accuracy and reduced inference time (by $\\sim$30%) on an ARM based device.","PeriodicalId":425057,"journal":{"name":"2023 6th International Conference on Information and Computer Technologies (ICICT)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deployment of Proposed EfficientNeXt on NXP i.MX 8M Mini\",\"authors\":\"Abhishek Deokar, Mohamed El-Sharkawy\",\"doi\":\"10.1109/ICICT58900.2023.00035\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Convolutional Neural Networks make tasks of computer vision like image classification and object tracking possible. The advances in accelerator hardware have made the progress in neural networks possible. Accelerator hardware is prevalent on desktops and high-end computing systems and may not always be available on low compute devices deployed on the Edge of Internet of Things. The capabilities of neural network need to be ported to hardware that can run without accelerators. Benchmark setting neural networks like EfficientNet are too heavy for deployment on systems with low compute capabilities and can benefit from reduction in their memory footprint and optimized to improve their inference times. 
To this end we propose the design of EfficientNeXt and demonstrate its inference capabilities with reduced memory footprint (by $\\\\sim$56%), increased accuracy and reduced inference time (by $\\\\sim$30%) on an ARM based device.\",\"PeriodicalId\":425057,\"journal\":{\"name\":\"2023 6th International Conference on Information and Computer Technologies (ICICT)\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 6th International Conference on Information and Computer Technologies (ICICT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICICT58900.2023.00035\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 6th International Conference on Information and Computer Technologies (ICICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICICT58900.2023.00035","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Convolutional Neural Networks make computer vision tasks such as image classification and object tracking possible, and advances in accelerator hardware have driven much of this progress. Accelerator hardware is prevalent on desktops and high-end computing systems, but it is not always available on low-compute devices deployed at the edge of the Internet of Things. The capabilities of neural networks therefore need to be ported to hardware that can run without accelerators. Benchmark-setting neural networks such as EfficientNet are too heavy for deployment on systems with low compute capability; they benefit from a reduced memory footprint and from optimizations that improve inference time. To this end, we propose the design of EfficientNeXt and demonstrate its inference capabilities on an ARM-based device, with a reduced memory footprint (by ~56%), increased accuracy, and a reduced inference time (by ~30%).
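The abstract reports footprint and latency reductions measured on an ARM-based board, but the benchmarking procedure is not reproduced here. The sketch below is only a minimal illustration of how such figures are commonly obtained for a CPU-only deployment, assuming a PyTorch setup with torchvision's EfficientNet-B0 standing in for EfficientNeXt (whose architecture is not given in this abstract) and a 224x224 input; it is not the authors' measurement script.

```python
# Minimal sketch: measuring model footprint and CPU-only inference latency,
# as one might on an accelerator-free ARM board such as the i.MX 8M Mini.
# EfficientNet-B0 is used as a stand-in baseline (assumption); EfficientNeXt
# itself is not publicly packaged in torchvision.
import time
import torch
from torchvision import models

model = models.efficientnet_b0(weights=None)  # baseline stand-in
model.eval()

# Rough memory footprint: parameter and buffer bytes of the float32 model.
param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
buffer_bytes = sum(b.numel() * b.element_size() for b in model.buffers())
print(f"Model footprint: {(param_bytes + buffer_bytes) / 1e6:.1f} MB")

# Average single-image inference latency on the CPU (no accelerator assumed).
x = torch.randn(1, 3, 224, 224)
with torch.inference_mode():
    for _ in range(5):            # warm-up iterations
        model(x)
    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    elapsed = time.perf_counter() - start
print(f"Mean inference time: {1000 * elapsed / runs:.1f} ms")
```

Running the same loop against both the baseline and a lighter variant on the target board is one straightforward way to quantify the kind of footprint and latency gap the abstract describes.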