D. Ammous, Achraf Chabbouh, Awatef Edhib, A. Chaari, F. Kammoun, N. Masmoudi
{"title":"改进的YOLOv3-tiny轮廓检测使用正则化技术","authors":"D. Ammous, Achraf Chabbouh, Awatef Edhib, A. Chaari, F. Kammoun, N. Masmoudi","doi":"10.34028/iajit/20/2/14","DOIUrl":null,"url":null,"abstract":"Although recent advances in Deep Learning (DL) algorithms have been developed in many Computer Vision (CV) tasks with a high accuracy level, detecting humans in video streams is still a challenging problem. Several studies have, therefore, focused on the regularisation techniques to prevent the overfitting problem which is one of the most fundamental issues in the Machine Learning (ML) area. Likewise, this paper thoroughly examines these techniques, suggesting an improved You Only Look Once (YOLO)v3-tiny based on a modified neural network and an adjusted hyperparameters file configuration. The obtained experimental results, which are validated on two experimental tests, show that the proposed method is more effective than the YOLOv3-tiny predecessor model . The first test which includes only the data augmentation techniques indicates that the proposed approach reaches higher accuracy rates than the original YOLOv3-tiny model. Indeed, Visual Object Classes (VOC) test dataset accuracy rate increases by 32.54 % compared to the initial model. The second test which combines the three tasks reveals that the adopted combined method wins a gain over the existing model. For instance, the labelled crowd_human test dataset accuracy percentage rises by 22.7 % compared to the data augmentation model.","PeriodicalId":13624,"journal":{"name":"Int. Arab J. Inf. Technol.","volume":"10 1","pages":"270-281"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Improved YOLOv3-tiny for silhouette detection using regularisation techniques\",\"authors\":\"D. Ammous, Achraf Chabbouh, Awatef Edhib, A. Chaari, F. Kammoun, N. Masmoudi\",\"doi\":\"10.34028/iajit/20/2/14\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Although recent advances in Deep Learning (DL) algorithms have been developed in many Computer Vision (CV) tasks with a high accuracy level, detecting humans in video streams is still a challenging problem. Several studies have, therefore, focused on the regularisation techniques to prevent the overfitting problem which is one of the most fundamental issues in the Machine Learning (ML) area. Likewise, this paper thoroughly examines these techniques, suggesting an improved You Only Look Once (YOLO)v3-tiny based on a modified neural network and an adjusted hyperparameters file configuration. The obtained experimental results, which are validated on two experimental tests, show that the proposed method is more effective than the YOLOv3-tiny predecessor model . The first test which includes only the data augmentation techniques indicates that the proposed approach reaches higher accuracy rates than the original YOLOv3-tiny model. Indeed, Visual Object Classes (VOC) test dataset accuracy rate increases by 32.54 % compared to the initial model. The second test which combines the three tasks reveals that the adopted combined method wins a gain over the existing model. For instance, the labelled crowd_human test dataset accuracy percentage rises by 22.7 % compared to the data augmentation model.\",\"PeriodicalId\":13624,\"journal\":{\"name\":\"Int. Arab J. Inf. 
Technol.\",\"volume\":\"10 1\",\"pages\":\"270-281\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Int. Arab J. Inf. Technol.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.34028/iajit/20/2/14\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. Arab J. Inf. Technol.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.34028/iajit/20/2/14","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Although recent Deep Learning (DL) algorithms have reached high accuracy levels in many Computer Vision (CV) tasks, detecting humans in video streams is still a challenging problem. Several studies have therefore focused on regularisation techniques to prevent overfitting, one of the most fundamental issues in the Machine Learning (ML) area. This paper thoroughly examines these techniques and proposes an improved You Only Look Once (YOLO)v3-tiny based on a modified neural network and an adjusted hyperparameter file configuration. The experimental results, validated on two tests, show that the proposed method is more effective than the original YOLOv3-tiny model. The first test, which applies only the data augmentation techniques, indicates that the proposed approach reaches higher accuracy rates than the original YOLOv3-tiny model: accuracy on the Visual Object Classes (VOC) test dataset increases by 32.54% compared with the initial model. The second test, which combines the three tasks, reveals that the combined method outperforms the existing model; for instance, accuracy on the labelled crowd_human test dataset rises by 22.7% compared with the data augmentation model.
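The abstract does not include code, but a minimal sketch can illustrate the data-augmentation side of the regularisation it describes. The snippet below is an illustrative assumption, not the authors' implementation: the function augment_image, its jitter range, and the 416x416 input size are hypothetical choices, whereas the paper's actual method modifies the YOLOv3-tiny network and its hyperparameter configuration file.

```python
import numpy as np

def augment_image(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply simple, label-preserving augmentations to an HxWxC uint8 image.

    Illustrative sketch only: a random horizontal flip and a brightness jitter,
    two common augmentation-based regularisation steps for object detectors.
    """
    out = image.astype(np.float32)

    # Random horizontal flip (bounding-box x-coordinates would need mirroring too).
    if rng.random() < 0.5:
        out = out[:, ::-1, :]

    # Random brightness jitter in the range [-25%, +25%] (hypothetical range).
    out *= rng.uniform(0.75, 1.25)

    return np.clip(out, 0, 255).astype(np.uint8)

# Usage example on a dummy 416x416 RGB image (YOLO-style input size, assumed here).
rng = np.random.default_rng(0)
dummy = rng.integers(0, 256, size=(416, 416, 3), dtype=np.uint8)
print(augment_image(dummy, rng).shape)  # (416, 416, 3)
```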