Categorization of Crowd Varieties using Deep Concurrent Convolution Neural Network

Gulraiz Khan, Muhammad Ali Farooq, Junaid Hussain, Zeeshan Tariq, Muhammad Usman Ghani Khan

2019 2nd International Conference on Advancements in Computational Sciences (ICACS), February 2019. DOI: 10.23919/ICACS.2019.8689129
Visual understanding of crowd scenes is a challenging and important problem in the computer vision domain. Identification of crowd type is a basic requirement for analyzing crowd scenarios. With advances in deep convolutional neural networks, image recognition problems have become far more tractable. In this paper, we propose a novel architecture (DeepCrowd), inspired by ResNet, to incorporate spatial features comprehensively. To train and evaluate the proposed system, a robust and unique dataset of nearly six thousand images was generated. Extensive evaluation shows an accuracy of 83.11%, which is comparable with other state-of-the-art methods.
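The abstract gives no implementation details, but a minimal sketch of the kind of ResNet-style residual block such an architecture builds on might look as follows. This is illustrative only: the layer widths, the number of crowd classes, and the names ResidualBlock and CrowdClassifier are assumptions, not the authors' DeepCrowd definition (PyTorch is likewise an assumed framework).

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions plus an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # The identity shortcut lets gradients bypass the convolutions,
        # which is the core idea a ResNet-inspired design borrows.
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

class CrowdClassifier(nn.Module):
    """Toy crowd-type classifier; num_classes=4 is an assumed class count."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        self.blocks = nn.Sequential(ResidualBlock(64), ResidualBlock(64))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

For example, a batch of RGB crowd images would be classified with logits = CrowdClassifier()(torch.randn(8, 3, 224, 224)), yielding one score per assumed crowd type for each image.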