Deep Neural Networks Optimization Based On Deconvolutional Networks

Zhoufeng Liu, Chi Zhang, Chunlei Li, S. Ding, Shanliang Liu, Yan Dong

Proceedings of the 2nd International Conference on Graphics and Signal Processing, 2018-10-06. DOI: 10.1145/3282286.3282299

Citations: 1
Abstract
Feature extraction is the most important part of any object recognition and target detection system. Convolutional Networks have become the state-of-the-art technique for computer vision tasks owing to their powerful feature extraction capability. However, the internal workings of Convolutional Networks are opaque, which makes the models difficult to optimize. To evaluate a Convolutional Network, we introduce a novel way to project activations back into the input pixel space, revealing which input pattern originally caused a specific activation in the feature maps. Using this visualization technique, we take feature extraction from sunflower seed images containing impurities as an example and modify the architecture of a traditional Convolutional Network to extract features better suited to the target images. After a series of improvements, we obtain a new Convolutional Network that extracts target-image features more effectively while using fewer parameters than before, which facilitates deployment on small embedded systems. Our model can be coupled with state-of-the-art recognition networks according to different application scenarios, so as to build a complete automatic recognition system.
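The core of the visualization technique described above is mapping a feature-map activation back through the convolution to the input pixel space, i.e. applying the transpose (adjoint) of the convolution operator, as in deconvolutional networks. The paper does not give code, so the following is a minimal single-channel NumPy sketch of this idea; the function names `conv2d` and `deconv2d` are illustrative, not from the paper.

```python
import numpy as np

def conv2d(x, k):
    """Valid cross-correlation of a single-channel image x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is the kernel's response at one image patch.
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def deconv2d(a, k, input_shape):
    """Project feature-map activations a back to input pixel space.

    This is the transpose of conv2d: each activation 'paints' a copy of
    the kernel back onto the region of the input it came from.
    """
    x = np.zeros(input_shape)
    kh, kw = k.shape
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            x[i:i + kh, j:j + kw] += a[i, j] * k
    return x

# Typical use for visualization: keep only the strongest activation and
# project it back, showing which input region drove that unit.
img = np.random.rand(8, 8)
kernel = np.random.rand(3, 3)
acts = conv2d(img, kernel)
mask = np.zeros_like(acts)
mask[np.unravel_index(np.argmax(acts), acts.shape)] = acts.max()
projection = deconv2d(mask, kernel, img.shape)
```

In a full deconvnet, this back-projection is interleaved with unpooling (using recorded max locations) and rectification at every layer; the sketch shows only the convolution-transpose step that underlies the method.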