Multi-Depth Learning with Multi-Attention for Fine-Grained Image Classification

Authors: Zuhua Dai, Hongyi Li, Kelong Li, Anwei Zhou
DOI: 10.1109/ICHCI51889.2020.00052
Published in: 2020 International Conference on Intelligent Computing and Human-Computer Interaction (ICHCI), December 2020

Abstract: Compared with traditional image classification, fine-grained image classification is difficult because differences between classes are small while differences within a class are large. In view of this difficulty, attention proposals have been widely used for fine-grained image classification. However, a traditional attention proposal must localize first and process afterwards, so the model runs step by step and relies on a single attention-focusing method. This paper proposes MAMDL (Multi-Attention Multi-Depth Learning), a model that combines multiple attention mechanisms with parallel learning across multiple networks. MAMDL has three advantages. First, it can be trained end-to-end. Second, it effectively combines four attention mechanisms to improve the network's ability to process local features. Finally, for the attention regions found in the backbone network, feature extraction by branch convolutional neural networks of different depths enhances the classification performance of the model. Experimental results show that MAMDL outperforms mainstream fine-grained image classification methods on the fine-grained datasets CUB-200, Stanford Dogs, and Stanford Cars.
Citations: 0
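As a rough illustration of the idea in the abstract (this is not the authors' implementation; the attention maps, branch depths, shapes, and fusion scheme below are all assumptions), the following NumPy sketch shows how a backbone feature map can be weighted by several attention maps, processed by branches of different depths, and fused into one descriptor for classification:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_map(feat):
    # Toy spatial attention: channel-averaged magnitude, normalized to sum to 1.
    a = np.abs(feat).mean(axis=0)
    return a / a.sum()

def branch(feat, depth):
    # Stand-in for a branch CNN of a given depth: `depth` rounds of 2x2
    # average pooling, then global average pooling to a channel vector.
    x = feat
    for _ in range(depth):
        c, h, w = x.shape
        x = x[:, : h // 2 * 2, : w // 2 * 2]
        x = x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    return x.mean(axis=(1, 2))  # shape: (channels,)

# Hypothetical backbone output: 8 channels over a 16x16 spatial grid.
backbone_feat = rng.standard_normal((8, 16, 16))

# Four attention mechanisms, mocked here as spatial maps computed from
# different channel subsets (placeholders for the paper's four mechanisms).
attn_maps = [attention_map(backbone_feat[i * 2 : i * 2 + 2]) for i in range(4)]

# Each attended feature map is handled by a branch of a different depth,
# and the resulting vectors are concatenated (multi-depth fusion).
features = [
    branch(backbone_feat * a[None, :, :], depth)
    for a, depth in zip(attn_maps, [1, 2, 3, 4])
]
fused = np.concatenate(features)  # (4 * 8,) fused descriptor

# A linear classifier head over the fused descriptor.
n_classes = 5
W = rng.standard_normal((n_classes, fused.size)) * 0.1
probs = softmax(W @ fused)
pred = int(np.argmax(probs))
```

Because the branches run on the same backbone output rather than on sequentially cropped regions, this sketch also reflects the parallel, end-to-end flavor the abstract attributes to MAMDL, in contrast to localize-then-process pipelines.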