Hard Anchor Attention in Anchor-based Detector
Shuai Jiang, Di Zhao, Tao Wang, Jing Zhang, Xiao Sun
2022 14th International Conference on Machine Learning and Computing (ICMLC), published 2022-02-18. DOI: 10.1145/3529836.3529940
Abstract
In anchor-based object detectors, the redundancy introduced by the symmetry of the anchor generator harms the diversity of positive anchors and causes a performance drop. This paper proposes a simple yet effective sampling strategy called Hard Anchor Attention (HAA). First, the anchor generator is re-examined by studying the contribution of different samples to overall performance, which verifies that the harder positive anchors play an important role in training the detector. HAA is then introduced to evaluate how difficult each anchor is to refine and to direct the focus of the training process toward these harder anchors. Experimental results demonstrate that HAA brings performance gains to RetinaNet and further relieves the subsequent branches. In particular, without fine-tuning, HAA outperforms the random-sampling and all-in baselines on the Pascal VOC dataset.
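To make the sampling idea concrete, below is a minimal sketch of a difficulty-weighted loss in the spirit of what the abstract describes: score how hard each positive anchor is to refine, then re-weight training toward the harder ones. The abstract does not give the paper's actual scoring function, so an IoU-based difficulty proxy is assumed here, and all names (`haa_weights`, `gamma`) are hypothetical.

```python
# Sketch of HAA-style difficulty-weighted anchor sampling (assumed formulation,
# not the paper's exact method). Lower IoU with the assigned ground-truth box
# is taken as a proxy for "harder to refine".
import torch
import torch.nn.functional as F

def haa_weights(pos_ious: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
    """Map each positive anchor's IoU with its GT box to a training weight.

    Lower IoU => harder anchor => larger weight. `gamma` (assumed knob)
    controls how sharply attention concentrates on the hard anchors.
    """
    difficulty = 1.0 - pos_ious.clamp(0.0, 1.0)   # 0 = easy, 1 = hard
    weights = difficulty.pow(gamma)
    # Normalize so overall loss magnitude matches uniform weighting.
    return weights * (weights.numel() / weights.sum().clamp(min=1e-6))

def weighted_regression_loss(pred_deltas, target_deltas, pos_ious):
    """Per-anchor smooth-L1 box regression loss, re-weighted by difficulty."""
    per_anchor = F.smooth_l1_loss(
        pred_deltas, target_deltas, reduction="none").sum(dim=1)
    return (haa_weights(pos_ious) * per_anchor).mean()

# Toy usage: 4 positive anchors with 4-dim box deltas.
pred = torch.randn(4, 4, requires_grad=True)
target = torch.randn(4, 4)
ious = torch.tensor([0.55, 0.62, 0.85, 0.91])  # overlap with assigned GT
loss = weighted_regression_loss(pred, target, ious)
loss.backward()
```

Under this reading, the two anchors with IoU 0.55 and 0.62 receive most of the gradient, mimicking the paper's stated goal of steering training toward harder positive anchors rather than sampling them uniformly.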