Crowd counting method based on feature fusion and attention mechanism

Jiaming Niu, Guobin Li, Yu Yang
DOI: 10.1109/AIID51893.2021.9456541
Published in: 2021 IEEE International Conference on Artificial Intelligence and Industrial Design (AIID), 2021-05-28
Citations: 1

Abstract

Aiming at the problems of background noise interference and occlusion in complex, crowded scenes, a crowd counting network, FANet, based on feature fusion and an attention mechanism is proposed. By introducing a feature fusion layer and a crowd region recognition module, FANet can effectively suppress the influence of background interference and occlusion, thereby improving counting performance. As a supplement to the feature extraction network, the feature fusion layer fuses low-level texture features with high-level semantic features to avoid substantial loss of feature information, giving the model stronger multi-scale perception and improving training efficiency. The crowd region recognition module generates an attention weight map for the image through convolution and up-sampling operations, and uses this map to suppress background interference. Finally, the method was evaluated on two datasets. The experiments showed that the MAE of the proposed method on ShanghaiTech (Parts A and B) and UCF-QNRF improved by 1.1%, 3%, and 1.1%, respectively.
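The abstract describes the general mechanism but not FANet's exact layers, so the sketch below is illustrative only: the nearest-neighbor upsampling, channel concatenation, sigmoid gating, and all shapes are assumptions chosen to show how fusing low- and high-level features, masking them with an attention weight map, and scoring counts with MAE fit together.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbor 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse_features(low_level, high_level):
    """Fuse low-level texture features with upsampled high-level features
    by channel concatenation, preserving fine spatial detail."""
    return np.concatenate([low_level, upsample2x(high_level)], axis=0)

def apply_attention(fused, attention_logits):
    """Weight the fused features by a sigmoid attention map, intended to
    suppress background regions (weights near 0) and keep crowd regions."""
    weights = 1.0 / (1.0 + np.exp(-attention_logits))  # sigmoid, in (0, 1)
    return fused * weights  # (1, H, W) map broadcasts over all channels

def mae(pred_counts, true_counts):
    """Mean absolute error between predicted and ground-truth crowd counts."""
    return float(np.mean(np.abs(np.asarray(pred_counts, dtype=float)
                                - np.asarray(true_counts, dtype=float))))

# Toy example with hypothetical shapes
low = np.ones((8, 16, 16))    # low-level texture features
high = np.ones((16, 8, 8))    # high-level semantic features
att = np.zeros((1, 16, 16))   # attention logits; sigmoid(0) = 0.5 everywhere
fused = fuse_features(low, high)
weighted = apply_attention(fused, att)
print(fused.shape)                     # (24, 16, 16)
print(mae([105, 97], [100, 100]))      # 4.0
```

In a real network the fused features would pass through further convolutions to produce a density map whose sum gives the count; here the MAE helper simply shows the metric used in the paper's evaluation.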