Toward Accurate and Robust Pedestrian Detection via Variational Inference

IF 11.6 · CAS Tier 2 (Computer Science) · Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Huanyu He, Weiyao Lin, Yuang Zhang, Tianyao He, Yuxi Li, Jianguo Li
DOI: 10.1007/s11263-024-02216-2
Published: 2024-08-22 · International Journal of Computer Vision
Citations: 0

Abstract

Pedestrian detection is notoriously challenging due to frequent occlusion between humans. Unlike generic object detection, pedestrian detection involves a single category but densely packed instances, making accurate and robust object localization crucial. By analogizing instance-level localization to a variational autoencoder and regarding the dense proposals as latent variables, we establish a unique perspective that formulates pedestrian detection as a variational inference problem. From this vantage point, we propose the Variational Pedestrian Detector (VPD), which uses a probabilistic model to estimate the true posterior of inferred proposals and applies the reparameterization trick to approximate the expected detection likelihood. To adapt variational inference to the case of pedestrian detection, we introduce a series of customized designs that cope with occlusion and spatial vibration. Specifically, we propose a Normal (Gaussian) model and its Mixture variant to parameterize the posterior in complicated scenarios. The inferred posterior is regularized by a conditional prior related to the ground-truth distribution, directly coupling the latent variables to specific target objects. Based on the posterior distribution, maximum detection-likelihood estimation is applied to optimize the pedestrian detector, where a lightweight statistic decoder casts the detection likelihood into a parameterized form and enhances confidence-score estimation. Through this variational inference process, the disentangling nature of the latent variables endows each proposal with the ability to discriminate itself from adjacent distractors, achieving accurate and robust detection in crowded scenes.

Experiments on CrowdHuman, CityPersons, and MS COCO demonstrate that our method is not only plug-and-play for numerous popular single-stage and two-stage methods but also achieves a remarkable performance gain in highly occluded scenarios. The code for this project can be found at https://github.com/hhy-ee/VPD.
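Two of the building blocks the abstract names, the reparameterization trick and the KL regularization of a Gaussian posterior toward a conditional prior, can be sketched in a minimal, framework-free form. This is an illustrative sketch only: the function names and the diagonal-Gaussian assumption over a proposal's four box offsets are assumptions for exposition, not taken from the paper's released code.

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """Draw a box-offset latent z = mu + sigma * eps (reparameterization trick).

    mu, log_var: per-coordinate mean and log-variance of the posterior
    over a proposal's box offsets (e.g. 4 values: dx, dy, dw, dh).
    Sampling noise eps ~ N(0, 1) keeps the draw differentiable w.r.t. mu/log_var.
    """
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def gaussian_kl(mu_q, log_var_q, mu_p, log_var_p):
    """KL(q || p) between diagonal Gaussians, summed over box coordinates.

    Used here as the regularizer that pulls the inferred posterior q toward
    a conditional prior p tied to the ground-truth distribution.
    """
    kl = 0.0
    for mq, lq, mp, lp in zip(mu_q, log_var_q, mu_p, log_var_p):
        kl += 0.5 * (lp - lq + (math.exp(lq) + (mq - mp) ** 2) / math.exp(lp) - 1.0)
    return kl
```

In a full detector these draws would feed the detection-likelihood term while the KL term is added to the loss; a Mixture variant, as the paper proposes, would replace the single Gaussian with a weighted sum of such components.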


Source Journal
International Journal of Computer Vision (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 29.80
Self-citation rate: 2.10%
Articles per year: 163
Review time: 6 months
Journal description: The International Journal of Computer Vision (IJCV) serves as a platform for sharing new research findings in the rapidly growing field of computer vision. It publishes 12 issues annually and presents high-quality, original contributions to the science and engineering of computer vision. The journal encompasses several article types to cater to different research outputs. Regular articles, spanning up to 25 journal pages, focus on significant technical advancements of broad interest to the field. Short articles, limited to 10 pages, offer a swift publication path for novel research outcomes. Survey articles, comprising up to 30 pages, offer critical evaluations of the current state of the art in computer vision or tutorial presentations of relevant topics. In addition to technical articles, the journal includes book reviews, position papers, and editorials by prominent scientific figures. Authors are encouraged to include supplementary material online, such as images, video sequences, data sets, and software, to enhance the understanding and reproducibility of the published research.