Human Outline Reconstruction in Depth Prediction

Xinyue Li, Samuel Cheng
{"title":"Human Outline Reconstruction in Depth Prediction","authors":"Xinyue Li, Samuel Cheng","doi":"10.1109/SPAC49953.2019.237867","DOIUrl":null,"url":null,"abstract":"Fully Convolutional Residual Network (FCRN) has already become one of the most significant models for depth map prediction. It has achieved high quality results but has problem in reconstructing the human outline. On this basis, we present our method, the purpose of which is to reinforce human reconstruction in depth prediction. Our main idea is to merge Mask R-CNN with FCRN, so we present our modified FCRN. Our modified FCRN, which can also be regarded as an improvement of FCRN through Mask R-CNN, is designed on the basis of attention mechanism and optimized on the basis of transfer learning. It needs to work with the original FCRN. For a single RGB image, first of all, Mask RCNN receives it as input and generates the mask images for the “person” instances. Then, the input image and the mask image are fed jointly to our modified FCRN which can give a new result in generating the depth map. After that, we present a depth filter to combine the raw result given by the original FCRN with the new result given by the modified FCRN. Our final result is generated through the depth filter. Both the image result and the metric result given by our experiment can illustrate that our method has the ability to improve the performance of FCRN in human outline reconstruction through Mask R-CNN.","PeriodicalId":410003,"journal":{"name":"2019 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SPAC49953.2019.237867","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The Fully Convolutional Residual Network (FCRN) has become one of the most significant models for depth map prediction. It achieves high-quality results but struggles to reconstruct human outlines. On this basis, we present a method whose purpose is to reinforce human reconstruction in depth prediction. Our main idea is to merge Mask R-CNN with FCRN, so we present a modified FCRN. The modified FCRN, which can also be regarded as an improvement of FCRN through Mask R-CNN, is designed around an attention mechanism, optimized with transfer learning, and works alongside the original FCRN. Given a single RGB image, Mask R-CNN first receives it as input and generates mask images for the "person" instances. The input image and the mask image are then fed jointly to our modified FCRN, which produces a new depth map. Finally, we present a depth filter that combines the raw result from the original FCRN with the new result from the modified FCRN; our final result is the output of this depth filter. Both the qualitative image results and the quantitative metric results of our experiments show that our method improves the performance of FCRN in human outline reconstruction through Mask R-CNN.
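The abstract describes a three-stage pipeline: Mask R-CNN extracts "person" masks, the modified FCRN consumes the RGB image together with the mask, and a depth filter fuses the original and modified FCRN outputs. The following is a minimal NumPy sketch of that data flow only; the placeholder functions, their signatures, and the mask-weighted blending rule are illustrative assumptions, not the paper's actual networks or depth filter.

```python
# Minimal sketch of the pipeline described in the abstract (data flow only).
# The three models (Mask R-CNN, original FCRN, modified FCRN) are stand-in
# callables, and the blending rule in depth_filter() is an assumed example,
# not the paper's exact formulation.
import numpy as np


def person_mask(rgb: np.ndarray) -> np.ndarray:
    """Stand-in for Mask R-CNN: binary (H, W) mask of 'person' pixels."""
    return np.zeros(rgb.shape[:2], dtype=np.float32)  # placeholder output


def fcrn_depth(rgb: np.ndarray) -> np.ndarray:
    """Stand-in for the original FCRN: dense (H, W) depth map."""
    return np.ones(rgb.shape[:2], dtype=np.float32)  # placeholder output


def modified_fcrn_depth(rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stand-in for the modified FCRN, which also receives the person mask."""
    return np.ones(rgb.shape[:2], dtype=np.float32)  # placeholder output


def depth_filter(raw: np.ndarray, refined: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Assumed fusion rule: use the modified FCRN's prediction inside the
    person mask and the original FCRN's prediction everywhere else."""
    return mask * refined + (1.0 - mask) * raw


def predict_depth(rgb: np.ndarray) -> np.ndarray:
    mask = person_mask(rgb)                    # step 1: person instance mask
    raw = fcrn_depth(rgb)                      # step 2a: original FCRN result
    refined = modified_fcrn_depth(rgb, mask)   # step 2b: modified FCRN result
    return depth_filter(raw, refined, mask)    # step 3: combine via depth filter


if __name__ == "__main__":
    image = np.zeros((228, 304, 3), dtype=np.float32)  # example RGB input
    print(predict_depth(image).shape)
```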