View Interpolation Networks for Reproducing Material Appearance of Specular Objects

Chihiro Hoshizawa, Takashi Komuro

Virtual Reality Intelligent Hardware, Vol. 5, No. 1, pp. 1-10, February 2023 (JCR Q1, Computer Science)
DOI: 10.1016/j.vrih.2022.11.001
URL: https://www.sciencedirect.com/science/article/pii/S2096579622001164
Citations: 0

Abstract

In this study, we propose view interpolation networks to reproduce changes in the brightness of an object's surface depending on the viewing direction, which is important in reproducing the material appearance of a real object. We use an original and a modified version of U-Net for image transformation. The networks were trained to generate images from intermediate viewpoints of four cameras placed at the corners of a square. We conducted an experiment with three different combinations of methods and training data formats. We found that it is best to input the coordinates of the viewpoints together with the four camera images and to use images from random viewpoints as the training data.
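The abstract does not give implementation details, but the reported best configuration (four corner-camera images plus the target viewpoint coordinates as network input) can be illustrated with a minimal sketch. The following PyTorch code is an assumption-laden illustration, not the authors' implementation: the network name (ViewInterpUNet), the layer sizes, and the choice of broadcasting the 2D viewpoint coordinates as two constant input channels are all hypothetical.

```python
# Hypothetical sketch of a view interpolation network in the spirit of the
# abstract: four corner-camera images plus the target viewpoint coordinates
# go in, an image from the intermediate viewpoint comes out. Layer sizes and
# structure are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as in a basic U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class ViewInterpUNet(nn.Module):
    """U-Net-style encoder-decoder for view interpolation (illustrative)."""

    def __init__(self):
        super().__init__()
        # 4 RGB images (12 channels) + 2 channels for the (x, y) viewpoint.
        self.enc1 = conv_block(14, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)    # 32 (skip) + 32 (upsampled)
        self.out = nn.Conv2d(32, 3, kernel_size=1)

    def forward(self, images, viewpoint):
        # images:    (B, 12, H, W) -- the four corner-camera RGB images, concatenated
        # viewpoint: (B, 2)        -- normalized (x, y) of the target viewpoint
        b, _, h, w = images.shape
        coord_maps = viewpoint.view(b, 2, 1, 1).expand(b, 2, h, w)
        x = torch.cat([images, coord_maps], dim=1)

        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))

        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.out(d1))  # predicted view in [0, 1]


if __name__ == "__main__":
    net = ViewInterpUNet()
    corner_images = torch.rand(1, 12, 128, 128)   # four stacked camera images
    target_xy = torch.tensor([[0.3, 0.7]])        # viewpoint inside the square
    prediction = net(corner_images, target_xy)
    print(prediction.shape)  # torch.Size([1, 3, 128, 128])
```

Broadcasting the viewpoint coordinates as constant channels is one common way to condition a fully convolutional network on a low-dimensional input; the abstract only states that the coordinates are input together with the four camera images, so other conditioning schemes would be equally consistent with it.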

Source Journal

Virtual Reality Intelligent Hardware (Computer Science: Computer Graphics and Computer-Aided Design)
CiteScore: 6.40
Self-citation rate: 0.00%
Articles per year: 35
Review time: 12 weeks