Rainbow Learner: Lighting Environment Estimation from a Structural-color based AR Marker
Yuji Tsukagoshi, Yuuki Uranishi, J. Orlosky, Kiyomi Ito, H. Takemura
2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2020
DOI: 10.1109/AIVR50618.2020.00074
This paper proposes a method for estimating lighting environments from an AR marker coupled with the structural color patterns inherent to a compact disc (CD) form-factor. To achieve photometric consistency, these patterns are used as input to a Conditional Generative Adversarial Network (CGAN), which allows us to efficiently and quickly generate estimations of an environment map. We construct a dataset from pairs of images of the structural color pattern and environment map captured in multiple scenes, and the CGAN is then trained with this dataset. Experiments show that we can generate visually accurate reconstructions with this method for certain scenes, and that the environment map can be estimated in real time.
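The paper does not include code, so the following is only a rough illustration of the pipeline the abstract describes: a conditional GAN that takes an image of the marker's structural-color pattern as the condition and generates the corresponding environment map, trained on paired images. The sketch below uses a minimal pix2pix-style setup in PyTorch (adversarial loss plus L1 reconstruction loss); the architecture, 64×64 resolution, and loss weighting are assumptions for illustration, not the authors' configuration.

```python
# Hypothetical sketch (not the authors' code): a pix2pix-style conditional GAN
# mapping a marker image (structural-color pattern) to an environment map.
# All sizes and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny encoder-decoder: 3x64x64 marker image -> 3x64x64 environment map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),            # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),           # 32 -> 16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),   # 32 -> 64
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic on the concatenated (marker, env-map) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, 1, 1),  # per-patch real/fake logits
        )

    def forward(self, marker, env):
        return self.net(torch.cat([marker, env], dim=1))

def train_step(G, D, opt_g, opt_d, marker, env, l1_weight=100.0):
    bce = nn.BCEWithLogitsLoss()
    # Discriminator update: real pairs -> 1, generated pairs -> 0.
    fake = G(marker).detach()
    d_real, d_fake = D(marker, env), D(marker, fake)
    loss_d = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator update: fool D, plus L1 distance to the captured env map.
    fake = G(marker)
    d_fake = D(marker, fake)
    loss_g = (bce(d_fake, torch.ones_like(d_fake))
              + l1_weight * nn.functional.l1_loss(fake, env))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
marker = torch.rand(2, 3, 64, 64)  # stand-in for photographed marker patterns
env = torch.rand(2, 3, 64, 64)     # stand-in for captured environment maps
ld, lg = train_step(G, D, opt_g, opt_d, marker, env)
```

Because the generator is a single small feed-forward pass at inference time, an estimate per frame is cheap, which is consistent with the paper's claim of real-time environment-map estimation.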