{"title":"重新着色深图像","authors":"Rob Pieké, Yanli Zhao, F. Arrizabalaga","doi":"10.1145/3233085.3233095","DOIUrl":null,"url":null,"abstract":"This work describes in-progress research to investigate methods for manipulating and/or correcting the colours of samples in deep images. Motivations for wanting this include, but are not limited to: a preference to minimise data footprints by only rendering deep alpha images, better colour manipulation tools in Nuke for 2D (i.e., not-deep) images, and post-render denoising. The most naïve way to (re)colour deep images with 2D RGB images is via Nuke's DeepRecolor. This effectively projects the RGB colour of a 2D pixel onto each sample of the corresponding deep pixel - rgbdeep(x, y, z) = rgb2d(x, y). This approach has many limitations: introducing halos when applying depth-of-field as a post-process (see Figure 2 below), and edge artefacts where bright background objects can \"spill\" into the edges of foreground objects when other objects are composited between them (see Figure 1 above). The work by [Egstad et al. 2015] on OpenDCX is perhaps the most advanced we've seen presented in this area, but it still seems to lack broad adoption. Further, we continued to identify other issues/workflows, and thus decided to pursue our own blue-sky thinking about the overall problem space. Much of what we describe may be conceptually easy to solve by changing upstream departments' workflows (e.g., \"just get lighting to split that out into a separate pass\", etc), but the practical challenges associated with these types of suggestions are often prohibitive as deadlines start looming.","PeriodicalId":378765,"journal":{"name":"Proceedings of the 8th Annual Digital Production Symposium","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Recolouring deep images\",\"authors\":\"Rob Pieké, Yanli Zhao, F. Arrizabalaga\",\"doi\":\"10.1145/3233085.3233095\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work describes in-progress research to investigate methods for manipulating and/or correcting the colours of samples in deep images. Motivations for wanting this include, but are not limited to: a preference to minimise data footprints by only rendering deep alpha images, better colour manipulation tools in Nuke for 2D (i.e., not-deep) images, and post-render denoising. The most naïve way to (re)colour deep images with 2D RGB images is via Nuke's DeepRecolor. This effectively projects the RGB colour of a 2D pixel onto each sample of the corresponding deep pixel - rgbdeep(x, y, z) = rgb2d(x, y). This approach has many limitations: introducing halos when applying depth-of-field as a post-process (see Figure 2 below), and edge artefacts where bright background objects can \\\"spill\\\" into the edges of foreground objects when other objects are composited between them (see Figure 1 above). The work by [Egstad et al. 2015] on OpenDCX is perhaps the most advanced we've seen presented in this area, but it still seems to lack broad adoption. Further, we continued to identify other issues/workflows, and thus decided to pursue our own blue-sky thinking about the overall problem space. 
Much of what we describe may be conceptually easy to solve by changing upstream departments' workflows (e.g., \\\"just get lighting to split that out into a separate pass\\\", etc), but the practical challenges associated with these types of suggestions are often prohibitive as deadlines start looming.\",\"PeriodicalId\":378765,\"journal\":{\"name\":\"Proceedings of the 8th Annual Digital Production Symposium\",\"volume\":\"29 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-08-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 8th Annual Digital Production Symposium\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3233085.3233095\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 8th Annual Digital Production Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3233085.3233095","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This work describes in-progress research into methods for manipulating and/or correcting the colours of samples in deep images. Motivations include, but are not limited to: a preference to minimise data footprints by rendering only deep alpha images, the better colour-manipulation tools Nuke offers for 2D (i.e., not-deep) images, and post-render denoising. The most naïve way to (re)colour deep images with 2D RGB images is via Nuke's DeepRecolor, which effectively projects the RGB colour of a 2D pixel onto every sample of the corresponding deep pixel: rgb_deep(x, y, z) = rgb_2d(x, y). This approach has many limitations, including: it introduces halos when depth-of-field is applied as a post-process (see Figure 2 below), and it produces edge artefacts in which bright background objects can "spill" into the edges of foreground objects when other objects are composited between them (see Figure 1 above). The work by [Egstad et al. 2015] on OpenDCX is perhaps the most advanced we've seen presented in this area, but it still seems to lack broad adoption. Further, we continued to identify other issues and workflows, and thus decided to pursue our own blue-sky thinking about the overall problem space. Much of what we describe may be conceptually easy to solve by changing upstream departments' workflows (e.g., "just get lighting to split that out into a separate pass"), but the practical challenges associated with these kinds of suggestions are often prohibitive as deadlines start looming.
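To make the naive projection concrete, the following is a minimal Python sketch of the rgb_deep(x, y, z) = rgb_2d(x, y) behaviour described above. It is not Nuke's actual DeepRecolor implementation or API; the DeepSample container and naive_deep_recolor function are hypothetical names introduced here for illustration.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical minimal per-sample record for one deep pixel; real deep images
# (e.g. OpenEXR deep scanlines) carry per-sample channel data in a similar spirit.
@dataclass
class DeepSample:
    depth: float                                       # z of the sample
    alpha: float                                       # per-sample coverage
    rgb: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # per-sample colour

def naive_deep_recolor(samples: List[DeepSample],
                       rgb2d: Tuple[float, float, float]) -> List[DeepSample]:
    """Project the flat 2D pixel colour onto every sample of the deep pixel,
    i.e. rgb_deep(x, y, z) = rgb_2d(x, y); depth and alpha are left untouched."""
    return [DeepSample(s.depth, s.alpha, rgb2d) for s in samples]

# Example: a deep pixel with a soft foreground edge sample and a solid
# background sample, recoloured from a single already-flattened 2D pixel.
deep_pixel = [
    DeepSample(depth=1.0, alpha=0.3),   # foreground edge (partial coverage)
    DeepSample(depth=10.0, alpha=1.0),  # background (full coverage)
]
flat_colour = (0.9, 0.2, 0.1)  # 2D render: foreground already mixed over background
recoloured = naive_deep_recolor(deep_pixel, flat_colour)
```

This sketch also makes the spill artefact easy to see: the flat 2D colour already contains the background's contribution through the semi-transparent foreground edge, yet the projection stamps that mixed colour onto the foreground sample as well. If another object is later composited between the two samples, the foreground edge still carries background colour that should now be occluded.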