Non-invasive assessment of the visual environment using conditional generative adversarial networks

IF 7.6 · Q1, Construction & Building Technology (Engineering & Technology, Tier 1)
Sichen Lu , Dongjun Mah , Athanasios Tzempelikos
Building and Environment, Volume 287, Article 113798. DOI: 10.1016/j.buildenv.2025.113798. Published: 2025-10-01.
Citations: 0

Abstract

Luminance monitoring within the field of view (FOV) is required for assessing visual comfort and overall visual preferences, but it is practically challenging and intrusive. As a result, real-time, human-centered daylighting operation remains a challenge. This paper presents a novel deep-learning-based framework to demonstrate that meaningful features in the occupant's visual field can be extracted without invasive measurements. It is the first proof of concept to show that it is feasible to monitor luminance distributions as perceived by people, using a non-intrusive camera integrated with deep learning neural networks. A conditional generative adversarial network (CGAN), pix2pix, is used to transfer information from non-intrusive images to FOV images. Two datasets were collected in an open-plan office with compact, low-cost High Dynamic Range Image (HDRI) cameras installed at two alternate locations (a wall or a monitor), to separately train two pix2pix models with the same target FOV images. The results show that the generated FOV images closely resemble the measured FOV images in terms of pixelwise luminance errors, mean luminance, and structural similarity. The main errors occur in bright scenes visible through windows and are confined to a very small number of pixels. Overall, this work establishes a basis for future studies to assess the effect of the visual environment on human perception using non-intrusive measurements. It also provides the theoretical foundation for a connected paper [27], which demonstrates that non-intrusive measurements and deep learning techniques can be used to discover daylight preferences and enable AI-assisted daylighting operation.
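The abstract states that generated FOV images are compared against measured ones on pixelwise luminance error, mean luminance, and structural similarity. A minimal sketch of such a comparison is shown below, assuming per-pixel luminance maps as NumPy arrays; the function name and the single-window (global) SSIM simplification are assumptions for illustration, not the authors' exact implementation, which would typically use a windowed SSIM.

```python
import numpy as np

def luminance_metrics(gen, meas):
    """Compare a generated luminance map against a measured one.

    gen, meas : 2-D arrays of per-pixel luminance (e.g. cd/m^2).
    Returns (rmse, mean_lum_diff, ssim_global).
    """
    gen = np.asarray(gen, dtype=np.float64)
    meas = np.asarray(meas, dtype=np.float64)

    # Pixelwise luminance error, summarized as root-mean-square.
    rmse = float(np.sqrt(np.mean((gen - meas) ** 2)))

    # Difference in mean luminance over the whole field of view.
    mean_diff = float(abs(gen.mean() - meas.mean()))

    # Global SSIM: the standard formula applied once over the full image
    # (instead of a sliding window), with dynamic range L taken from the
    # measured image and the usual constants C1 = (0.01 L)^2, C2 = (0.03 L)^2.
    L = float(meas.max() - meas.min()) or 1.0
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_x, mu_y = gen.mean(), meas.mean()
    var_x, var_y = gen.var(), meas.var()
    cov = ((gen - mu_x) * (meas - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return rmse, mean_diff, float(ssim)
```

For identical images the sketch returns zero error and SSIM of 1; a uniformly brightened copy yields a positive RMSE and SSIM below 1, matching the intuition that errors concentrate where luminance diverges (e.g. bright window regions).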
Source journal: Building and Environment (Engineering & Technology – Environmental Engineering)
CiteScore: 12.50
Self-citation rate: 23.00%
Articles per year: 1130
Review time: 27 days
Journal description: Building and Environment, an international journal, is dedicated to publishing original research papers, comprehensive review articles, editorials, and short communications in the fields of building science, urban physics, and human interaction with the indoor and outdoor built environment. The journal emphasizes innovative technologies and knowledge verified through measurement and analysis. It covers environmental performance across various spatial scales, from cities and communities to buildings and systems, fostering collaborative, multi-disciplinary research with broader significance.