Displays: Latest Articles

Dual discriminator GANs with multi-focus label matching for image-aware layout generation
IF 3.7, CAS Q2 (Engineering & Technology)
Displays Pub Date: 2025-01-31 DOI: 10.1016/j.displa.2025.102970
Chenchen Xu, Kaixin Han, Min Zhou, Weiwei Xu
Abstract: Image-aware layout generation involves arranging graphic elements such as logos, text, underlays, and embellishments at appropriate positions on a canvas, a fundamental step in poster design. The task requires considering both the relationships among elements and the interaction between elements and the image. However, existing layout generation models struggle to simultaneously satisfy explicit aesthetic principles, such as alignment and non-overlap, and implicit aesthetic principles concerning the harmonious composition of images and elements. To overcome these challenges, this paper designs a GAN with dual discriminators, called DD-GAN, to generate graphic layouts according to image content. In addition, we introduce a multi-focus label matching method to provide richer supervision and optimize model training. Multi-focus label matching not only accelerates convergence during training but also enables the model to better capture both explicit and implicit aesthetic principles in image-aware layout generation. Quantitative and qualitative evaluations consistently demonstrate that DD-GAN, coupled with multi-focus label matching, achieves state-of-the-art performance, producing high-quality image-aware graphic layouts for advertising posters. (Displays, Vol. 87, Article 102970)
Citations: 0
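The abstract gives only the high-level design, so the following is a minimal sketch of what a dual-discriminator GAN objective can look like for this task: one discriminator judges layout geometry alone (explicit aesthetics such as alignment and non-overlap), while the other judges the layout jointly with image features (implicit image-layout harmony). The module shapes, names, and loss weighting are illustrative assumptions, not the authors' implementation, and the multi-focus label matching supervision is not reproduced.

```python
# Minimal sketch of a dual-discriminator GAN step (assumed architecture,
# not the DD-GAN paper's actual code).
import torch
import torch.nn as nn

N_ELEM, BOX = 8, 4          # 8 layout elements, (x, y, w, h) each
IMG_FEAT, Z = 128, 64       # assumed image-feature / noise sizes

gen = nn.Sequential(nn.Linear(IMG_FEAT + Z, 256), nn.ReLU(),
                    nn.Linear(256, N_ELEM * BOX), nn.Sigmoid())
d_layout = nn.Sequential(nn.Linear(N_ELEM * BOX, 128), nn.ReLU(),
                         nn.Linear(128, 1))               # explicit aesthetics
d_harmony = nn.Sequential(nn.Linear(N_ELEM * BOX + IMG_FEAT, 128), nn.ReLU(),
                          nn.Linear(128, 1))              # image-layout harmony

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(list(d_layout.parameters()) +
                         list(d_harmony.parameters()), lr=1e-4)

img_feat = torch.randn(16, IMG_FEAT)        # stand-in for encoded poster image
real = torch.rand(16, N_ELEM * BOX)         # stand-in for ground-truth layouts

# --- discriminator step: real layouts vs. generated ones ---
fake = gen(torch.cat([img_feat, torch.randn(16, Z)], dim=1)).detach()
loss_d = (bce(d_layout(real), torch.ones(16, 1)) +
          bce(d_layout(fake), torch.zeros(16, 1)) +
          bce(d_harmony(torch.cat([real, img_feat], 1)), torch.ones(16, 1)) +
          bce(d_harmony(torch.cat([fake, img_feat], 1)), torch.zeros(16, 1)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# --- generator step: fool both discriminators at once ---
fake = gen(torch.cat([img_feat, torch.randn(16, Z)], dim=1))
loss_g = (bce(d_layout(fake), torch.ones(16, 1)) +
          bce(d_harmony(torch.cat([fake, img_feat], 1)), torch.ones(16, 1)))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```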
Design methodology and evaluation of multimodal interaction for enhancing driving safety and experience in secondary tasks of IVIS
IF 3.7, CAS Q2 (Engineering & Technology)
Displays Pub Date: 2025-01-31 DOI: 10.1016/j.displa.2025.102991
Jun Ma, Yuanyang Zuo, Zaiyan Gong, Yiyuan Meng
Abstract: Human-computer interaction (HCI) with in-vehicle information systems (IVISs) is crucial to driver safety and experience, yet operating IVIS secondary tasks causes distraction while driving. This paper aims to reduce driving distraction through multimodal interaction design, enhancing the user's driving safety and interaction experience. First, we analyze secondary tasks and their associated interaction modes and develop a theoretical tool for multimodal interaction design. Then, three multimodal interaction schemes for typical secondary tasks are designed. Finally, data for three evaluation indexes, namely expert evaluation, lane position, and user score, are obtained through expert interviews, simulated driving, and user questionnaires. The results demonstrate that the proposed multimodal interaction design for secondary tasks is generally superior to traditional voice and screen interaction, improving driving safety, performance, and interaction experience. The theoretical framework presented in this work provides a potential avenue for extending the theory and application of multimodal interaction design for secondary tasks. (Displays, Vol. 87, Article 102991)
Citations: 0
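The abstract names lane position as one of its three evaluation indexes. In driving studies such data is conventionally summarized as the standard deviation of lane position (SDLP); the sketch below computes it on a simulated trace. Whether this paper uses SDLP or another lane-position statistic is not stated, so treat this as a generic example.

```python
# Illustrative computation of SDLP (standard deviation of lane position),
# a common driving-safety index; the paper's exact lane-position metric is
# not specified in the abstract, so this is a generic example.
import numpy as np

def sdlp(lateral_position_m: np.ndarray) -> float:
    """SDLP over one drive: std of the vehicle's lateral offset (meters)."""
    return float(np.std(lateral_position_m, ddof=1))

# Simulated 60 s drive sampled at 10 Hz: slow drift plus steering jitter.
t = np.linspace(0, 60, 600)
trace = 0.05 * np.sin(0.2 * t) + np.random.normal(0, 0.08, t.size)
print(f"SDLP = {sdlp(trace):.3f} m")   # lower SDLP -> steadier lane keeping
```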
Exploring Lottery Ticket Hypothesis in Neural Video Representations
IF 3.7, CAS Q2 (Engineering & Technology)
Displays Pub Date: 2025-01-30 DOI: 10.1016/j.displa.2025.102982
Jiacong Chen, Qingyu Mao, Shuai Liu, Fanyang Meng, Shuangyan Yi, Yongsheng Liang
Abstract: Neural Video Representations (NVR) have emerged as a novel approach to video compression: video content is encoded into network parameters, turning video compression into a model compression problem. In this paper, we introduce the Lottery Ticket Hypothesis (LTH) into NVR, aiming to identify optimal sub-networks (i.e., winning tickets) capable of effectively representing the corresponding videos. First, we validate the existence of winning tickets within NVR and reveal that these winning tickets are not transferable across different videos. To balance the heavy search cost against training efficiency, we devise an Identifying Winning Tickets (IWT) method tailored to NVR. Additionally, leveraging the non-transferability of winning tickets and the redundancy of video frames, we design a Progressive Cyclic Learning (PCL) strategy to accelerate the winning-ticket search. Finally, comprehensive experiments evaluate the performance and general properties of winning tickets across various NVR architectures and videos. The results demonstrate that our proposed method significantly outperforms the original pruning approach, achieving gains of 1.81 dB, 4.19 dB, and 3.5 dB at 90% sparsity in NeRV, E-NeRV, and HNeRV, respectively. (Displays, Vol. 87, Article 102982)
Citations: 0
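The IWT and PCL procedures are not detailed in the abstract. They build on the classic lottery-ticket recipe, which the sketch below shows in its generic form: train, prune the smallest-magnitude weights, rewind the survivors to their initial values, and repeat at increasing sparsity. The layer sizes and sparsity schedule are placeholders, and the per-video training loop is elided.

```python
# Generic lottery-ticket search (magnitude pruning with weight rewinding).
# The paper's IWT/PCL specifics are not in the abstract; this is the classic
# LTH baseline that such methods start from.
import copy
import torch
import torch.nn as nn

def magnitude_masks(model, sparsity):
    """Per-layer binary masks keeping the largest-magnitude weights."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                              # prune weight matrices only
            keep = int(p.numel() * (1 - sparsity))
            thresh = p.abs().flatten().kthvalue(p.numel() - keep).values
            masks[name] = (p.abs() > thresh).float()
    return masks

def apply_masks(model, masks):
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))
init_state = copy.deepcopy(model.state_dict())       # theta_0 for rewinding

for round_sparsity in (0.5, 0.75, 0.9):              # progressively sparser
    # ... train `model` on the target video here ...
    masks = magnitude_masks(model, round_sparsity)
    model.load_state_dict(init_state)                # rewind to initialization
    apply_masks(model, masks)                        # winning-ticket candidate
```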
High-transferability black-box attack of binary image segmentation via adversarial example augmentation
IF 3.7, CAS Q2 (Engineering & Technology)
Displays Pub Date: 2025-01-30 DOI: 10.1016/j.displa.2024.102957
Xuebiao Zhu, Wu Chen, Qiuping Jiang
Abstract: The application of deep neural networks (DNNs) has significantly advanced the binary image segmentation (BIS) task. However, DNNs are susceptible to adversarial attacks involving subtle perturbations. Existing black-box attack methods usually generate a single adversarial example for different target models, leading to poor transferability. To address this issue, this paper proposes a novel adversarial example augmentation (AEA) framework to improve the transferability of black-box attacks. Our method generates an adversarial example set (AES) containing distinct adversarial examples. Specifically, we attack an existing model as the surrogate, optimizing the adversarial perturbation by maximizing the binary cross-entropy (BCE) loss between the surrogate's prediction and the pseudo label, thus producing a sequence of adversarial examples. During optimization, besides the BCE loss, we introduce deep-feature losses among the adversarial examples to fully distinguish them from one another. In this way, we obtain an AES whose examples have diverse deep features, achieving the augmentation of adversarial examples. Given this diversity, the optimal adversarial example for a given target model is likely contained in the generated AES, so the AES is expected to have high transferability. To find the optimal adversarial example for a specific target model within the AES, we use a query-based method. Experimental results showcase the superiority of the proposed AEA framework for black-box attacks on two representative BIS tasks: salient object detection and camouflaged object detection. (Displays, Vol. 87, Article 102957)
Citations: 0
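The core step the abstract does describe, maximizing the BCE loss between the surrogate's prediction and a pseudo label, is essentially a PGD-style ascent under an L-infinity budget. A minimal sketch follows; epsilon, step size, and iteration count are illustrative, and the paper's additional deep-feature diversity losses (which turn one example into a diverse AES) are noted but not reproduced.

```python
# Sketch of the surrogate-attack step: gradient ascent on the BCE loss
# between the surrogate's segmentation logits and a pseudo label, under an
# L-infinity bound. All hyperparameters here are illustrative.
import torch
import torch.nn.functional as F

def attack_step(surrogate, image, pseudo_label, eps=8/255, alpha=2/255, steps=10):
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        pred = surrogate(adv)                        # (B,1,H,W) logits assumed
        loss = F.binary_cross_entropy_with_logits(pred, pseudo_label)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()          # ascend: *maximize* BCE
            adv = image + (adv - image).clamp(-eps, eps)
            adv = adv.clamp(0, 1).detach()
    return adv

surrogate = torch.nn.Conv2d(3, 1, 3, padding=1)      # stand-in for a BIS model
img = torch.rand(1, 3, 64, 64)
pl = (surrogate(img) > 0).float()                    # pseudo label from surrogate
adv = attack_step(surrogate, img, pl)
print((adv - img).abs().max().item())                # stays within eps
```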
Human pose estimation via inter-view image similarity with adaptive weights
IF 3.7, CAS Q2 (Engineering & Technology)
Displays Pub Date: 2025-01-30 DOI: 10.1016/j.displa.2025.102972
Yang Gao, Shigang Wang, Zhiyuan Zha
Abstract: Human pose estimation has garnered considerable interest in computer vision. In real-world scenarios, however, human joints are often occluded by clothing, body parts, and objects, which reduces the accuracy of joint detection and tracking. In this paper, we propose a novel inter-view image similarity with adaptive weights (IVIM-AW) approach for human pose estimation, which leverages the consistency and complementarity of multiple views to enhance the useful information obtained from other views. First, we design a dynamic adjustment mechanism to optimize the fusion weights within a Siamese network framework, making it more adaptable to the feature similarities of different views. Second, we propose an information-consistency measurement strategy for multi-view images using a similarity matrix. Third, we exploit the sparsity of heatmaps to achieve point-to-point matching during multi-view fusion. Experimental results demonstrate that the proposed IVIM-AW approach outperforms many popular and state-of-the-art methods on most public occlusion datasets. Notably, on the Occlusion-Person dataset, IVIM-AW achieves the lowest mean joint estimation error, reducing the Mean Per Joint Position Error (MPJPE) to 9.24 mm. (Displays, Vol. 87, Article 102972)
Citations: 0
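How IVIM-AW builds its similarity matrix and fuses views is not spelled out in the abstract. As an illustration of the general idea, the sketch below weights each auxiliary view's (already aligned) joint heatmaps by the similarity between its features and the reference view's features; the cosine measure, softmax weighting, and tensor shapes are all assumptions.

```python
# Minimal sketch of similarity-weighted multi-view heatmap fusion. The exact
# similarity matrix and cross-view alignment in IVIM-AW are not given in the
# abstract; cosine similarity and a softmax over views are assumptions.
import torch
import torch.nn.functional as F

def fuse_heatmaps(ref_feat, view_feats, view_heatmaps, temperature=0.1):
    """ref_feat: (D,); view_feats: (V, D); view_heatmaps: (V, J, H, W),
    assumed already aligned to the reference view (e.g., epipolar transfer)."""
    sim = F.cosine_similarity(view_feats, ref_feat.unsqueeze(0), dim=1)  # (V,)
    w = F.softmax(sim / temperature, dim=0)          # adaptive per-view weights
    return torch.einsum('v,vjhw->jhw', w, view_heatmaps)

ref = torch.randn(128)
feats = torch.randn(4, 128)                          # 4 auxiliary views
maps = torch.rand(4, 17, 64, 64)                     # 17 joint heatmaps per view
fused = fuse_heatmaps(ref, feats, maps)
print(fused.shape)                                   # torch.Size([17, 64, 64])
```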
Integral imaging 3D display using triple-focal microlens arrays for near-eye display with enhanced depth of field
IF 3.7, CAS Q2 (Engineering & Technology)
Displays Pub Date: 2025-01-27 DOI: 10.1016/j.displa.2025.102986
Yuyan Peng, Wenwen Wang, Jiazhen Zhang, Zhenyou Zou, Chunliang Chen, Xiongtu Zhou, Tailiang Guo, Qun Yan, Yongai Zhang, Chaoxing Wu
Abstract: Near-eye display (NED) technology based on naked-eye three-dimensional (3D) display, widely regarded as an entrance to the metaverse, could transform how people interact with digital information as a possible next-generation display. Depth of field (DoF) is an essential indicator for viewers exploring 3D display scenarios, but traditional devices for improving the DoF of integral imaging (II) naked-eye 3D displays cannot be well integrated into a highly integrated NED. To improve the DoF of II for NED, we propose a triple-focal-length microlens array (TFL-MLA) with a honeycomb layout, fabricated via multilayer photolithography and thermal reflow. Microlenses with varying focal lengths are realized on a single glass wafer using three photomasks with alignment marks. The results indicate that multilayer microlenses can be precisely positioned using the alignment marks, and that altering the photoresist thickness enables height control in the TFL-MLA. The prepared TFL-MLA shows good morphology, with a diameter of approximately 208.3 ± 2.6 µm and three heights of 28.8 ± 0.7 µm, 23.1 ± 0.6 µm, and 13.7 ± 0.3 µm, and it provides excellent focusing and imaging in three focal planes (focal lengths of 368.8 μm, 441.1 μm, and 701.6 μm). As a proof of concept, a DoF-enhanced NED system based on II using the TFL-MLA is implemented, allowing clear presentation of 3D objects at various central depth planes; the proposed NED's DoF increases from 154 mm to 541.8 mm. The TFL-MLA is anticipated to promote the evolution of NEDs with diverse DoF, boosting the development of the metaverse. (Displays, Vol. 87, Article 102986)
Citations: 0
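Why three focal lengths extend DoF can be seen from the standard thin-lens relation used in integral imaging: a display at gap g behind a lens of focal length f forms a central depth plane (CDP) at l = f·g / (g − f). The sketch below evaluates this for the three reported focal lengths; the gap value is an assumption, since the paper's actual optical layout is not given in the abstract.

```python
# Illustrative thin-lens computation of the central depth plane (CDP) for
# each focal length in a triple-focal MLA. The display-to-lens gap below is
# an assumed value, chosen only to make the three planes visible.
GAP_UM = 750.0                                   # assumed display-to-MLA gap

def central_depth_plane(f_um: float, g_um: float = GAP_UM) -> float:
    """Thin lens: 1/g + 1/l = 1/f  =>  l = f*g / (g - f)."""
    return f_um * g_um / (g_um - f_um)

for f in (368.8, 441.1, 701.6):                  # the reported focal lengths
    print(f"f = {f:6.1f} um -> CDP at {central_depth_plane(f) / 1000:.2f} mm")
# Three focal lengths place sharp image planes at three distinct depths,
# which is what lets the combined depth of field cover a wider range than a
# single-focal MLA.
```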
UAV autonomous obstacle avoidance via causal reinforcement learning
IF 3.7, CAS Q2 (Engineering & Technology)
Displays Pub Date: 2025-01-27 DOI: 10.1016/j.displa.2025.102966
Tao Sun, Jiaojiao Gu, Junjie Mou
Abstract: Unmanned aerial vehicles (UAVs) play an increasingly important role in everyday life, and there is growing demand for UAVs that autonomously perform obstacle avoidance and navigation tasks. Traditional UAV navigation methods typically divide the problem into three stages: perception, mapping, and path planning. However, this pipeline adds significant processing delay, costing UAVs their agility advantage. In this paper, we propose a causal reinforcement learning-based end-to-end navigation strategy that learns directly from data, bypassing the explicit mapping and planning steps and thus improving responsiveness. To address the problem that a continuous action space prevents the agent from learning effectively from past actions, we introduce an Actor-Critic method with a fixed horizontal plane and a discretized action space. This improves the efficiency of sampling from the experience replay buffer and stabilizes optimization, ultimately raising the success rate of the reinforcement learning algorithm on UAV obstacle avoidance and navigation tasks. Furthermore, to overcome the limited generalization of end-to-end methods, we incorporate causal inference into the training process, mitigating overfitting caused by insufficient interaction with the environment and thereby increasing the success rate in unfamiliar environments. We validate the effectiveness of causal inference in improving generalization using convergence steps in the training environment and navigation success rates on random targets in the testing environment as quantitative metrics. The results demonstrate that causal inference effectively reduces overfitting of the policy network to the training environment. (Displays, Vol. 87, Article 102966)
Citations: 0
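A minimal skeleton of the abstract's key design choice, an actor-critic over a small discrete action set on a fixed horizontal plane, is sketched below. The five-action set, state layout, and network sizes are assumptions, and the causal-inference component is omitted.

```python
# Skeleton of an actor-critic over the discrete, fixed-altitude action set
# the abstract describes. The 5-action set, state layout, and network sizes
# are illustrative assumptions; the causal-inference component is not shown.
import torch
import torch.nn as nn

ACTIONS = ["forward", "left", "right", "yaw_left", "yaw_right"]  # fixed plane
STATE_DIM = 32                                    # e.g. depth features + goal

class ActorCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU())
        self.pi = nn.Linear(128, len(ACTIONS))    # action logits
        self.v = nn.Linear(128, 1)                # state value

    def forward(self, s):
        h = self.body(s)
        return self.pi(h), self.v(h)

net = ActorCritic()
s = torch.randn(1, STATE_DIM)
logits, value = net(s)
dist = torch.distributions.Categorical(logits=logits)
a = dist.sample()                                 # discrete, replay-friendly

# One-step advantage update (reward r and next value v_next come from the env):
r, v_next, gamma = 1.0, 0.0, 0.99
td_err = r + gamma * v_next - value               # critic target error
loss = (-dist.log_prob(a) * td_err.detach() + 0.5 * td_err.pow(2)).mean()
loss.backward()
print(ACTIONS[a.item()], loss.item())
```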
Integrated dual-mode system of communication and display based on single-pixel imaging
IF 3.7, CAS Q2 (Engineering & Technology)
Displays Pub Date: 2025-01-26 DOI: 10.1016/j.displa.2025.102984
Yi Kang, Wenqing Zhao, Shengli Pu, Dawei Zhang
Abstract: Integrating communication and display exploits the spatial-temporal flicker-fusion property of the human visual system and the high frame rate of modern displays to transmit additional information, expanding the application scenarios of visible light communication. However, camera frame-rate limits and the asynchrony between transmitter and receiver severely restrict transmission rate and accuracy. In this study, we integrate communication and display within a single-pixel imaging framework by constructing a spatial-temporal complementary modulation mode and an image display mode. Leveraging the fact that single-pixel imaging detects only light intensity, and combining it with the image display mode, we develop an internal synchronization mechanism that enables precise extraction of data frames and accurate reconstruction of the transmitted data. Experimental results demonstrate that this scheme transmits additional information while providing a high-fidelity display; the image-to-data frame ratio is 8:1, and the maximum error-free field angle is 170°. Compared with other published methods, the scheme offers stronger noise resistance and a broader transmission range, presenting a novel approach to integrated communication and display and extending the application field of visible light communication. (Displays, Vol. 87, Article 102984)
Citations: 0
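The measurement principle behind spatial-temporal complementary modulation can be sketched compactly: each binary pattern and its complement are shown back-to-back (fusing to a constant for the eye, which preserves the displayed image), while a single-pixel detector records two bucket values whose difference equals a clean Hadamard projection of the hidden data frame. The sizes below are illustrative; the paper's full modulation and synchronization pipeline is more involved.

```python
# Sketch of complementary (differential) single-pixel measurement and linear
# reconstruction with Hadamard patterns; only the measurement principle is
# shown, not the paper's display/synchronization pipeline.
import numpy as np
from scipy.linalg import hadamard

N = 16                                        # hidden frame is N x N pixels
H = hadamard(N * N).astype(float)             # +/-1 orthogonal pattern rows
scene = np.random.rand(N * N)                 # stand-in for the data frame

pos = (H + 1) / 2                             # displayable pattern P (0/1)
neg = (1 - H) / 2                             # complement 1-P
# Shown back-to-back, P and 1-P fuse to a constant for the viewer, while the
# photodiode sees two bucket values per pattern; their difference is H @ x:
y = pos @ scene - neg @ scene                 # differential measurement
recon = H.T @ y / (N * N)                     # orthogonality: H^T H = N^2 * I

print(np.allclose(recon, scene))              # True: exact linear recovery
```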
Two-stage underwater image enhancement using domain adaptation and interlacing transformer
IF 3.7, CAS Q2 (Engineering & Technology)
Displays Pub Date: 2025-01-25 DOI: 10.1016/j.displa.2025.102980
Haneum Lee, Sanggil Kang
Abstract: Underwater images typically suffer from quality defects such as impurities, light scattering, absorption, and color casts. Various deep learning-based underwater image enhancement (UIE) techniques have recently been proposed and have achieved remarkable results, but they still struggle qualitatively with irregular color-channel loss, largely because it is difficult to distinguish the underwater color cast from irregular scattering off objects. In this work, we propose a two-stage approach that refines domain information and semantic information separately: a domain adaptation network and an image enhancement network. In the domain adaptation network, we introduce Sobel sparse attention, which separates the semantic information of underwater images and removes domain information. A U-shaped transformer architecture is designed for efficient decoding in the image enhancement network. In addition, interlacing position embedding is adopted to address locally occurring light scattering and blurring. To validate the proposed method, we conducted experiments on no-reference, full-reference, and synthetic underwater images. The experimental results demonstrate that our method outperforms the state of the art both qualitatively and quantitatively. (Displays, Vol. 87, Article 102980)
Citations: 0
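The internals of Sobel sparse attention are not given in the abstract. A plausible minimal form, sketched below, applies fixed Sobel kernels to obtain an edge map and uses it to gate features toward structural (semantic) content while suppressing smooth color-cast (domain) content; the gating design is an assumption, not the paper's module.

```python
# Sketch of a Sobel-guided gating module. "Sobel sparse attention" is not
# specified in the abstract; here fixed Sobel kernels produce an edge map
# that re-weights features toward structural content.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelGate(nn.Module):
    def __init__(self, channels):
        super().__init__()
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kx", kx.view(1, 1, 3, 3))
        self.register_buffer("ky", kx.t().contiguous().view(1, 1, 3, 3))
        self.proj = nn.Conv2d(1, channels, 1)    # lift edge map to feature dim

    def forward(self, feat, image):
        gray = image.mean(dim=1, keepdim=True)   # (B,1,H,W)
        gx = F.conv2d(gray, self.kx, padding=1)
        gy = F.conv2d(gray, self.ky, padding=1)
        edges = torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)
        gate = torch.sigmoid(self.proj(edges))   # per-pixel, per-channel gate
        return feat * gate                       # emphasize edge-aligned features

feat = torch.randn(2, 32, 64, 64)
img = torch.rand(2, 3, 64, 64)
print(SobelGate(32)(feat, img).shape)            # torch.Size([2, 32, 64, 64])
```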
ColorBoost-LLIE: A multi-loss guided low-light image enhancement algorithm with decoupled color and luminance restoration
IF 3.7, CAS Q2 (Engineering & Technology)
Displays Pub Date: 2025-01-24 DOI: 10.1016/j.displa.2025.102979
Xiaoyang Shen, Haibin Li, Yaqian Li, Wenming Zhang
Abstract: Low-light image enhancement (LLIE) aims to raise image brightness and produce natural-looking images, as under normal lighting. Existing LLIE algorithms focus primarily on increasing brightness and have significantly improved the quality of enhanced low-light images, yet the results often suffer varying degrees of color distortion. To address this, we design a neural network that processes brightness and color information in separate sub-networks. Specifically, the input low-light image is converted from the RGB color space to the HSV and Lab color spaces; features are then extracted from the brightness and color channels both globally and locally; finally, the extracted features are fused and decoded to produce the enhanced image. Additionally, we introduce edge and color loss functions, combined with histogram-matching and perceptual losses, to optimize training. By enhancing the restoration of edge and color information, the enhanced images exhibit natural colors with clear details and textures. Extensive experiments demonstrate the effectiveness of the proposed algorithm, particularly for color recovery in low-light enhancement: it achieves a PSNR of 38.40 and an LPIPS of 0.16 on the LSRW-Huawei dataset, and a PSNR, SSIM, and LPIPS of 37.85, 0.85, and 0.14, respectively, on the LOL-v2 dataset. The algorithm also performs well on the RichIQA metric (Exploring Rich Subjective Quality Information for Image Quality Assessment in the Wild) and the NLIEE metric (A No-Reference Evaluation Metric for Low-Light Image Enhancement). (Displays, Vol. 87, Article 102979)
Citations: 0
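Of the four losses the abstract lists, the edge and color terms are easy to illustrate. The sketch below combines an L1 gradient-difference edge term and a per-channel color-statistics term on top of a plain reconstruction loss; the weights and exact definitions are assumptions, and the histogram-matching and perceptual terms are omitted for brevity.

```python
# Sketch of a combined training loss in the spirit of the abstract: an edge
# term on image gradients plus a color term on channel statistics. Weights
# and term definitions are assumptions, not the paper's loss.
import torch
import torch.nn.functional as F

def grad_xy(img):
    """Finite-difference image gradients of a (B,C,H,W) tensor."""
    return img[..., :, 1:] - img[..., :, :-1], img[..., 1:, :] - img[..., :-1, :]

def combined_loss(pred, target, w_edge=0.5, w_color=0.5):
    px, py = grad_xy(pred)
    tx, ty = grad_xy(target)
    edge = F.l1_loss(px, tx) + F.l1_loss(py, ty)         # edge/texture fidelity
    # Color term: match per-channel spatial means (a crude proxy for cast).
    color = F.l1_loss(pred.mean(dim=(2, 3)), target.mean(dim=(2, 3)))
    return F.l1_loss(pred, target) + w_edge * edge + w_color * color

pred = torch.rand(2, 3, 64, 64, requires_grad=True)
target = torch.rand(2, 3, 64, 64)
loss = combined_loss(pred, target)
loss.backward()
print(loss.item())
```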