Displays · Pub Date: 2025-02-24 · DOI: 10.1016/j.displa.2025.103008
Is red alert always optimal? An empirical study on the effects of red and blue feedback on performance under excessive stress
Zhao Yanzeng, Zhu Keyong, Xu Haixin, Liu Ziyu, Luo Pengyu, Wang Lijing

Background: In critical situations, a pilot's ability to recognize and adjust excessive stress levels is vital for risk mitigation, especially in single-pilot operations, where self-awareness is crucial. Research on monitoring and feedback for excessive stress is limited, and few studies have examined how visual feedback from display interfaces can enhance pilot performance. Traditional alert interfaces predominantly use red feedback, but the distinctive cognitive characteristics of excessive stress may make red feedback counterproductive, so its effectiveness under these conditions needs investigation.

Methods: This study used the MATB (Multi-tasking Ability Task Battery), an effective abstract flight-task experimental prototype, with stress induced in participants through the TSST (Trier Social Stress Test) paradigm. Stress categories were assessed using the Yerkes-Dodson law. Audio signals were used to train a Probabilistic Neural Network (PNN) model for real-time discrimination of excessive stress levels, providing participants with one of three types of visual feedback: no feedback, red feedback, or blue feedback. The experiment followed a within-subjects design with 20 participants.

Results: There were no significant differences in primary-task performance. However, secondary-task performance was significantly worse under red feedback than under blue feedback, while red feedback did not differ significantly from no feedback.

Conclusion: Feedback for excessive stress should account for its distinctive characteristics, and red alerts should be used with caution. The findings provide valuable insights for future human-computer interface design.
Displays · Pub Date: 2025-02-24 · DOI: 10.1016/j.displa.2025.103005
DBNDiff: Dual-branch network-based diffusion model for infrared ship image super-resolution
Cui Gan, Chaofeng Li, Gangping Zhang, Guanghua Fu

Infrared ship image super-resolution (SR) is important for detecting and tracking dim, small ship targets. Large-scale-factor SR of infrared ship images remains challenging, however, because infrared images depend more heavily on global edge information than visible images do. To overcome this challenge, we introduce a novel dual-branch network-based diffusion model (DBNDiff) for infrared ship image SR, which incorporates a noise-prediction (NP) branch and an edge-reconstruction (ER) branch within its conditional noise-prediction network (CNPN). In the NP branch, a hybrid cross-attention (HCA) block mediates the interaction between global and local information for better noise prediction. In the ER branch, stacked ER blocks extract edge information, and an edge loss function is introduced to preserve more edges and details. Extensive experiments on infrared ship image datasets show that DBNDiff outperforms other SR methods, with the best visual quality on large-scale-factor SR tasks.
{"title":"DARF: Depth-Aware Generalizable Neural Radiance Field","authors":"Yue Shi, Dingyi Rong, Chang Chen, Chaofan Ma, Bingbing Ni, Wenjun Zhang","doi":"10.1016/j.displa.2025.102996","DOIUrl":"10.1016/j.displa.2025.102996","url":null,"abstract":"<div><div>Neural Radiance Field (NeRF) has revolutionized novel-view rendering tasks and achieved impressive results. However, the inefficient sampling and per-scene optimization hinder its wide applications. Though some generalizable NeRFs have been proposed, the rendering quality is unsatisfactory due to the lack of geometry and scene uniqueness. To address these issues, we propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy to perform efficient novel view rendering and unsupervised depth estimation on unseen scenes without per-scene optimization. Distinct from most existing generalizable NeRFs, our framework infers the unseen scenes on both pixel level and geometry level with only a few input images. By introducing a pre-trained depth estimation module to derive the depth prior, narrowing down the ray sampling interval to the proximity space of the estimated surface, and sampling in expectation maximum position, we preserve scene characteristics while learning common attributes for novel-view synthesis. Moreover, we introduce a Multi-level Semantic Consistency loss (MSC) to assist with more informative representation learning. Extensive experiments on indoor and outdoor datasets show that compared with state-of-the-art generalizable NeRF methods, DARF reduces samples by 50%, while improving rendering quality and depth estimation. Our code is available on <span><span>https://github.com/shiyue001/DARF.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 102996"},"PeriodicalIF":3.7,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143471347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-level perturbations in image and feature spaces for semi-supervised medical image segmentation","authors":"Feiniu Yuan , Biao Xiang , Zhengxiao Zhang , Changhong Xie , Yuming Fang","doi":"10.1016/j.displa.2025.103001","DOIUrl":"10.1016/j.displa.2025.103001","url":null,"abstract":"<div><div>Consistency regularization has emerged as a vital training strategy for semi-supervised learning. It is very important for medical image segmentation due to rare labeled data. To greatly enhance consistency regularization, we propose a novel Semi-supervised Learning framework with Multi-level Perturbations (SLMP) in both image and feature spaces. In image space, we propose external perturbations with three levels to greatly increase data variations. A low-level perturbation uses traditional augmentation techniques for firstly expanding data. Then, a middle-level one adopts copying and pasting techniques to combine low-level augmented versions of labeled and unlabeled data for generating new images. Middle-level perturbed images contain novel contents, which are totally different from original ones. Finally, a high-level one generates images from middle-level augmented data. In feature space, we design an Indicative Fusion Block (IFB) to propose internal perturbations for randomly mixing the encoded features of middle and high-level augmented images. By utilizing multi-level perturbations, we design a student–teacher semi-supervised learning framework for effectively improving the model resilience to strong variances. Experimental results show that our model achieves the state-of-the-art performance across various evaluation metrics on 2D and 3D medical image datasets. Our model exhibits the powerful capability of feature learning, and significantly outperforms existing state-of-the-art methods. Intensive ablation studies prove that our contributions are effective and significant. The model code is available at <span><span>https://github.com/CamillerFerros/SLMP</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103001"},"PeriodicalIF":3.7,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143471346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2025-02-19 · DOI: 10.1016/j.displa.2025.102995
Investigation into preventing piracy based on the temporal perception difference between devices and humans using modulated projection light
Wenyang Wang, Peng Zhang, Yuanxin Wang, Shoufeng Tong

Piracy of intellectual property and invasions of privacy through the unauthorized capture of images and videos have been increasing rapidly. We introduce a new anti-piracy methodology that uses a light source emitting specially modulated projection light to embed imperceptible watermark patterns in captured images and videos, thereby degrading their quality. The modulation schemes exploit the temporal perception difference between the human visual system (HVS) and image-sensor devices. We employed a model-driven approach to optimize the light-source modulation for effective piracy prevention, and designed experiments to examine how different factors degrade image quality and to evaluate the method's effectiveness. Extensive objective evaluations under different scenarios demonstrate that our method effectively prevents piracy on various smartphones, and subjective tests with volunteers confirm that the modulated light source is indistinguishable to the HVS from a steady light source.
Displays · Pub Date: 2025-02-18 · DOI: 10.1016/j.displa.2025.103004
Large aperture nano-colloidal lenses with dual-hole electrodes for reduced image distortion
Sahul Hameed Syed Ali, Seung-Ho Hong, Jang-Kun Song

Focus-tunable lenses without mechanical components are highly beneficial across various fields, including augmented reality (AR) devices, yet achieving a practical level of this technology is challenging. Recently, nano-colloidal lenses employing two-dimensional (2D) ZrP nanoparticles have been proposed as a simple and promising route to an electric-field-induced focus-tunable lens system. In this study, we investigate the relationship between the electrode design of nano-colloidal lenses and their performance, particularly focal-length tunability and image distortion. In previous designs, increasing the lens size led to significant image distortion. To address this issue, we introduced a dual-hole electrode design and optimized the electrode size, which resulted in wider focal-length tunability and minimized image distortion, even in larger lenses. Additionally, we experimentally measured the refractive-index variation and approximated the nanoparticle distribution to further optimize the lens's focal length and image distortion. Consequently, this study provides a comprehensive model for designing nano-colloidal lenses and electrodes, paving the way for their use in various applications.
{"title":"A multimodal deep learning framework for automated major adverse cardiovascular events prediction in patients with end-stage renal disease integrating clinical and cardiac MRI data","authors":"Xinyue Sun , Siyu Guan , Lianming Wu , Tianyi Zhang , Liang Ying","doi":"10.1016/j.displa.2025.102998","DOIUrl":"10.1016/j.displa.2025.102998","url":null,"abstract":"<div><div>Making the accurate prediction of Major Adverse Cardiovascular Events (MACE) in End-Stage Renal Disease (ESRD) patients is crucial for early intervention and clinical management. Traditional methods for MACE prediction are limited by measurement precision and annotation difficulties, resulting in inherent uncertainties in prediction accuracy. To address these challenges, this paper proposes an automatic multimodal deep learning framework that integrates clinical data, patient history, and cardiac magnetic resonance imaging (MRI) data to precisely predict the probability of MACE in ESRD patient occurrence. The system employs automatic seed generation and region localization on 2D slices, followed by 3D convolutional neural networks (CNNs) to extract both local and global features. Additionally, it incorporates clinical test indicators and medical history data, optimizing the weight distribution among features through a gating mechanism. This approach significantly enhances the accuracy and efficiency of MACE in ESRD patient prediction, demonstrating excellent performance on the dataset composed of 176 cardiovascular cases, with an average accuracy of 0.82 in five-fold cross-validation. It is capable of processing large-scale data without requiring physician involvement in labeling, offering substantial potential for clinical application.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 102998"},"PeriodicalIF":3.7,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Penta-channel waveguide-based near-eye display with two-dimensional pupil expansion","authors":"Chao Ping Chen, Xiaojun Wu, Jinfeng Wang, Baoen Han, Yunfan Yang, Shuxin Liu","doi":"10.1016/j.displa.2025.102999","DOIUrl":"10.1016/j.displa.2025.102999","url":null,"abstract":"<div><div>We present a penta-channel waveguide-based near-eye display as an ultra-wide-angle architecture for the metaverse. The core concept is to divide one field of view into five by placing the couplers within the regions, where only the subsets of field of view are located. Compared to its counterparts, including the single, double, triple and quad channels, our penta-channel waveguide can push the envelope of field of view further. With the aid of <em>k</em>-space diagram, the upper limit of field of view is illustrated and deduced. The design rules of the waveguide, 4-level grating as the in-coupler, and two-dimensional binary grating as the out-coupler are expounded. Through the rigorous coupled-wave analysis, the efficiencies of gratings can be calculated and optimized. As an overall evaluation, its key performance indicators are summarized as follows. Field of view is 109° (diagonal), eye relief is 10 mm, exit pupil is 6.2 × 6.2 mm<sup>2</sup>, and pupil uniformity is 54 %.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 102999"},"PeriodicalIF":3.7,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143454688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ASR-NeSurf: Alleviating structural redundancy in neural surface reconstruction for deformable endoscopic tissues by validity probability","authors":"Qian Zhang , Jianping Lv , Jia Gu , Yingtian Li , Wenjian Qin","doi":"10.1016/j.displa.2025.103000","DOIUrl":"10.1016/j.displa.2025.103000","url":null,"abstract":"<div><div>Accurate reconstruction of dynamic mesh models of human deformable soft tissues in surgical scenarios is critical for a variety of clinical applications. However, due to the challenges of limited sparse views, weak image texture information, uneven illumination intensity and large lens distortion in endoscopic video, the traditional 3D reconstruction methods based on depth estimation and SLAM fail to accurate surface reconstruction. Existing neural radiance field methods, such as Endosurf, have been developed for this problem, while these methods still suffer from inaccurate generation of mesh models with structural redundancy due to limited sparse views. In this paper, we propose a novel neural surface reconstruction method for deformable soft tissues from endoscopic videos, named ASR-NeSurf. Specifically, our approach modifies the volume rendering process by introducing the neural validity probability field to predict the probability of redundant structures. Further, unbiased validity probability volume rendering is employed to generate high-quality geometry and appearance. Experiments on three public datasets with variation of sparse-view and different degrees of deformation demonstrate that ASR-NeSurf significantly outperforms the state-of-the-art neural-field-based method, particularly in reconstructing high-fidelity mesh models.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103000"},"PeriodicalIF":3.7,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143474309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2025-02-16 · DOI: 10.1016/j.displa.2025.102997
2822 PPI active matrix micro-LED display fabricated via Au-Au micro-bump bonding technology
Tianxi Yang, Jie Sun, Yijian Zhou, Yuchen Lu, Jin Li, Zhonghang Huang, Chang Lin, Qun Yan

Owing to their cost-effectiveness and excellent physical properties, indium and tin are frequently used as bump materials for micro-light-emitting diodes (micro-LEDs) and silicon complementary metal-oxide-semiconductor (CMOS) devices in flip-chip bonding. However, as micro-LED pixel sizes and spacings shrink, forming indium or tin bumps that meet bonding requirements becomes challenging: the bumps struggle to form an ideal spherical shape during reflow and readily cause interconnection problems between adjacent pixels, degrading device performance. To address this, we propose a novel Au-Au bump technology for micro-LED flip-chip bonding that effectively avoids interconnection issues while simplifying the micro-LED process flow and reducing production costs. To verify the feasibility of Au-Au micro-bump bonding, we designed a micro-LED device with 2822 PPI, 640 × 360 resolution, and 9 μm pixel pitch. Au bumps with diameters of 3.9 μm and 6.5 μm were fabricated for the micro-LED array and the CMOS driver chip, respectively, and the two were integrated by flip-chip bonding. Cross-sectional analysis confirmed the high reliability and stability of the Au-Au connection, enabling the micro-LED device to function properly. Furthermore, the Au-bump micro-LED exhibits greater electroluminescence (EL) intensity and brightness than the In-bump micro-LED, potentially due to optical losses incurred during indium-bump preparation within the micro-LED chip.