Displays | Pub Date: 2025-07-21 | DOI: 10.1016/j.displa.2025.103161
Pengjun Wu , Wencui Zhang , Peiyuan Li , Yao Liu
{"title":"The application of VR in interior design education to enhance design effectiveness and student experience","authors":"Pengjun Wu , Wencui Zhang , Peiyuan Li , Yao Liu","doi":"10.1016/j.displa.2025.103161","DOIUrl":"10.1016/j.displa.2025.103161","url":null,"abstract":"<div><div>As Interior Design Education (IDE) evolves to meet increasingly complex and diverse demands, traditional teaching methods face limitations in areas such as design presentation, teacher–student interaction, and spatial perception, often leading to reduced learning effectiveness. Virtual Reality (VR), with its immersive and interactive features, offers promising solutions to these challenges. This study developed a VR-based interior design education platform incorporating Level of Detail (LOD) technology to improve instructional precision and learning outcomes. To systematically evaluate the teaching effectiveness, the study employed evaluation indicators grounded in the Technology Acceptance Model (TAM), emphasizing perceived usefulness and perceived ease of use as key dimensions influencing learners’ acceptance of VR technology. Specifically, content comprehensiveness, visual clarity, and spatial understanding were selected as core evaluation metrics reflecting these TAM constructs. An experimental comparison with traditional teaching methods assessed these dimensions. Results showed the VR-based approach significantly outperformed traditional methods, with higher average scores in comprehensiveness (90.68 ± 4.00 vs. 82.35 ± 2.20), visibility (91.08 ± 4.11 vs. 83.66 ± 3.85), and spatial effects (92.98 ± 3.22 vs. 85.64 ± 3.96). These findings highlight the advantages of LOD-enhanced VR teaching in improving clarity and interaction efficiency. 
Focus group interviews further confirmed its effectiveness in enhancing students’ understanding and communication.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103161"},"PeriodicalIF":3.7,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144685861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
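The Level of Detail (LOD) technique mentioned in the abstract above switches between model representations of varying complexity based on viewing conditions. A minimal sketch of distance-based LOD selection follows; the three-threshold scheme and the threshold values are illustrative assumptions, not the platform's actual configuration:

```python
def select_lod(distance, thresholds=(5.0, 15.0, 40.0)):
    """Return an LOD index (0 = most detailed) for a viewer distance.

    thresholds: distances (in scene units) at which detail drops one
    level; these values are illustrative, not from the paper.
    """
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)  # beyond all thresholds: coarsest model
```

A renderer would call this per frame and swap in the mesh corresponding to the returned index, trading geometric detail for interaction latency.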
Displays | Pub Date: 2025-07-20 | DOI: 10.1016/j.displa.2025.103169
Chuandong Tan , Chao Long , Yarui Xi , Zhiting Chen , Xinxin Lin , Fenglin Liu , Yufang Cai , Liming Duan
{"title":"Orthogonal translation computed laminography reconstruction based on self-prior information and adaptive weighted total variation","authors":"Chuandong Tan , Chao Long , Yarui Xi , Zhiting Chen , Xinxin Lin , Fenglin Liu , Yufang Cai , Liming Duan","doi":"10.1016/j.displa.2025.103169","DOIUrl":"10.1016/j.displa.2025.103169","url":null,"abstract":"<div><div>Orthogonal translation computed laminography (OTCL) provides an effective non-destructive testing method for plate-like objects. Nevertheless, OTCL images suffer from aliasing artifacts due to the inherent incompleteness of projection data, negatively impacting flaw characterization, dimensional metrology, and failure analysis. To reveal the cause of these aliasing artifacts, the three-dimensional frequency-domain characteristics of OTCL are analyzed. We further propose a novel reconstruction algorithm to mitigate aliasing artifacts, termed self-prior information guidance and adaptive weighted total variation constraint (SPIG-AwTV). SPIG-AwTV comprises two components: a self-prior information guidance (SPIG) regularization term and an adaptive weighted total variation (AwTV) regularization term. Specifically, SPIG is derived from the filtered backprojection reconstruction result via contour extraction and masking. The AwTV regularization term is tailored to the gradient features of OTCL images in different directions. 
Experimental results demonstrate that the SPIG-AwTV outperforms existing methods in suppressing aliasing artifacts, preserving edges, and achieving higher-quality OTCL images.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103169"},"PeriodicalIF":3.7,"publicationDate":"2025-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144685992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
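The AwTV term described above penalizes image gradients with direction-dependent weights. A minimal sketch of such a direction-weighted total variation penalty follows; the fixed scalar weights `wx` and `wy` are a simplification, since the paper's weights adapt to the local gradient features of OTCL images:

```python
import numpy as np

def awtv(img, wx=1.0, wy=1.0):
    """Direction-weighted (anisotropic) total variation of a 2D image.

    wx, wy: weights on horizontal and vertical gradients. Fixed
    scalars here for illustration; an adaptive scheme would derive
    them from local image structure.
    """
    gx = np.diff(img, axis=1)  # horizontal finite differences
    gy = np.diff(img, axis=0)  # vertical finite differences
    return wx * np.abs(gx).sum() + wy * np.abs(gy).sum()
```

In an iterative reconstruction, this penalty would be added to the data-fidelity objective so that directions prone to aliasing can be regularized more strongly.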
Displays | Pub Date: 2025-07-20 | DOI: 10.1016/j.displa.2025.103167
Han Zhang , Xiaojun Yu , Hengrong Guo , Liang Shen , Zeming Fan
{"title":"DFF-Mono: A lightweight self-supervised monocular depth estimation method based on dual-branch feature fusion","authors":"Han Zhang , Xiaojun Yu , Hengrong Guo , Liang Shen , Zeming Fan","doi":"10.1016/j.displa.2025.103167","DOIUrl":"10.1016/j.displa.2025.103167","url":null,"abstract":"<div><div>Monocular depth estimation is one of the fundamental challenges in 3D scene understanding, particularly when operating within the constraints of unsupervised learning paradigms. While existing self-supervised methods avoid the dependency on annotated depth labels, their high computational complexity significantly hinders deployment on resource-constrained mobile platforms. To address this issue, we propose a parameter-efficient framework, DFF-Mono, that jointly optimizes depth estimation accuracy and computational efficiency. The proposed DFF-Mono framework incorporates three main components. A lightweight encoder that integrates Dual-Kernel Dilated Convolution (DKDC) modules with a Dual-branch Feature Fusion (DFF) architecture performs multi-scale feature encoding, while a novel Attention-guided Large Kernel Inception (ALKI) module with multi-branch large-kernel convolution leverages local–global attention guidance for efficient local feature extraction. As a complement, a frequency-domain optimization strategy is also employed to enhance training efficiency. The strategy is achieved via adaptive Gaussian low-pass filtering, without introducing any additional network parameters. Extensive experiments are conducted to verify the effectiveness of the proposed method, and results demonstrate that DFF-Mono is superior to existing approaches across standard benchmarks. 
Notably, DFF-Mono reduces model parameters by 23% compared to current state-of-the-art solutions while consistently achieving superior depth accuracy.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103167"},"PeriodicalIF":3.7,"publicationDate":"2025-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144704406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
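The frequency-domain strategy above applies adaptive Gaussian low-pass filtering without adding network parameters. A minimal sketch of a (non-adaptive) Gaussian low-pass filter in the 2D Fourier domain follows; treating `sigma` as a fixed cutoff in normalized frequency units is an assumption, since the paper's filter is adaptive:

```python
import numpy as np

def gaussian_lowpass(img, sigma=0.2):
    """Apply a Gaussian low-pass filter to a 2D array in the Fourier domain.

    sigma: Gaussian width in normalized frequency units (illustrative;
    the paper adapts this during training).
    """
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]  # vertical frequencies
    fx = np.fft.fftfreq(img.shape[1])[None, :]  # horizontal frequencies
    H = np.exp(-(fx**2 + fy**2) / (2.0 * sigma**2))  # Gaussian transfer function
    return np.real(np.fft.ifft2(F * H))
```

Because the filter is a fixed analytic function of frequency, it shapes the training signal (e.g., suppressing high-frequency photometric noise) without contributing any learnable weights.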
Displays | Pub Date: 2025-07-20 | DOI: 10.1016/j.displa.2025.103166
Benjamin Beltzung , Marie Pelé , Lison Martinet , Elliot Maitre , Jimmy Falck , Cédric Sueur
{"title":"Using deep learning predictions to study the development of drawing behaviour in children","authors":"Benjamin Beltzung , Marie Pelé , Lison Martinet , Elliot Maitre , Jimmy Falck , Cédric Sueur","doi":"10.1016/j.displa.2025.103166","DOIUrl":"10.1016/j.displa.2025.103166","url":null,"abstract":"<div><div>Drawing behaviour in children provides a unique window into their cognitive development. This study uses Convolutional Neural Networks (CNNs) to examine cognitive development in children’s drawing behaviour by analysing 386 drawings from 193 participants, comprising 150 children aged 2–10 years and 43 adults from France. CNN models, enhanced by Bayesian optimization, were trained to categorize drawings into ten age groups and to compare children’s drawings with those of adults. Results showed that model accuracy increases with the child’s age, reflecting improvement in drawing skills. Techniques like Grad-CAM and Captum offered insights into key features recognized by CNNs, illustrating the potential of deep learning in evaluating developmental milestones, with significant implications for educational psychology and developmental diagnostics.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103166"},"PeriodicalIF":3.7,"publicationDate":"2025-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144678817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays | Pub Date: 2025-07-19 | DOI: 10.1016/j.displa.2025.103128
Yunxin Ye, Feng Shao, Hangwei Chen, Xiongli Chai, Xiaolong Tang
{"title":"Multi-view stereo with cross-scale feature fusion strategy and hybrid depth estimation","authors":"Yunxin Ye, Feng Shao, Hangwei Chen, Xiongli Chai, Xiaolong Tang","doi":"10.1016/j.displa.2025.103128","DOIUrl":"10.1016/j.displa.2025.103128","url":null,"abstract":"<div><div>In multi-view stereo (MVS) 3D reconstruction, existing methods often face challenges such as insufficient feature representation in weakly textured areas, assumptions of equal view contributions, and limited depth estimation accuracy, leading to incomplete reconstruction results. To address these issues, we propose a multi-view stereo method integrating a cross-scale feature fusion strategy and hybrid depth estimation (CH-MVSNet), aimed at improving the precision and completeness of MVS reconstruction. Our approach introduces a multi-scale feature enhancement module (MFEM), which combines channel attention mechanisms with multi-scale feature fusion to enhance features from source and reference images, improving intra-image contextual information and inter-image feature relationships. We then propose a weighted view cost volume module (WVCM), which calculates weighted view correlations to construct a more precise cost volume, further improving reconstruction accuracy. Finally, we incorporate an RGB-guided hybrid depth estimation module (RHDE), which combines classification and regression methods for depth estimation, utilizing RGB information from reference images to optimize the depth map precision. 
Through rigorous testing on the DTU dataset and Tanks and Temples benchmark, our method demonstrates significant improvements in reconstruction accuracy and completeness.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103128"},"PeriodicalIF":3.7,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144704529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
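The weighted view cost volume module (WVCM) described above aggregates per-view matching costs instead of assuming equal view contributions. A simplified sketch of weighted aggregation over per-view correlation volumes follows; here the view weights are supplied directly, whereas the paper's module computes weighted view correlations internally:

```python
import numpy as np

def weighted_cost_volume(ref_feat, src_feats, weights):
    """Aggregate per-view correlation maps with per-view weights.

    ref_feat: reference-view feature map, shape (C, H, W).
    src_feats: list of source-view feature maps warped to the reference
               view, each shape (C, H, W).
    weights: one scalar per source view (given directly here; a learned
             network would predict them).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so contributions sum to 1
    # Per-view correlation: channel-wise mean of the feature product.
    vols = [(ref_feat * s).mean(axis=0) for s in src_feats]  # each (H, W)
    return sum(wi * v for wi, v in zip(w, vols))
```

Down-weighting occluded or poorly matched views in this way is what makes the aggregated cost volume more reliable than a plain average.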
Displays | Pub Date: 2025-07-16 | DOI: 10.1016/j.displa.2025.103162
Li-Te Yin , Guan-Cheng Lin , Pei-Chi Su , Ching-Yung Chen
{"title":"The effect of VR visual training device on accommodative lag in myopic adolescents","authors":"Li-Te Yin , Guan-Cheng Lin , Pei-Chi Su , Ching-Yung Chen","doi":"10.1016/j.displa.2025.103162","DOIUrl":"10.1016/j.displa.2025.103162","url":null,"abstract":"<div><h3>Purpose</h3><div>To investigate whether a virtual reality (VR) visual training device can effectively reduce accommodative lag.</div></div><div><h3>Methods</h3><div>This study recruited 20 myopic adolescents (mean age 16.45 ± 0.95 years) and randomly assigned them to either a treatment group (n = 10) or a control group (n = 10). The treatment group underwent 8 weeks of VR visual training, while the control group received no training. Refractive error, accommodative lag, facility, and amplitude were measured before and after the 8-week period. Independent sample t-tests, repeated measures ANOVA, and paired sample t-tests were used to analyze the data.</div></div><div><h3>Results</h3><div>Baseline comparisons showed no significant differences between the treatment and control groups in accommodative lag, facility, amplitude, or refractive error (p > 0.05). Repeated measures ANOVA revealed significant interaction effects between time and group for all three accommodative functions (p ≤ 0.05). Further within-group analysis indicated that the treatment group exhibited significant improvements in accommodative functions after training compared to baseline (p ≤ 0.05), whereas no significant changes were observed in the control group (p > 0.05). In addition, no significant changes in refractive error were observed in either group over the 8-week period (p > 0.05).</div></div><div><h3>Conclusion</h3><div>The VR visual training device effectively reduces accommodative lag and improves accommodative function in myopic adolescents. 
However, whether it can be applied to myopia control remains unclear and requires further investigation.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103162"},"PeriodicalIF":3.7,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144665769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays | Pub Date: 2025-07-16 | DOI: 10.1016/j.displa.2025.103164
Chongzhe Yan , Feng Liu , Ying Cao , Huijuan Tu , Zi Xu , Wuchao Li , Pinhao Li , Zhiyang Xing , Yi Chen , Zhi-Cheng Li , Yuanshen Zhao , Bo Gao , Rongpin Wang
{"title":"PcPreT-Net: Predicting classification of decline rate in prostate-specific antigen using graph neural network","authors":"Chongzhe Yan , Feng Liu , Ying Cao , Huijuan Tu , Zi Xu , Wuchao Li , Pinhao Li , Zhiyang Xing , Yi Chen , Zhi-Cheng Li , Yuanshen Zhao , Bo Gao , Rongpin Wang","doi":"10.1016/j.displa.2025.103164","DOIUrl":"10.1016/j.displa.2025.103164","url":null,"abstract":"<div><div>Prostate cancer (PCa) is one of the most common causes of cancer-related deaths among men worldwide, with prostate-specific antigen (PSA) serving as a widely accepted biomarker for the diagnosis, treatment monitoring, and prognosis of PCa. Accurate assessment of PSA dynamics is therefore essential for evaluating therapeutic efficacy and disease progression. Magnetic resonance imaging (MRI), widely recognized for its accuracy and non-invasive nature, plays a key role in PCa management. We aim to establish a predictive association between MRI data and PSA decline to enable individualized treatment assessment. This study proposes a hybrid classification model combining a convolutional neural network (CNN) and a graph convolutional network (GCN) to predict the PSA decline rate. The graph nodes are constructed from multiparametric MRI (mp-MRI) images with highlighted tumor regions. The CNN, pretrained to classify Gleason score risk levels, serves as an image feature extractor that extracts semantic features and encodes inter-node relationships. Based on these features, a mapping relationship between mp-MRI and PSA decline rate categories was then developed. Ablation experiments validated the effectiveness of the designed feature extraction framework. 
Comparative tests showed that our model outperformed traditional radiomics, CNN, and vision transformer (ViT) models, achieving an accuracy of 0.870, precision of 0.881, recall of 0.858, and F1-score of 0.872.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103164"},"PeriodicalIF":3.7,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144670505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays | Pub Date: 2025-07-15 | DOI: 10.1016/j.displa.2025.103159
Shuo Zhang, Jiangnan Li, Li Sun, Jiantao Wu, Manpo Li
{"title":"Evaluation method of design elements for scoliosis orthoses based on three-dimensional perspective tracking data","authors":"Shuo Zhang, Jiangnan Li, Li Sun, Jiantao Wu, Manpo Li","doi":"10.1016/j.displa.2025.103159","DOIUrl":"10.1016/j.displa.2025.103159","url":null,"abstract":"<div><div>Product kansei image evaluation is crucial for optimizing product design elements. Developing methods for evaluating kansei image will improve user satisfaction. Traditional methods of assessing kansei image primarily use two-dimensional images, limited by their single perspective and one-sided interaction. To overcome these limitations, this paper proposes a product kansei image evaluation method incorporating three-dimensional perspective tracking data. This study used scoliosis orthosis as an experimental sample and focused on the kansei image word “hidden.” Through kansei image evaluation, we identified the product design elements that most influence the perception of “hidden”, optimizing the design to increase the product’s hiddenness. First, we created an interactive three-dimensional simulation space and placed a three-dimensional model of a scoliosis orthosis inside it. Through this setup, participants could interact in three dimensions with the product, enabling the collection of three-dimensional perspective data. Meanwhile, we divided the scoliosis orthosis into Areas of Interest (AOI) based on the product’s functional regions and acquired eye-tracking data as participants interacted with the product model in the three-dimensional simulation space. The eye-tracking and three-dimensional perspective data were matched with the AOI regions. Finally, we weighted the eye-tracking data with the three-dimensional perspective tracking data to calculate the weighted value of each design element of the scoliosis orthosis associated with the kansei image word “hidden,” thus optimizing the design based on their priority. 
The results demonstrate that evaluating kansei images with three-dimensional perspective tracking data is more accurate than with eye tracking alone. Further analysis reveals that: (1) Constructing a three-dimensional simulation space to display the product model and enable human–computer interaction provides more accurate experimental data than traditional single-perspective two-dimensional images, better reflecting real user-product interactions. (2) Regarding the weight calculation of design elements, three-dimensional perspective tracking data is incorporated based on eye-tracking data, and a weighted method is used for the calculation. This includes visual data obtained by observing the product from different fixation points, providing more information and improving the reliability of the results. (3) Eye-tracking data combined with three-dimensional perspective tracking enables designers to make quick decisions on design elements that need to be optimized. Consequently, designers can adjust scoliosis orthoses promptly, improving patient compliance and orthopedic effectiveness. In this study, we propose a new quantitative method for evaluating kansei images in products and provide new insights into the perceptual design of scoliosis orthoses.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103159"},"PeriodicalIF":3.7,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144685993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
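The weighting step described in the abstract above combines eye-tracking data with three-dimensional perspective tracking data for each Area of Interest (AOI). A minimal sketch of such a per-AOI weighted combination follows; the convex `alpha` balance and the dictionary layout are illustrative assumptions, not the paper's exact formula:

```python
def weighted_aoi_score(eye_scores, view_scores, alpha=0.5):
    """Combine per-AOI eye-tracking and 3D-perspective dwell scores.

    eye_scores, view_scores: dicts mapping AOI name -> normalized score.
    alpha: balance between the two modalities (0.5 here is an
    illustrative assumption).
    """
    return {
        aoi: alpha * eye_scores[aoi] + (1.0 - alpha) * view_scores[aoi]
        for aoi in eye_scores
    }
```

Ranking the AOIs by the combined score would then indicate which design elements contribute most to the "hidden" kansei image and should be optimized first.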
Displays | Pub Date: 2025-07-15 | DOI: 10.1016/j.displa.2025.103155
Haolin Li, Zheng Zhou, Xiaoyan Liu
{"title":"Investigation of the flicker of AMOLED pixel by trap-induced LTPS-TFT current fluctuation model","authors":"Haolin Li, Zheng Zhou, Xiaoyan Liu","doi":"10.1016/j.displa.2025.103155","DOIUrl":"10.1016/j.displa.2025.103155","url":null,"abstract":"<div><div>First frame drop (FFD), low-frequency flicker, and variable refresh rate (VRR) flicker of the 7T1C active-matrix organic light-emitting diode (AMOLED) pixel are simulated in real time. By modeling the time-dependent trap capture/emission behavior, the hysteresis and current fluctuation of low-temperature polysilicon thin-film transistors (LTPS-TFTs) are simulated. The proposed model is then applied to the simulation of the 7T1C AMOLED pixel. All three forms of flicker are simulated by the proposed trap-induced current fluctuation model, and their dependence on frequency and trap properties is also evaluated. Our work provides physical insight for circuit transient analysis and a guideline for AMOLED pixel design regarding reliability issues.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103155"},"PeriodicalIF":3.7,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144713638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays | Pub Date: 2025-07-15 | DOI: 10.1016/j.displa.2025.103151
Mi Zhou , Hao Zhang , Mu Ku Chen , Zihan Geng
{"title":"Implicit feature compression for efficient cloud–edge holographic display","authors":"Mi Zhou , Hao Zhang , Mu Ku Chen , Zihan Geng","doi":"10.1016/j.displa.2025.103151","DOIUrl":"10.1016/j.displa.2025.103151","url":null,"abstract":"<div><div>Holographic displays, with their ability to vividly reconstruct object wavefronts, stand as promising candidates for future immersive display technologies. However, delivering such immersive experiences demands large volumes of holographic data. Compressing holographic data at high compression ratios remains challenging due to the substantial high-frequency content in holograms. To overcome this challenge, we propose an implicit feature compression-based cloud–edge system for efficient holographic display. The distinctive aspect of our approach lies in compressing the implicit features learned during hologram generation into an encoded stream, rather than compressing the hologram itself. This methodology integrates a jointly designed cloud-side encoder and edge-side decoder, with both components performing mixed hologram generation and data compression/decompression. Results on 1,000 augmented DIV2K test images demonstrate that our approach reduces the original data volume by 99.8% on average, and experiments validate its effectiveness. 
This research establishes a technological foundation for the large-scale commercialization of holographic displays.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"90 ","pages":"Article 103151"},"PeriodicalIF":3.7,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144662453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}