Displays, Volume 84, Article 102777 | Pub Date: 2024-06-15 | DOI: 10.1016/j.displa.2024.102777
Title: Field curvature analysis and optimization method for near-eye display systems
Authors: Da Wang, Dewen Cheng, Cheng Yao, Qiwei Wang
Abstract: Near-eye display (NED) systems with a large field of view (FOV) and a large pupil are often accompanied by field curvature. In such cases, there is often a significant deviation between the virtual image seen by the human eye and an ideal plane. Currently, there is a lack of precise methods for describing and controlling the shape of virtual images in visual space. In this paper, the system is modeled, and the curvature is controlled through optimization. Under limited conditions, the system's field curvature is controlled to approach an ideal state. When the system's field curvature cannot be completely corrected, a method for describing the virtual image surface is introduced. This method helps optical designers effectively predict the degree of curvature and the distance of the virtual image. The image quality of the system with a curved image is evaluated and optimized based on the focusing function of the human eye. Additionally, the field curvature and pupil swim are analyzed jointly. The effectiveness of the method is verified by designing two different types of NED systems.
Displays, Volume 84, Article 102780 | Pub Date: 2024-06-15 | DOI: 10.1016/j.displa.2024.102780
Title: Physiological and performance metrics during a cardiopulmonary real-time feedback simulation to estimate cognitive load
Authors: Blanca Larraga-García, Verónica Ruiz Bejerano, Xabier Oregui, Javier Rubio-Bolívar, Manuel Quintana-Díaz, Álvaro Gutiérrez
Open Access PDF: https://www.sciencedirect.com/science/article/pii/S0141938224001446/pdfft?md5=c5a42ab2f90b291bf7b17e4dc7e7121e&pid=1-s2.0-S0141938224001446-main.pdf
Abstract: Multitasking is crucial for First Responders (FRs) in emergency scenarios, enabling them to prioritize and treat victims efficiently. However, research on multitasking and its impact on rescue operations is limited. This study explores the relationship between multitasking, working memory, and the performance of chest compressions during cardiopulmonary resuscitation (CPR). In this experiment, eighteen first-year residents performed a CPR maneuver on a real-time feedback simulator to learn chest compressions, while completing different secondary tasks concurrently with the compressions. Heart rate, respiration rate, galvanic skin response, body temperature, eye gaze movements, and chest compression performance data were collected. The findings indicate that multitasking degraded chest compression quality for all secondary tasks, with a significant effect (p < 0.05) on compression frequency, which worsened in all cases. Vital signs such as heart rate, respiration rate, and eye gaze speed were also affected during multitasking, although the change in vital signs varied with the type of secondary task. In conclusion, performing multiple tasks during chest compressions degrades performance, and understanding cognitive load and its impact on vital signs can aid in training FRs to handle complex scenarios efficiently.
Displays, Volume 84, Article 102776 | Pub Date: 2024-06-15 | DOI: 10.1016/j.displa.2024.102776
Title: GRLN: Gait Refined Lateral Network for gait recognition
Authors: Yukun Song, Xin Mao, Xuxiang Feng, Changwei Wang, Rongtao Xu, Man Zhang, Shibiao Xu
Abstract: Gait recognition aims to identify individuals at a distance based on their biometric gait patterns. While offering flexibility in network input, existing set-based methods often overlook the potential of fine-grained local features by relying solely on global gait features, and they fail to fully exploit the communication between silhouette-level and set-level features. To alleviate this issue, we propose the Gait Refined Lateral Network (GRLN), featuring plug-and-play Adaptive Feature Refinement (AFR) modules that extract discriminative features progressively from silhouette-level and set-level representations in a coarse-to-fine manner at various network depths. AFR can be widely applied in set-based gait recognition models to substantially enhance their recognition performance. To align with the extracted refined features, we introduce Horizontal Stable Mapping (HSM), a novel mapping technique that reduces model parameters while improving experimental results. To demonstrate the effectiveness of our method, we evaluate GRLN on two gait datasets, achieving the highest recognition rate among all set-based methods. Specifically, GRLN demonstrates an average improvement of 1.15% over the state-of-the-art set-based method on CASIA-B. In the coat-wearing condition in particular, GRLN outperforms the baseline method GLN by 5%.
Displays, Volume 84, Article 102774 | Pub Date: 2024-06-13 | DOI: 10.1016/j.displa.2024.102774
Title: Blind quality assessment of night-time photos: A region selective approach
Authors: Zongxi Han, Rong Xie
Abstract: Despite the emergence of low-light enhancement algorithms and associated quality assessment metrics in the literature, few works consider the quality assessment of real night-time photos captured by mobile cameras. In this paper, we handle this task by first constructing a night-time photo database (NPHD), which consists of 510 photos captured by 30 mobile devices in 17 scenes; mean opinion scores were rated by 10 people using the anchor ruler method. Furthermore, we propose a region-selective approach for objective image quality assessment (RSIQA), from which different feature sets are extracted. Specifically, the center and surrounding regions are partitioned for brightness, contrast, vignetting, saturation, and shading. The brightest areas are located as the region in which highlight-suppression capability is evaluated. Finally, we select the foreground and sharpest regions for the assessment of detail preservation, naturalness, noise, and image structure. To map the multiple quality attributes of a night-time photo to a single quality score, four regressors are chosen and compared: support vector regression, decision tree, random forest, and AdaBoost.R2. Experiments on NPHD demonstrate that the proposed RSIQA achieves superior results compared to 17 state-of-the-art quality metrics of four types: conventional general-purpose, deep-learning-based, contrast-oriented, and night-specific.
Displays, Volume 84, Article 102769 | Pub Date: 2024-06-12 | DOI: 10.1016/j.displa.2024.102769
Title: Projection helps to improve visual impact: On a dark or foggy day
Authors: Yan Mao, Xuan Wang, Wu He, Gaofeng Pan
Abstract: Driving is a highly visually demanding activity, and different driving conditions affect drivers differently. To understand the effects of driving vision on drivers, this paper directly investigates the role of central and peripheral vision in different scenarios and tests whether projection training improves driving behavior. We use a VR device to selectively present information in the central and peripheral parts of the field of view. In Experiment 1, we compare the performance of drivers with and without experience when driving through four different visual conditions under dark and foggy skies, recording participants' visual search behavior and driving behavior simultaneously. Experiment 2 tested whether training with circular projections in three colors improves driver behavior. The results showed that (1) central vision is critical to the driver, and the importance of peripheral vision can be directly measured using the VR device; (2) a clear central view with blurred peripheral vision not only improves driver behavior in foggy weather but also helps to improve attention and driving ability; (3) among the color projections, green was more effective than the others and significantly improved driving behavior; and (4) novice drivers collected visual information mainly from central vision and drove less capably than veterans, but green projection improved their driving ability and reduced collisions. Most importantly, the study results provide a new visual training paradigm that can improve driver behavior on dark and foggy days, especially for female novices.
Displays, Volume 84, Article 102772 | Pub Date: 2024-06-12 | DOI: 10.1016/j.displa.2024.102772
Title: Dual-bootstrapping gate driver circuit design using IGZO TFTs
Authors: Congwei Liao, Xin Zheng, Shengdong Zhang
Abstract: To integrate thin-film transistor (TFT) gate-driver circuits into high-resolution, large-size displays with narrow bezels, achieving high speed is a critical challenge. This paper proposes a dual-bootstrapping TFT integrated gate-driver circuit for large-size displays. The over-drive voltage of the driving TFT is increased at both the rising and falling edges of the output waveform. To validate the circuit's feasibility, the proposed circuit was fabricated in amorphous indium-gallium-zinc-oxide (a-IGZO) TFT technology and measured in terms of transient response with cascaded stages, along with reliability tests over a long operating time. Compared to conventional approaches, the proposed gate driver demonstrates a 39% reduction in falling time as well as a compact layout. The proposed gate-driver schematic is therefore well suited for large-size display applications that involve heavy resistance-capacitance (RC) loading and require resolutions of 8K and above.
Displays, Volume 84, Article 102770 | Pub Date: 2024-06-08 | DOI: 10.1016/j.displa.2024.102770
Title: Objectively assessing visual analogue scale of knee osteoarthritis pain using thermal imaging
Authors: Bitao Ma, Jiajie Chen, Xiaoxiao Yan, Zhanzhan Cheng, Nengfeng Qian, Changyin Wu, Wendell Q. Sun
Abstract: Knee osteoarthritis (KOA) is a common degenerative joint disorder that significantly deteriorates the quality of life for affected patients, primarily through the symptom of knee pain. In this study, we developed a machine learning methodology that integrates infrared thermographic technology with health data to objectively evaluate the Visual Analogue Scale (VAS) scores for knee pain in patients suffering from KOA. We preprocessed thermographic data from two healthcare centers by removing background noise and extracting Regions of Interest (ROI), which allowed us to capture image features. These were then merged with patient health data to build a comprehensive feature set. We employed various regression models to predict the VAS scores. The results indicate that the XGBoost model, using a 7:3 training-to-testing ratio, outperformed other models across several evaluation metrics. This study confirms the practicality and effectiveness of using thermographic imaging and machine learning for assessing knee pain, providing a new supportive tool for the management of pain in KOA and potentially increasing the objectivity of clinical assessments. The research is primarily focused on the middle-aged and elderly populations. In the future, we plan to extend the use of this technology to monitor risk factors in children's knees, with the goal of improving their long-term quality of life and enhancing the overall well-being of the population.
Displays, Volume 84, Article 102771 | Pub Date: 2024-06-08 | DOI: 10.1016/j.displa.2024.102771
Title: PalmSecMatch: A data-centric template protection method for palmprint recognition
Authors: Chengcheng Liu, Huikai Shao, Dexing Zhong
Abstract: While existing palmprint recognition research aims to improve accuracy in various situations, it often overlooks the security implications. This paper delves into template protection in palmprint recognition. Existing template protection methods usually cannot strike a good balance between security, accuracy, and usability, which reduces their applicability. In this work, a data-centric approach for palmprint template protection is proposed, called PalmSecMatch. Our solution extracts the key from the plaintext data itself, greatly reducing the dependency on third-party or independent key-generation algorithms. The backbone of PalmSecMatch consists of key data extraction and encryption, order shuffling of the raw vectors, hashing-code generation, and shuffling-basis and hashing-code fading. PalmSecMatch subtly exploits the fact that biometric data are random variables and benefits from its data-centric nature: it allows the same plaintext features to be encrypted into highly different ciphertexts, which greatly strengthens security. At the same time, the data-fading strategy makes it extremely difficult for an attacker to distinguish user data from auxiliary data. The security analysis shows that PalmSecMatch satisfies the requirements of ISO/IEC 24745. Extensive experiments on two public palmprint databases validate the effectiveness of the proposed method.
Displays, Volume 84, Article 102768 | Pub Date: 2024-06-06 | DOI: 10.1016/j.displa.2024.102768
Title: Comparison of visual and multisensory augmented reality for precise manual manipulation tasks
Authors: Xiaotian Zhang, Weiping He, Yunfei Qin, Mark Billinghurst, Jiepeng Dong, Daisong Liu, Jilong Bai, Zenglei Wang
Abstract: Precise manual manipulation is an important skill in daily life, and Augmented Reality (AR) is increasingly being used to support such operations. This article reports on a study investigating the usability of visual and multisensory AR for precise manual manipulation tasks, in particular the representation of detailed deviations from the target pose. Two AR instruction interfaces were developed: the visual deviation instruction and the multisensory deviation instruction. Both interfaces used visual cues to indicate the required directions for manipulation. The difference was that the visual deviation instruction used text and color mapping to represent deviations, whereas the multisensory deviation instruction used sonification and vibration. A user study was conducted with 16 participants to compare the two interfaces. The results found a significant difference only in speed, without significant differences in accuracy, perceived ease of use, workload, or custom user experience elements. Multisensory deviation cues can speed up precise manual manipulation compared to visual deviation cues, but inappropriate sonification and vibration strategies can negatively affect users' subjective experience, offsetting the benefits of multisensory AR. Based on the results, several recommendations were provided for designing AR instruction interfaces to support precise manual manipulation.
Displays, Volume 84, Article 102767 | Pub Date: 2024-06-04 | DOI: 10.1016/j.displa.2024.102767
Title: LDDG: Long-distance dependent and dual-stream guided feature fusion network for co-saliency object detection
Authors: Longsheng Wei, Siyuan Guo, Jiu Huang, Xuan Fan
Abstract: Complex image scenes are a challenge for co-saliency object detection: salient objects may be hard to locate accurately, surrounding background information can interfere with object recognition, and multi-layer collaborative features are difficult to fuse well. To solve these problems, we propose a long-range dependent and dual-stream guided feature fusion network. First, we enhance saliency features with the proposed coordinate attention module so that the network can learn a better feature representation. Second, we capture long-range dependencies in image features with the proposed non-local module, obtaining more comprehensive contextual information in complex scenes. Finally, we propose a dual-stream guided network to fuse multiple layers of co-saliency features. The dual-stream guided network includes a classification stream and a mask stream, and the layers in the decoding network are guided to fuse the features of each layer to output more accurate co-saliency prediction maps. The experimental results show that our method is superior to existing methods on three common datasets: CoSal2015, CoSOD3k, and CoCA.