{"title":"Study on interface fatigue failure of flexible OLED display modules based on cyclic cohesive zone model","authors":"Liting Huang, Dunming Liao, Wenjing Peng, Ying Zhou, Keyu Chen","doi":"10.1016/j.displa.2024.102949","DOIUrl":"10.1016/j.displa.2024.102949","url":null,"abstract":"<div><div>Organic light-emitting diode (OLED) display devices are widely used in the field of electronic display because of their excellent performance, but flexible OLED displays are prone to device damage and peeling failure after repeated bending. The cohesive zone model (CZM) offers clear advantages for modeling interface peeling. In this paper, a cyclic cohesive constitutive model was developed to characterize peeling at the interface between the optically clear adhesive (OCA) and the cover window under constant cyclic loading in flexible display modules. The model was implemented in ABAQUS through the user subroutine that redefines field variables at a material point (USDFLD). Through bending fatigue peeling experiments and finite element simulations of the cover window–OCA interface, the cyclic cohesive constitutive parameters were obtained by the parameter inversion method, and the amplification method for the fatigue damage variable was optimized. Finally, a flexible OLED simulation model with a multilayer stack structure was established. Compared with the experimental results, the simulation model successfully predicted the peeling failure position, and the relative error between the predicted life and the experimental value was 16.6%.
The applicability of the cyclic cohesive zone model in characterizing the fatigue peeling properties of the interface between the cover window and OCA was validated.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102949"},"PeriodicalIF":3.7,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143163095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
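The cyclic cohesive law described above can be illustrated with a minimal sketch: a bilinear traction–separation relation whose stiffness is degraded by a scalar damage variable that grows with each load cycle. All constants and the power-law damage rate below are illustrative assumptions, not the inversely identified parameters from the paper.

```python
import math

def bilinear_traction(delta, k0, delta0, delta_f, damage):
    """Bilinear traction-separation law with a scalar damage variable.

    delta: opening displacement; k0: initial stiffness; delta0: opening at
    peak traction; delta_f: opening at full separation; damage in [0, 1].
    """
    if delta <= 0.0 or delta >= delta_f:
        return 0.0
    if delta <= delta0:                          # linear hardening branch
        t = k0 * delta
    else:                                        # linear softening branch
        t = k0 * delta0 * (delta_f - delta) / (delta_f - delta0)
    return (1.0 - damage) * t                    # stiffness degraded by damage

def cycles_to_failure(delta_max, c, m, delta_f, d_crit=1.0):
    """Predict fatigue life from a hypothetical power-law damage rate
    dD/dN = c * (delta_max / delta_f)**m, integrated until D reaches d_crit."""
    rate = c * (delta_max / delta_f) ** m
    return math.inf if rate == 0.0 else math.ceil(d_crit / rate)
```

With k0 = 100, delta0 = 0.01, and delta_f = 0.1, the undamaged traction peaks at 1.0 at delta0 and falls to zero at delta_f; holding the same peel amplitude every cycle then fixes the damage increment per cycle, which is what makes a constant-amplitude life prediction possible.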
{"title":"CM-SC: Cross-modal spatial-channel attention network for image captioning","authors":"Md. Shamim Hossain , Shamima Aktar , Mohammad Alamgir Hossain , Naijie Gu , Zhangjin Huang","doi":"10.1016/j.displa.2024.102941","DOIUrl":"10.1016/j.displa.2024.102941","url":null,"abstract":"<div><div>In multi-modal reasoning tasks, modern models often encounter difficulties with capturing higher-order interactions and maintaining computational efficiency, particularly when processing long visual sequences. Conventional attention-based models for image captioning generally focus on first-order interactions, limiting their ability to capture intricate relationships between objects within an image. This constraint can hinder the generation of high-quality captions, as nuanced understanding and description of visual content require recognizing these deeper interconnections. In this work, we address these challenges by introducing a novel attention mechanism tailored for cross-modal interactions that selectively utilizes both visual and textual information to generate fine-grained captions. Our attention mechanism, termed Cross-Modal Spatial-Channel (CM-SC), incorporates a flexible variant of cross-variance that effectively captures higher-order interactions within spatial and channel-wise attention distributions across different modalities. By stacking multiple CM-SC attention blocks, our approach facilitates second-order to potentially infinite-order feature interactions without requiring additional parameters. The attention module integrates seamlessly with both LSTM and Transformer frameworks. Notably, our approach reduces average computation time by 20.74% compared to the baseline model and improves performance metrics. Extensive experiments were conducted to validate the proposed framework on the Flickr30K and MSCOCO datasets.
The results show that our approach performs competitively against many contemporary standard methods.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102941"},"PeriodicalIF":3.7,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143163078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
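A rough intuition for why channel-wise attention scales to long visual sequences: the attention map is d × d over channels rather than n × n over tokens, so the cost is O(n·d²) instead of O(n²·d). The sketch below is a generic single-head channel attention in plain Python; it is not the CM-SC block itself (whose cross-variance variant, spatial branch, and higher-order stacking are not reproduced here).

```python
import math

def softmax(row):
    """Numerically stable softmax over one list of scores."""
    mx = max(row)
    exps = [math.exp(v - mx) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def channel_attention(x):
    """Channel-wise attention on an n x d token matrix.

    Builds a d x d channel map from the Gram matrix over channels, then
    re-mixes each token's channels with it -- O(n*d**2) work, independent
    of n**2, which is the efficiency argument for long visual sequences.
    """
    n, d = len(x), len(x[0])
    # Gram matrix over channels: gram[i][j] = sum_t x[t][i] * x[t][j]
    gram = [[sum(x[t][i] * x[t][j] for t in range(n)) for j in range(d)]
            for i in range(d)]
    attn = [softmax(row) for row in gram]        # d x d channel attention map
    # Re-mix the channels of every token with the attention map
    return [[sum(x[t][j] * attn[i][j] for j in range(d)) for i in range(d)]
            for t in range(n)]
```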
{"title":"The influence of neuroticism personality trait on user interaction with game-based hand rehabilitation training","authors":"Dongning Yan , Jing Zhao , Yiming Ma , Han Ma","doi":"10.1016/j.displa.2024.102944","DOIUrl":"10.1016/j.displa.2024.102944","url":null,"abstract":"<div><div>Gamification has emerged as a method in rehabilitation training to increase users’ adherence, subjective perception, and physical fitness. The neuroticism personality trait can actively shape people’s behavior during the rehabilitation process. However, little research has been conducted to study <em>whether</em> and <em>how</em> neuroticism may influence users’ interaction with game-based rehabilitation training. In this manuscript, we conducted a laboratory-controlled experiment that not only measured the influence of neuroticism on users’ perception of game-based hand rehabilitation training but also employed functional near-infrared spectroscopy (fNIRS) to reveal in depth how neuroticism may influence users’ brain functions. The results show that, for people with high neuroticism, the game-based hand rehabilitation training can significantly improve their subjective perceptions and brain functions. The findings could be useful for developing more effective hand rehabilitation training based on user personality.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102944"},"PeriodicalIF":3.7,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143163306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of emotional design and age on learning performance of self-care instructions for diabetes: Evidence from eye-tracking and heart rate variability","authors":"Kaifeng Liu , Jie Li , Pengbo Su , Da Tao","doi":"10.1016/j.displa.2024.102947","DOIUrl":"10.1016/j.displa.2024.102947","url":null,"abstract":"<div><h3>Objective</h3><div>The study aimed to examine the effects of emotional design and age on individuals’ learning performance of diabetes self-care instruction.</div></div><div><h3>Materials and Methods</h3><div>A two-factor (3 × 2) between-subjects design was employed, with emotional design and age as independent variables. Participants (30 young adults and 30 middle-aged and older adults) were required to learn from a series of combined text–illustration instructions in various design formats (i.e., neutral, black-and-white anthropomorphic, and colored anthropomorphic designs). Participants’ learning time, comprehension test score, eye-movement measures, heart rate variability (HRV) measures, and subjective perceptions were measured and analyzed.</div></div><div><h3>Results</h3><div>The three design formats achieved comparable task performance; however, the black-and-white anthropomorphic design resulted in a longer first visit duration. Middle-aged and older adults took longer to learn the instructions and yielded lower comprehension test scores. They also yielded longer total fixation duration, smaller pupil size, longer first visit duration, fewer visits, lower SD2/SD1, and higher perceived cognitive load than younger adults. Eye movement behaviors were also found to differ between the illustration and text portions.</div></div><div><h3>Conclusions</h3><div>Emotional design in the instructions neither facilitated nor hampered learning. Further studies are needed to determine the optimal level of emotional design. Middle-aged and older adults may experience difficulties when learning self-care instructions.
Moreover, it is feasible to implement eye-tracking and HRV techniques when evaluating interface design. Human factors experiments need to be conducted to examine how emotional design manipulations affect individuals’ comprehension, to ensure the learning materials are truly effective when applied in practice.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102947"},"PeriodicalIF":3.7,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143163157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HybriDeformer: A hybrid deformation method for arbitrary 3D avatar controlling","authors":"Zhenbo Yu, Qiaoqiao Jin, Hang Wang, Bingbing Ni, Wenjun Zhang","doi":"10.1016/j.displa.2024.102936","DOIUrl":"10.1016/j.displa.2024.102936","url":null,"abstract":"<div><div>In this paper, we address the task of 3D avatar control, focusing on adjusting the shapes of characters. Most existing approaches fall into two paradigms. Parameter deformation methods control shapes using several parameters, resulting in fast and accurate shape deformations, but they are not suitable for arbitrary 3D avatars. In contrast, non-parameter deformation methods can be applied to any 3D avatar but require substantial labor costs or supervision. To this end, we propose a hybrid deformation method termed <strong>HybriDeformer</strong>, which combines the strengths of both paradigms. The HybriDeformer includes a disentangled parameter deformer (<strong>DP</strong>) and a non-parameter deformer (<strong>NP</strong>). Specifically, the DP deformer allows users to freely deform arbitrary 3D avatars, while the NP deformer employs an optimized deformation strategy to drive each part of the 3D avatar to the desired shape, preserving smoothness as much as possible. Additionally, we present a new avatar dataset named <strong>ASTD</strong> for arbitrary 3D avatar controlling, which can also be used for 3D avatar style transfer. Extensive experimental results on the arbitrary 3D avatar controlling task demonstrate that our method can be used directly by lay users to achieve high-quality deformation results.
Our code and project are available at <span><span>https://sites.google.com/view/hybrideformer</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102936"},"PeriodicalIF":3.7,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143163160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An image inpainting model based on channel attention gated convolution and multi-level attention mechanism","authors":"Sihan Zhao , Chunmeng Li , Chenyang Zhang , Xiaozhong Yang","doi":"10.1016/j.displa.2024.102945","DOIUrl":"10.1016/j.displa.2024.102945","url":null,"abstract":"<div><div>Deep learning-based image inpainting models can effectively improve the clarity and quality of restored images. Most inpainting models fail to fully leverage deep semantic information, which leads to structural inconsistencies or missing details, especially when dealing with images containing large regions of missing information. This paper introduces an image inpainting model that integrates channel attention gated convolution (CAGC) and a multi-level attention mechanism (MAM), referred to as the CAGC-MAM model. The model is based on a generative adversarial network (GAN) and employs a two-stage coarse–fine network structure. Through the CAGC module, it effectively captures global features, mitigating the limitations of standard convolutional receptive fields. The MAM module combines spatial and channel attention to precisely extract critical information, ensuring the accuracy of restored details. Moreover, a joint loss function is applied to enhance detail consistency further.
Qualitative and quantitative analyses on the CelebA-HQ and Paris Street View datasets indicate that the CAGC-MAM model excels in image clarity, detail enhancement, and overall consistency, showing superior computational time and inpainting effectiveness compared with existing classic models.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102945"},"PeriodicalIF":3.7,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143163308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
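The gating idea at the core of CAGC can be sketched in one dimension: a feature branch is multiplied elementwise by a sigmoid gate computed from the same input, which lets the network damp activations coming from masked (missing) pixels. This toy sketch omits the channel attention that CAGC applies on top of the gate, and the kernels in the example are arbitrary.

```python
import math

def gated_conv1d(x, w_feat, w_gate):
    """Gated convolution on a 1-D signal (valid padding, no bias).

    The feature response is modulated by a sigmoid gate learned from the
    same input -- the mechanism that lets inpainting networks suppress
    contributions from masked regions.
    """
    def conv(sig, w):
        k = len(w)
        return [sum(sig[i + j] * w[j] for j in range(k))
                for i in range(len(sig) - k + 1)]
    feat = conv(x, w_feat)                                 # feature branch
    gate = [1.0 / (1.0 + math.exp(-g)) for g in conv(x, w_gate)]  # gate in (0,1)
    return [f * g for f, g in zip(feat, gate)]
```

With an all-zero gate kernel the gate is 0.5 everywhere, so the output is simply half the feature response; a trained gate instead approaches 0 over holes and 1 over valid pixels.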
{"title":"Performing a task in an augmented reality head-mounted display can change accommodation responses","authors":"Walter K. Yego, Stuart J. Gilson, Rigmor C. Baraas, Ellen Svarverud","doi":"10.1016/j.displa.2024.102938","DOIUrl":"10.1016/j.displa.2024.102938","url":null,"abstract":"<div><div>The recent increase in the use of augmented reality (AR) head-mounted displays (HMDs) has been accompanied by concerns about their potential effects on the oculomotor system, due to the vergence–accommodation conflict. Studies have reported symptoms of visual discomfort from performing visually and cognitively demanding procedural tasks in AR, but the extent to which AR affects vergence and accommodation responses is not known. Here, the aim was to investigate how performing a visually and cognitively demanding 3D task in an AR-HMD affects vergence and accommodation responses. Thirty-five young adults manipulated virtual objects at around 40 cm (near) using their hands to match a configuration presented on a physical 2D display at 4 m (distance). Before and after performing the task, simultaneous vergence and accommodation responses were measured.
Accommodation but not vergence responses were affected after performing the task in AR. These findings suggest that using AR-HMDs with a fixed focal plane for visually and cognitively demanding tasks might give rise to short-term visuo-oculomotor changes.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102938"},"PeriodicalIF":3.7,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143163159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
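The vergence–accommodation conflict behind these concerns is easy to quantify: vergence follows the virtual object's rendered distance while accommodation is driven by the HMD's fixed focal plane, and the conflict is the difference between the two demands in diopters (1 D = 1 m⁻¹). The 2 m focal distance in the example below is an assumed value for illustration, not a figure reported in the abstract.

```python
def vac_diopters(object_dist_m, focal_plane_m):
    """Vergence-accommodation conflict in diopters: vergence demand of an
    object rendered at object_dist_m versus accommodation demand of a fixed
    focal plane at focal_plane_m (demand in D = 1 / distance in m)."""
    return abs(1.0 / object_dist_m - 1.0 / focal_plane_m)
```

An object manipulated at 40 cm in an HMD with an assumed 2 m focal plane imposes |2.5 − 0.5| = 2.0 D of conflict, whereas the real 2D display at 4 m imposes essentially none.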
{"title":"Fusing global–local feature bank for single image super-resolution","authors":"Zhiyuan Xu, Chuan Lin, Hao Yan, Ningning Guo","doi":"10.1016/j.displa.2024.102932","DOIUrl":"10.1016/j.displa.2024.102932","url":null,"abstract":"<div><div>Previous work has shown that Transformer-based methods, which have achieved remarkable success in natural language processing (NLP), have also made significant strides in image super-resolution (e.g., SwinIR). However, these methods primarily focus on dynamically establishing long-range relationships between pixels and emphasize the reconstruction of image edges and overall structure, tending to overlook local texture details, which makes it challenging to recover finer images. In order to obtain more texture information for better reconstruction, the global–local feature bank fusion network (GLFBFNet) is presented. It is a simple but effective method that attends to local contextual information while modeling long-range dependencies, and establishes a feature bank to store the extracted features, enabling comprehensive information to participate in super-resolution image reconstruction. The core components of GLFBFNet are the dual branch block (DBB) and the global–local feature bank (GLFB). The dual branch block (DBB) strikes a balance between global and local modeling, facilitating their collaborative involvement in super-resolution reconstruction. The global–local feature bank (GLFB), despite its simple structure, prevents the loss of crucial information, thereby providing richer information for reconstruction. These two core components are straightforward to implement and can be easily applied to existing Transformer-based methods.
Experimental results demonstrate that our GLFBFNet achieves PSNR scores of 33.89 dB and 39.74 dB on the Urban100 and Manga109 datasets, respectively, surpassing SwinIR by 0.49 dB and 0.14 dB.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102932"},"PeriodicalIF":3.7,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143163309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
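For reference, PSNR figures like those quoted above are computed from the mean squared error against the ground-truth image. A minimal sketch follows; note that standard super-resolution benchmarks additionally evaluate on the luminance (Y) channel and crop image borders by the scale factor, which this sketch omits.

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images
    given as flat pixel sequences: 10 * log10(max_val**2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0.0:
        return math.inf          # identical images: PSNR is unbounded
    return 10.0 * math.log10(max_val ** 2 / mse)
```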
{"title":"Effect of color palette and text encoding on infrared thermal imaging perception with various screen resolution accuracy","authors":"Xinru Tian , Yunfeng Xie , Xiaoteng Tang","doi":"10.1016/j.displa.2024.102939","DOIUrl":"10.1016/j.displa.2024.102939","url":null,"abstract":"<div><div>Thermal imaging cameras are predominantly employed for temperature detection in a wide range of production and daily-life scenes. However, their temperature sensing efficiency is frequently influenced by the interface display mode and device display accuracy, which in turn affects the efficiency of equipment temperature measurement and identification. In this study, a comprehensive analysis of color modes and screen resolution in infrared thermal imagers was conducted to ascertain the optimal color mode for different resolutions. To thoroughly investigate the key factors influencing the perception of the display interface in thermal imaging cameras, two sets of experiments were conducted, with a specific focus on factors associated with object contour perception and text recognition. The research findings underscore that color palettes affect object perceptual recognition: the rainbow and iron red palette modes exhibit enhanced efficiency at lower resolutions (120 ppi, 256 ppi), whereas the grey palette mode demonstrates superior performance at higher resolutions. Additionally, it was observed that positive polarity improves text recognition, while character position has no effect on efficiency.
This article provides guidance and recommendations for the color and text coding design of infrared interface displays at different display precisions.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102939"},"PeriodicalIF":3.7,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143163310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
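A palette in this context is simply a mapping from normalized temperature to color. The sketch below interpolates an "iron red"-style ramp (black → red → yellow → white) through hand-picked control points; these points are illustrative assumptions, not the calibrated palettes evaluated in the study.

```python
def iron_palette(t):
    """Map a normalized temperature t in [0, 1] to an (R, G, B) triple by
    piecewise-linear interpolation through iron-red-style control points."""
    stops = [(0.00, (0, 0, 0)),        # coldest: black
             (0.33, (180, 0, 60)),     # deep red
             (0.66, (255, 150, 0)),    # orange-yellow
             (1.00, (255, 255, 255))]  # hottest: white
    t = min(max(t, 0.0), 1.0)          # clamp out-of-range temperatures
    for (t0, c0), (t1, c1) in zip(stops, stops[1:]):
        if t <= t1:
            f = (t - t0) / (t1 - t0)   # position within this segment
            return tuple(round(a + f * (b - a)) for a, b in zip(c0, c1))
    return stops[-1][1]
```

A grey palette would be the degenerate case with stops at black and white only, which is one way to think about why palette choice interacts with display resolution: multi-hue ramps add contour cues that matter most when spatial detail is scarce.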
{"title":"Graph confidence intercalibration network for intracranial aneurysm lesion instance segmentation in DSA","authors":"Haili Ye , Yancheng Mo , Chen Tang , Mingqian Liao , Xiaoqing Zhang , Limeng Dai , Baihua Li , Jiang Liu","doi":"10.1016/j.displa.2024.102929","DOIUrl":"10.1016/j.displa.2024.102929","url":null,"abstract":"<div><div>Intracranial aneurysm (IA) lesion segmentation is important for IA treatment and prognosis. Although existing deep network-based instance segmentation methods achieve good IA lesion segmentation results on digital subtraction angiography (DSA) images, they still face great challenges with instance confidence bias and imprecise boundary segmentation, which may negatively affect IA diagnosis. To tackle these problems, this paper proposes a novel graph confidence intercalibration network (GCINet) to automatically segment IA lesions from DSA images. Specifically, we design a graph confidence intercalibration (GCI) module to mitigate instance confidence bias by dynamically adjusting instance confidence distributions. At the same time, we propose an edge space perception (ESP) module to correct ambiguous segmentation boundaries. Extensive experiments on a clinical IA-DSA and a publicly available LiTS dataset demonstrate that our GCINet outperforms state-of-the-art methods.
Additionally, visual analysis and ablation studies are provided to verify the effectiveness of each module in GCINet.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102929"},"PeriodicalIF":3.7,"publicationDate":"2024-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143163307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}