Displays: Latest Articles

DefocusSR2: An efficient depth-guided and distillation-based framework for defocus images super-resolution
IF 3.7, CAS Q2, Engineering & Technology
Displays Pub Date: 2024-11-15 DOI: 10.1016/j.displa.2024.102883
Qipei Li, Da Pan, Zefeng Ying, Qirong Liang, Ping Shi
{"title":"DefocusSR2: An efficient depth-guided and distillation-based framework for defocus images super-resolution","authors":"Qipei Li,&nbsp;Da Pan,&nbsp;Zefeng Ying,&nbsp;Qirong Liang,&nbsp;Ping Shi","doi":"10.1016/j.displa.2024.102883","DOIUrl":"10.1016/j.displa.2024.102883","url":null,"abstract":"<div><div>Existing image super-resolution (SR) methods often lead to oversharpening, particularly in defocused images. However, we have observed that defocused regions and focused regions present different levels of recovery difficulty. This observation opens up opportunities for more efficient enhancements. In this paper, we introduce DefocusSR2, an efficient framework designed for super-resolution of defocused images. DefocusSR2 consists of two main modules: Depth-Guided Segmentation (DGS) and Defocus-Aware Classify Enhance (DCE). In the DGS module, we utilize MobileSAM, guided by depth information, to accurately segment the input image and generate defocus maps. These maps provide detailed information about the locations of defocused areas. In the DCE module, we crop the defocus map and classify the segments into defocused and focused patches based on a predefined threshold. Through knowledge distillation and the fusion of blur kernel matching, the network retains the fuzzy kernel to reduce computational load. Practically, the defocused patches are fed into the Efficient Blur Match SR Network (EBM-SR), where the blur kernel is preserved to alleviate computational demands. The focused patches, on the other hand, are processed using more computationally intensive operations. Thus, DefocusSR2 integrates defocus classification and super-resolution within a unified framework. Experiments demonstrate that DefocusSR2 can accelerate most SR methods, reducing the FLOPs of SR models by approximately 70% while maintaining state-of-the-art SR performance.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"86 ","pages":"Article 102883"},"PeriodicalIF":3.7,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142706179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Mambav3d: A mamba-based virtual 3D module stringing semantic information between layers of medical image slices
IF 3.7, CAS Q2, Engineering & Technology
Displays Pub Date: 2024-11-15 DOI: 10.1016/j.displa.2024.102890
Xiaoxiao Liu, Yan Zhao, Shigang Wang, Jian Wei
{"title":"Mambav3d: A mamba-based virtual 3D module stringing semantic information between layers of medical image slices","authors":"Xiaoxiao Liu,&nbsp;Yan Zhao,&nbsp;Shigang Wang,&nbsp;Jian Wei","doi":"10.1016/j.displa.2024.102890","DOIUrl":"10.1016/j.displa.2024.102890","url":null,"abstract":"<div><div>High-precision medical image segmentation provides a reliable basis for clinical analysis and diagnosis. Researchers have developed various models to enhance the segmentation performance of medical images. Among these methods, two-dimensional models such as Unet exhibit a simple structure, low computational resource requirements, and strong local feature capture capabilities. However, their spatial information utilization is insufficient, limiting their segmentation accuracy. Three-dimensional models, such as 3D Unet, utilize spatial information more fully and are suitable for complex tasks, but they require high computational resources and have limited real-time performance. In this paper, we propose a virtual 3D module (Mambav3d) based on mamba, which introduces spatial information into 2D segmentation tasks to more fully integrate the 3D information of the image and further improve segmentation accuracy under conditions of low computational resource requirements. Mambav3d leverages the properties of hidden states in the state space model, combined with the shift of visual perspective, to incorporate semantic information between different anatomical planes in different slices of the same 3D sample. The voxel segmentation is converted to pixel segmentation to reduce model training data requirements and model complexity while ensuring that the model integrates 3D information and enhances segmentation accuracy. The model references the information from previous layers when labeling the current layer, thereby facilitating the transfer of semantic information between slice layers and avoiding the high computational cost associated with using structures such as Transformers between layers. We have implemented Mambav3d on Unet and evaluated its performance on the BraTs, Amos, and KiTs datasets, demonstrating superiority over other state-of-the-art methods.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102890"},"PeriodicalIF":3.7,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Luminance decomposition and Transformer based no-reference tone-mapped image quality assessment
IF 3.7, CAS Q2, Engineering & Technology
Displays Pub Date: 2024-11-14 DOI: 10.1016/j.displa.2024.102881
Zikang Chen, Zhouyan He, Ting Luo, Chongchong Jin, Yang Song
{"title":"Luminance decomposition and Transformer based no-reference tone-mapped image quality assessment","authors":"Zikang Chen ,&nbsp;Zhouyan He ,&nbsp;Ting Luo ,&nbsp;Chongchong Jin ,&nbsp;Yang Song","doi":"10.1016/j.displa.2024.102881","DOIUrl":"10.1016/j.displa.2024.102881","url":null,"abstract":"<div><div>Tone-Mapping Operators (TMOs) play a crucial role in converting High Dynamic Range (HDR) images into Tone-Mapped Images (TMIs) with standard dynamic range for optimal display on standard monitors. Nevertheless, TMIs generated by distinct TMOs may exhibit diverse visual artifacts, highlighting the significance of TMI Quality Assessment (TMIQA) methods in predicting perceptual quality and guiding advancements in TMOs. Inspired by luminance decomposition and Transformer, a new no-reference TMIQA method based on deep learning is proposed in this paper, named LDT-TMIQA. Specifically, a TMI will change under the influence of different TMOs, potentially resulting in either over-exposure or under-exposure, leading to structure distortion and changes in texture details. Therefore, we first decompose the luminance channel of a TMI into a base layer and a detail layer that capture structure information and texture information, respectively. Then, they are employed with the TMI collectively as inputs to the Feature Extraction Module (FEM) to enhance the availability of prior information on luminance, structure, and texture. Additionally, the FEM incorporates the Cross Attention Prior Module (CAPM) to model the interdependencies among the base layer, detail layer, and TMI while employing the Iterative Attention Prior Module (IAPM) to extract multi-scale and multi-level visual features. Finally, a Feature Selection Fusion Module (FSFM) is proposed to obtain final effective features for predicting the quality scores of TMIs by reducing the weight of unnecessary features and fusing the features of different levels with equal importance. Extensive experiments on the publicly available TMI benchmark database indicate that the proposed LDT-TMIQA reaches the state-of-the-art level.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102881"},"PeriodicalIF":3.7,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Precise subpixel luminance extraction method for De-Mura of AMOLED displays
IF 3.7, CAS Q2, Engineering & Technology
Displays Pub Date: 2024-11-14 DOI: 10.1016/j.displa.2024.102889
Zhong Zheng, Zhaohua Zhou, Ruipeng Chen, Jiajie Liu, Chun Liu, Lirong Zhang, Lei Zhou, Miao Xu, Lei Wang, Weijing Wu, Junbiao Peng
{"title":"Precise subpixel luminance extraction method for De-Mura of AMOLED displays","authors":"Zhong Zheng,&nbsp;Zhaohua Zhou,&nbsp;Ruipeng Chen,&nbsp;Jiajie Liu,&nbsp;Chun Liu,&nbsp;Lirong Zhang,&nbsp;Lei Zhou,&nbsp;Miao Xu,&nbsp;Lei Wang,&nbsp;Weijing Wu,&nbsp;Junbiao Peng","doi":"10.1016/j.displa.2024.102889","DOIUrl":"10.1016/j.displa.2024.102889","url":null,"abstract":"<div><div>Currently, Mura defects have a significant impact on the yield of AMOLED panels, and De-Mura plays a critical role in the compensation. To enhance the applicability of the subpixel luminance extraction method in De-Mura and to address inaccuracies caused by aperture diffraction limit and geometric defocusing in camera imaging, this paper proposes a precise extraction method based on effective area. We establish the concept of the effective area first and then determine the effective area of subpixel imaging on the camera sensor by incorporating the circle of confusion (CoC) caused by aperture diffraction limits and geometric defocusing. Finally, more precise luminance information is obtained. Results show that, after compensation, the Mura on the white screen is almost eliminated subjectively. Objectively, by constructing normalized luminance curves for subpixels in Mura regions, the standard deviation indicates that our method outperforms the traditional whole-pixel method, improving uniformity by approximately 50%.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"86 ","pages":"Article 102889"},"PeriodicalIF":3.7,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142706177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Font and background color combinations influence recognition efficiency: A novel method via primary color Euclidean distance and response surface analysis
IF 3.7, CAS Q2, Engineering & Technology
Displays Pub Date: 2024-11-12 DOI: 10.1016/j.displa.2024.102873
Wenchao Zhu, Zeliang Cheng, Qi Wang, Jing Du, Yingzi Lin
{"title":"Font and background color combinations influence recognition efficiency: A novel method via primary color Euclidean distance and response surface analysis","authors":"Wenchao Zhu ,&nbsp;Zeliang Cheng ,&nbsp;Qi Wang ,&nbsp;Jing Du ,&nbsp;Yingzi Lin","doi":"10.1016/j.displa.2024.102873","DOIUrl":"10.1016/j.displa.2024.102873","url":null,"abstract":"<div><div>The readability of human–computer interfaces impacts the users’ visual performance while using electronic devices, which gains inadequate attention. This situation is critical during high-stress conditions such as firefighting, where accurate and fast information processing is critical. This study addresses how font and background color combinations on Liquid Crystal displays (LCDs) affect recognition efficiency. A novel concept, primary color Euclidean distance (PCED), is introduced and testified under a repeated-measures experiment. Three factors were investigated: background color (black, white), font color (red, green, blue), and PCEDs. A total of 24 participants were recruited. Results demonstrate that color combinations with specific PCED values can substantially impact recognition efficiency. By using RSA, this study modelled the response time in a generalized mathematical model, which is response surface analysis. Results showed that blue font colors under a black background showed the longest response time. This study also explored the influence of physical stress on recognition efficiency, revealing a latency of about 100 ms across all color combinations. The findings offer a methodological advancement in understanding the effects of color combinations in digital displays, setting the stage for future research in diverse demographic and technological contexts, including mixed reality.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102873"},"PeriodicalIF":3.7,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142706427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GLDBF: Global and local dual-branch fusion network for no-reference point cloud quality assessment
IF 3.7, CAS Q2, Engineering & Technology
Displays Pub Date: 2024-11-09 DOI: 10.1016/j.displa.2024.102882
Zhichao Chen, Shuyu Xiao, Yongfang Wang, Yihan Wang, Hongming Cai
{"title":"GLDBF: Global and local dual-branch fusion network for no-reference point cloud quality assessment","authors":"Zhichao Chen ,&nbsp;Shuyu Xiao ,&nbsp;Yongfang Wang ,&nbsp;Yihan Wang ,&nbsp;Hongming Cai","doi":"10.1016/j.displa.2024.102882","DOIUrl":"10.1016/j.displa.2024.102882","url":null,"abstract":"<div><div>No-reference Point Cloud Quality Assessment (NR-PCQA) is a challenge in the field of media quality assessment, such as inability to accurately capture quality-related features due to the unique scattered structure of points and less considering global features and local features jointly in the existing no-reference PCQA metrics. To address these challenges, we propose a Global and Local Dual-Branch Fusion (GLDBF) network for no-reference point cloud quality assessment. Firstly, sparse convolution is used to extract the global quality feature of distorted Point Clouds (PCs). Secondly, graph weighted PointNet++ is proposed to extract the multi-level local features of point cloud, and the offset attention mechanism is further used to enhance local effective features. Transformer-based fusion module is also proposed to fuse multi-level local features. Finally, we joint the global and local dual branch fusion modules via multilayer perceptron to predict the quality score of distorted PCs. Experimental results show that the proposed algorithm can achieves state-of-the-art performance compared with existing methods in assessing the quality of distorted PCs.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102882"},"PeriodicalIF":3.7,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Weighted ensemble deep learning approach for classification of gastrointestinal diseases in colonoscopy images aided by explainable AI
IF 3.7, CAS Q2, Engineering & Technology
Displays Pub Date: 2024-11-06 DOI: 10.1016/j.displa.2024.102874
Faruk Enes Oğuz, Ahmet Alkan
{"title":"Weighted ensemble deep learning approach for classification of gastrointestinal diseases in colonoscopy images aided by explainable AI","authors":"Faruk Enes Oğuz ,&nbsp;Ahmet Alkan","doi":"10.1016/j.displa.2024.102874","DOIUrl":"10.1016/j.displa.2024.102874","url":null,"abstract":"<div><div>Gastrointestinal diseases are significant health issues worldwide, requiring early diagnosis due to their serious health implications. Therefore, detecting these diseases using artificial intelligence-based medical decision support systems through colonoscopy images plays a critical role in early diagnosis. In this study, a deep learning-based method is proposed for the classification of gastrointestinal diseases and colon anatomical landmarks using colonoscopy images. For this purpose, five different Convolutional Neural Network (CNN) models, namely Xception, ResNet-101, NASNet-Large, EfficientNet, and NASNet-Mobile, were trained. An ensemble model was created using class-based recall values derived from the validation performances of the top three models (Xception, ResNet-101, NASNet-Large). A user-friendly Graphical User Interface (GUI) was developed, allowing users to perform classification tasks and use Gradient-weighted Class Activation Mapping (Grad-CAM), an explainable AI tool, to visualize the regions from which the model derives information. Grad-CAM visualizations contribute to a better understanding of the model’s decision-making processes and play an important role in the application of explainable AI. In the study, eight labels, including anatomical markers such as z-line, pylorus, and cecum, as well as pathological findings like esophagitis, polyps, and ulcerative colitis, were classified using the KVASIR V2 dataset. The proposed ensemble model achieved a 94.125% accuracy on the KVASIR V2 dataset, demonstrating competitive performance compared to similar studies in the literature. Additionally, the precision and F1 score values ​​of this model are equal to 94.168% and 94.125%, respectively. These results suggest that the proposed method provides an effective solution for the diagnosis of GI diseases and can be beneficial for medical education.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102874"},"PeriodicalIF":3.7,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Virtual reality in medical education: Effectiveness of Immersive Virtual Anatomy Laboratory (IVAL) compared to traditional learning approaches
IF 3.7, CAS Q2, Engineering & Technology
Displays Pub Date: 2024-11-06 DOI: 10.1016/j.displa.2024.102870
Mohammed Kadri, Fatima-Ezzahra Boubakri, Timothy Teo, Fatima-Zahra Kaghat, Ahmed Azough, Khalid Alaoui Zidani
{"title":"Virtual reality in medical education: Effectiveness of Immersive Virtual Anatomy Laboratory (IVAL) compared to traditional learning approaches","authors":"Mohammed Kadri ,&nbsp;Fatima-Ezzahra Boubakri ,&nbsp;Timothy Teo ,&nbsp;Fatima-Zahra Kaghat ,&nbsp;Ahmed Azough ,&nbsp;Khalid Alaoui Zidani","doi":"10.1016/j.displa.2024.102870","DOIUrl":"10.1016/j.displa.2024.102870","url":null,"abstract":"<div><div>Immersive Virtual Anatomy Laboratory (IVAL) is an innovative learning tool that combines virtual reality and serious games elements to enhance anatomy education. This experimental study compares IVAL with traditional learning methods in terms of educational effectiveness and user acceptance. An experimental design was implemented with 120 undergraduate health-science students, randomly assigned to two groups: an experimental group using IVAL, and a control group following traditional learning methods. Data collection focused on quantitative measures such as pretest and posttest vocabulary assessment scores and task completion times, alongside qualitative measures obtained through a user experience questionnaire. This study utilizes the Technology Acceptance Model (TAM), incorporating variables such as Perceived Usefulness and Perceived Ease of Use. Results revealed significant improvements in the experimental group, with a 55.95% increase in vocabulary scores and an 18.75% reduction in task completion times compared to the control group. Qualitative data indicated that IVAL users reported greater Perceived Usefulness of the technology, improved Perceived Ease of Use, a more positive Attitude Towards Using IVAL, and stronger Behavioral Intention to continue using IVAL for anatomy learning. This study demonstrates that the integration of immersive virtual reality in the IVAL approach offers a promising method to enhance anatomy education. The findings provide insights into the effectiveness of immersive learning environments in improving learning outcomes and user acceptance. While further research is needed to explore long-term effects, this innovative approach not only enhances the effectiveness and enjoyment of anatomy learning but also provides valuable data on optimizing educational technology for improved learning outcomes.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102870"},"PeriodicalIF":3.7,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CIFTC-Net: Cross information fusion network with transformer and CNN for polyp segmentation
IF 3.7, CAS Q2, Engineering & Technology
Displays Pub Date: 2024-11-02 DOI: 10.1016/j.displa.2024.102872
Xinyu Li, Qiaohong Liu, Xuewei Li, Tiansheng Huang, Min Lin, Xiaoxiang Han, Weikun Zhang, Keyan Chen, Yuanjie Lin
{"title":"CIFTC-Net: Cross information fusion network with transformer and CNN for polyp segmentation","authors":"Xinyu Li ,&nbsp;Qiaohong Liu ,&nbsp;Xuewei Li ,&nbsp;Tiansheng Huang ,&nbsp;Min Lin ,&nbsp;Xiaoxiang Han ,&nbsp;Weikun Zhang ,&nbsp;Keyan Chen ,&nbsp;Yuanjie Lin","doi":"10.1016/j.displa.2024.102872","DOIUrl":"10.1016/j.displa.2024.102872","url":null,"abstract":"<div><div>Polyp segmentation plays a crucial role in the early diagnosis and treatment of colorectal cancer, which is the third most common cancer worldwide. Despite remarkable successes achieved by recent deep learning-related works, accurate segmentation of polyps remains challenging due to the diversity in their shapes, sizes, appearances, and other factors. To address these problems, a novel cross information fusion network with Transformer and convolutional neural network (CNN) for polyp segmentation, named CIFTC-Net, is proposed to improve the segmentation performance of colon polyps. In particular, a dual-branch encoder with Pyramid Vision Transformer (PVT) and ResNet50 is employed to take full advantage of both the global semantic information and local spatial features to enhance the feature representation ability. To effectively fuse the two types of features, a new global–local feature fusion (GLFF) module is designed. Additionally, in the PVT branch, a multi-scale feature integration (MSFI) module is introduced to fuse multi-scale features adaptively. At the bottom of the model, a multi-scale atrous pyramid bridging (MSAPB) module is proposed to achieve rich and robust multi-level features and improve the segmentation accuracy. Experimental results on four public polyp segmentation datasets demonstrate that CIFTC-Net surpasses current state-of-the-art methods across various metrics, showcasing its superiority in segmentation accuracy, generalization ability, and handling of complex images.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102872"},"PeriodicalIF":3.7,"publicationDate":"2024-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142592945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From hardware to software integration: A comparative study of usability and safety in vehicle interaction modes
IF 3.7, CAS Q2, Engineering & Technology
Displays Pub Date: 2024-10-30 DOI: 10.1016/j.displa.2024.102869
Haibo Yin, Rui Li, Yingjie Victor Chen
{"title":"From hardware to software integration: A comparative study of usability and safety in vehicle interaction modes","authors":"Haibo Yin ,&nbsp;Rui Li ,&nbsp;Yingjie Victor Chen","doi":"10.1016/j.displa.2024.102869","DOIUrl":"10.1016/j.displa.2024.102869","url":null,"abstract":"<div><div>The increasing advancement of human–machine interaction (HMI) technology has brought the modes of vehicle HMI into focus, as they are closely related to driver and passenger safety and directly affect the travel experiences. This study compared the usability and safety of three vehicle HMI modes: hardware interaction (HI), hardware and software interaction (HSI), and software interaction (SI). The evaluation comprised two dimensions: usability and safety. Sixty participants’ performance on these tasks was evaluated at two driving speeds (30 km/h and 60 km/h). The results of the nonparametric tests indicated significant differences between the three interaction modes: (1) HI was the highest safety-oriented interaction mode with participants had the highest average vehicle speed and maximum acceleration measured at 60 km/h and the lowest glance frequency at both speeds; (2) HSI was the most usable interaction mode. Participants had the shortest task-completion time measured at 60 km/h and the highest score on the NASA-TLX and SUS scales taken for both speeds; (3) SI was the lowest secure and usable in-vehicle interaction mode. Participants had the longest task-completion time at 60 km/h, the highest error frequency under 30 and 60 km/h and the highest glance frequency, the longest total glance duration and the longest average glance time. In conclusion, HI and HSI were more secure and usable in-vehicle interaction modes than SI. From a theoretical exploration perspective, this paper elaborates on some exploratory thoughts and innovative ideas for practical application to the screen HMI mode selection and design in intelligent vehicle cabins.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102869"},"PeriodicalIF":3.7,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142572264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0