Latest Articles in Displays

LLD-GAN: An end-to-end network for low-light image demosaicking
IF 3.7 · CAS Region 2 · Engineering & Technology
Displays Pub Date: 2024-10-18 DOI: 10.1016/j.displa.2024.102856
Li Wang, Cong Shi, Shrinivas Pundlik, Xu Yang, Liyuan Liu, Gang Luo
Abstract: Demosaicking of low- and ultra-low-light images has wide applications in consumer electronics, security, and industrial machine vision, and denoising is a central challenge in the demosaicking process. This study introduces LLD-GAN (Low-Light Demosaicking Generative Adversarial Network), a comprehensive end-to-end low-light demosaicking framework that greatly reduces computational complexity. Our architecture employs a Wasserstein GAN enhanced by a gradient-penalty mechanism; we redesigned the generator, based on the UNet++ network, as well as its corresponding discriminator, making model learning more efficient. In addition, we propose a new loss grounded in the principles of perceptual loss to obtain images with better visual quality. Our ablation experiments confirmed the benefit of the Wasserstein GAN with gradient penalty and of the perceptual loss function. For RGB images, we tested the proposed model over a wide range of low light levels, from 1/30 to 1/150 of the normal level, on 16-bit images with added noise; for real low-light raw sensor images, the model was evaluated under three lighting conditions: 1/100, 1/250, and 1/300 of normal exposure. Qualitative and quantitative comparison against advanced techniques demonstrates the validity and superiority of LLD-GAN as a unified denoising-demosaicking tool.
(Displays, vol. 85, Article 102856)
Citations: 0
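The paper's input is raw Bayer sensor data. As background, a minimal sketch of how a single-channel RGGB Bayer mosaic (the demosaicking problem's input) is formed by sampling one color per pixel from an RGB image; the function name and the RGGB layout choice are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def bayer_mosaic_rggb(rgb: np.ndarray) -> np.ndarray:
    """Sample an RGB image onto a single-channel RGGB Bayer mosaic."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))
cfa = bayer_mosaic_rggb(img)
```

Demosaicking is the inverse problem: reconstructing the two missing color samples at every pixel, which in low light must be done jointly with denoising.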
Subjective and objective quality evaluation for industrial images
IF 3.7 · CAS Region 2 · Engineering & Technology
Displays Pub Date: 2024-10-18 DOI: 10.1016/j.displa.2024.102858
Chengxu Zhou, Yanlin Jiang, Hongyan Liu, Jingchao Cao, Ke Gu
Abstract: The demand for ever-better image processing techniques continues to grow in industrial scenario monitoring and industrial process inspection. Subjective and objective quality evaluation of industrial images is vital for advancing industrial visual perception and enhancing the quality of industrial image/video processing applications. However, the scarcity of publicly available industrial image databases with reliable subjective scores restricts the development of industrial image quality evaluation (IIQE). To fill this gap, this article first establishes two industrial image databases, the industrial scenario image dataset (ISID) and the industrial process image dataset (IPID), for assessing IIQE metrics. Furthermore, to avoid drowning out industrial image nuances in the wavelet subband summation, we present a novel subband information fidelity standard (SIFS) evaluation method for industrial applications, based on the channel capacity of visual signals in the wavelet domain. Specifically, we first build a visual-signal channel model based on the perception process from the human eyes to the brain. Second, we compute and compare the channel capacity of the reference and distorted images to measure the information fidelity in each wavelet subband. Third, we sum the subband information-fidelity ratios to obtain the overall quality score. Finally, we fairly compare several up-to-date image quality evaluation (IQE) methods and our proposed method on the two new industrial datasets. The ISID and IPID datasets can evaluate most IQE metrics comprehensively and pave the way for further research on IIQE. Our SIFS model shows remarkable performance compared with other up-to-date IQE methods.
(Displays, vol. 85, Article 102858)
Citations: 0
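The abstract's pipeline (decompose into wavelet subbands, compare a capacity-like quantity per subband, aggregate the ratios) can be illustrated with a toy stand-in. The paper's actual channel model is not reproduced here: the hand-rolled one-level Haar transform, the Gaussian-channel-style capacity formula log2(1 + signal variance / noise variance), the noise variance value, and all function names below are illustrative assumptions:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def subband_capacity(band, noise_var=0.1):
    # Toy stand-in for channel capacity: log2(1 + subband variance / noise)
    return np.log2(1.0 + band.var() / noise_var)

def sifs_like_score(ref, dist, noise_var=0.1):
    # Average the distorted/reference capacity ratio over the subbands;
    # identical images score 1.0.
    ratios = [subband_capacity(bd, noise_var) /
              max(subband_capacity(br, noise_var), 1e-12)
              for br, bd in zip(haar_dwt2(ref), haar_dwt2(dist))]
    return float(np.mean(ratios))

rng = np.random.default_rng(0)
ref = rng.random((8, 8))
score_identity = sifs_like_score(ref, ref)
```

The per-subband ratio (rather than a single global sum) is what lets subtle, localized industrial distortions register in the final score.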
Underwater image enhancement with zero-point symmetry prior and reciprocal mapping
IF 3.7 · CAS Region 2 · Engineering & Technology
Displays Pub Date: 2024-10-17 DOI: 10.1016/j.displa.2024.102845
Fei Li, Chang Liu, Xiaomao Li
Abstract: Images captured underwater typically exhibit color distortion, low brightness, and pseudo-haze due to light absorption and scattering. These degradations limit underwater image display and analysis, and still challenge current methods. To overcome these drawbacks, we propose a targeted and systematic method. Specifically, based on a key observation and extensive statistical analysis, we develop a Zero-Point Symmetry Prior (ZPSP): for color-balanced images, the histograms of channels a and b in the Lab color space exhibit a symmetric distribution around the zero point. Guided by the ZPSP, a Color Histogram Symmetry (CHS) method is proposed to balance color differences between channels a and b by ensuring they adhere to the prior. For channel L, a Reciprocal Mapping (RM) method is proposed to remove pseudo-haze and improve brightness by aligning its reflectance and illumination components with the Dark Channel Prior (DCP) and Bright Channel Prior (BCP), respectively. RM employs a divide-and-conquer strategy, distinguishing underwater image degradations in decomposed sub-images and tackling them individually. Notably, the proposed methods are integrated into a systematic enhancement framework while retaining targeted optimization for each type of degradation. Benefiting from this strategy, the various degradations are individually optimized and mutually reinforcing, consistently producing visually pleasing results. Comprehensive experiments demonstrate remarkable performance on various underwater image datasets and applications, along with good generalization ability. The code is available at https://github.com/CN-lifei/ZSRM.
(Displays, vol. 85, Article 102845)
Citations: 0
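The ZPSP is a statement about histogram symmetry that is easy to quantify. A minimal sketch of one way to score how symmetric an a or b channel's histogram is about zero, using total variation between the histogram and its mirror; the scoring function, bin count, and value range are assumptions for illustration, not the paper's formulation (and the Lab conversion itself is omitted, so the input is assumed to already be an a/b channel):

```python
import numpy as np

def zero_symmetry_score(chan, bins=64, vrange=(-60.0, 60.0)):
    """1.0 = histogram perfectly symmetric about zero (the ZPSP-style
    ideal for Lab a/b channels); lower values indicate a color cast."""
    h, _ = np.histogram(chan, bins=bins, range=vrange)
    if h.sum() == 0:
        return 0.0
    p = h / h.sum()                      # normalize to a distribution
    return float(1.0 - 0.5 * np.abs(p - p[::-1]).sum())

rng = np.random.default_rng(1)
v = rng.normal(0.0, 15.0, 5000)
balanced = np.concatenate([v, -v])       # symmetric about zero by design
cast = balanced + 25.0                   # simulated color cast
```

A color cast shifts the a/b mass to one side of zero, so enforcing symmetry (as CHS does) pulls the image back toward color balance.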
Evaluating ASD in children through automatic analysis of paintings
IF 3.7 · CAS Region 2 · Engineering & Technology
Displays Pub Date: 2024-10-16 DOI: 10.1016/j.displa.2024.102850
Ji-Feng Luo, Zhijuan Jin, Xinding Xia, Fangyu Shi, Zhihao Wang, Chi Zhang
Abstract: Autism spectrum disorder (ASD) is a hereditary neurodevelopmental disorder affecting individuals, families, and societies worldwide. Screening for ASD relies on specialized medical resources, and current machine learning-based screening methods depend on expensive professional devices and algorithms. There is therefore a critical need for accessible and easily implementable ASD assessment methods. In this study, we pursue such an ASD screening and rehabilitation assessment solution based on children's paintings. From an ASD painting database, 375 paintings from children with ASD and 160 paintings from typically developing children were selected, and a series of image signal processing algorithms based on typical characteristics of children with ASD were designed to extract features from the images. The effectiveness of the extracted features was evaluated statistically, and the features were then classified using a support vector machine (SVM) and XGBoost (eXtreme Gradient Boosting). In 5-fold cross-validation, the SVM achieved a recall of 94.93%, a precision of 86.40%, an accuracy of 85.98%, and an AUC of 90.90%, while XGBoost achieved a recall of 96.27%, a precision of 93.78%, an accuracy of 92.90%, and an AUC of 98.00%. This efficacy persisted at a high level in additional validation on a set of newly collected paintings. The performance not only surpassed that of the participating human experts; the high recall rate, together with the method's affordability, manageability, and ease of implementation, indicates its potential for wide screening and rehabilitation assessment. All analysis code is public on GitHub: dishangti/ASD-Painting-Pub.
(Displays, vol. 85, Article 102850)
Citations: 0
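The evaluation protocol (5-fold cross-validated recall/precision on a 375-vs-160 class split) can be sketched with scikit-learn. The paper's painting features come from bespoke image processing not shown here, so the 8 synthetic features, their 1-sigma class separation, and the RBF kernel choice are stand-in assumptions; only the class sizes and the 5-fold protocol come from the abstract:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in features for the two groups (375 ASD, 160 typically developing).
X = np.vstack([rng.normal(1.0, 1.0, (375, 8)),    # ASD paintings
               rng.normal(0.0, 1.0, (160, 8))])   # typically developing
y = np.array([1] * 375 + [0] * 160)

# Stratified folds keep the 375:160 class ratio inside every fold.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
recall = cross_val_score(clf, X, y, cv=cv, scoring="recall").mean()
```

For a screening task, recall on the ASD class is the headline number, since a missed case is costlier than a false positive; that is why the abstract leads with recall.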
Using query semantic and feature transfer fusion to enhance cardinality estimating of property graph queries
IF 3.7 · CAS Region 2 · Engineering & Technology
Displays Pub Date: 2024-10-16 DOI: 10.1016/j.displa.2024.102854
Zhenzhen He, Tiquan Gu, Jiong Yu
Abstract: With the increasing complexity and diversity of query tasks, cardinality estimation has become one of the most challenging problems in query optimization. In this study, we propose an efficient and accurate cardinality estimation method for property graph queries, addressing in particular the research gap left by the neglect of contextual semantic features. We first propose formal representations of the property graph query and define its cardinality estimation problem. Then, through query featurization, we transform the query into a vector representation that the estimation model can learn, and enrich this feature vector with the contextual semantic information of the query. We finally propose an estimation model for property graph queries, introducing a feature-information transfer module that dynamically controls the information flow while achieving feature fusion and inference. Experimental results on three datasets show that the model estimates the cardinality of property graph queries accurately and efficiently: the mean Q_error and RMSE are reduced by about 30% and 25%, respectively, relative to state-of-the-art estimation models. The contextual semantic features of queries improve estimation accuracy, reducing the mean Q_error by about 20% and the RMSE by about 5%.
(Displays, vol. 85, Article 102854)
Citations: 0
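Q_error, the headline metric in this abstract, is the standard multiplicative error used in learned cardinality estimation: the larger of estimate/truth and truth/estimate, so 1.0 is perfect and it penalizes under- and over-estimation symmetrically. A minimal sketch (the epsilon floor for zero cardinalities is a common convention, assumed here):

```python
def q_error(estimate: float, truth: float, eps: float = 1.0) -> float:
    """Q_error = max(est/true, true/est); 1.0 means a perfect estimate."""
    est, tru = max(estimate, eps), max(truth, eps)  # guard against zeros
    return max(est / tru, tru / est)

# A 2x over- or under-estimate both give the same Q_error of 2.0.
assert q_error(200.0, 100.0) == q_error(50.0, 100.0) == 2.0
```

Reporting the mean (or high percentiles) of Q_error over a query workload is what the abstract's "reduced by about 30%" refers to.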
Profiles of cybersickness symptoms
IF 3.7 · CAS Region 2 · Engineering & Technology
Displays Pub Date: 2024-10-11 DOI: 10.1016/j.displa.2024.102853
Jonathan W. Kelly, Nicole L. Hayes, Taylor A. Doty, Stephen B. Gilbert, Michael C. Dorneich
Abstract: Cybersickness, the discomfort caused by virtual reality (VR), remains a significant problem that degrades the user experience. Research on individual differences in cybersickness has typically focused on overall sickness intensity, but a detailed understanding should include whether individuals differ in the relative intensity of cybersickness symptoms. This study used latent profile analysis (LPA) to explore whether there exist groups of individuals who experience common patterns of cybersickness symptoms. Participants played a VR game for up to 20 min. LPA indicated three groups with low, medium, and high overall cybersickness, with both similarities and differences across groups in the relative patterns of nausea, disorientation, and oculomotor symptoms. Disorientation was lower than nausea and oculomotor symptoms in all three groups. Nausea and oculomotor symptoms were at similar levels within the high- and low-sickness groups, but the medium-sickness group experienced more nausea than oculomotor symptoms. Member characteristics varied across groups, including gender, virtual reality experience, video game experience, and history of motion sickness. These findings identify distinct individual experiences in symptomology that go beyond overall sickness intensity, which could enable future interventions targeting particular groups of individuals and specific symptoms.
(Displays, vol. 85, Article 102853)
Citations: 0
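Latent profile analysis groups people by patterns across several continuous measures; a Gaussian mixture model is a close computational analog and conveys the idea. The sketch below is not the paper's analysis: the three symptom dimensions follow the abstract, but the score values, group sizes, and the use of scikit-learn's GaussianMixture in place of a dedicated LPA package are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical (nausea, oculomotor, disorientation) scores for three
# latent groups of 60 participants each.
low  = rng.normal([10.0,  8.0,  5.0], 3.0, size=(60, 3))
med  = rng.normal([45.0, 30.0, 20.0], 5.0, size=(60, 3))
high = rng.normal([80.0, 75.0, 55.0], 5.0, size=(60, 3))
scores = np.vstack([low, med, high])

# Fit a 3-profile model and assign each participant to a profile.
gm = GaussianMixture(n_components=3, random_state=0).fit(scores)
profiles = gm.predict(scores)
profile_means = gm.means_   # per-profile symptom pattern, as in LPA output
```

In practice the number of profiles is chosen by comparing fit criteria (e.g. BIC) across candidate component counts, which is how an LPA arrives at "three groups".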
A novel heart rate estimation framework with self-correcting face detection for the Neonatal Intensive Care Unit
IF 3.7 · CAS Region 2 · Engineering & Technology
Displays Pub Date: 2024-10-11 DOI: 10.1016/j.displa.2024.102852
Kangyang Cao, Tao Tan, Zhengxuan Chen, Kaiwen Yang, Yue Sun
Abstract: Remote photoplethysmography (rPPG) is a non-invasive method for monitoring heart rate (HR) and other vital signs by measuring subtle facial color changes caused by blood flow variations beneath the skin, typically captured through video. Current rPPG technology, optimized for ideal conditions, faces significant challenges in real-world clinical settings such as Neonatal Intensive Care Units (NICUs). These challenges arise mainly from the automatic face detection algorithms embedded in HR estimation frameworks, which struggle to detect newborn faces accurately, and from variations in lighting and infant position that together degrade HR estimation accuracy. To address these problems, we propose a novel HR estimation framework that incorporates a Self-Correcting face detection module and an rPPG value reference module. The reference module mitigates the effects of lighting variations, significantly reducing HR estimation error, while the Self-Correcting module improves face detection accuracy by enhancing robustness to occlusions and position changes and automating the process to minimize manual intervention. The proposed framework demonstrates notable improvements in both face detection accuracy and HR estimation, outperforming existing methods for newborns in NICUs.
(Displays, vol. 85, Article 102852)
Citations: 0
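The core rPPG signal path is the part of this pipeline that is standard enough to sketch: average a skin region's green channel per frame, detrend, and take the dominant frequency within a plausible cardiac band. This is a generic baseline, not the paper's framework; the band limits (0.7-4 Hz, i.e. 42-240 bpm, covering neonatal rates) and the simple FFT peak-picking are common assumptions:

```python
import numpy as np

def hr_from_trace(green: np.ndarray, fs: float) -> float:
    """Estimate HR (bpm) as the dominant frequency of the detrended
    green-channel trace within the cardiac band 0.7-4 Hz."""
    x = green - green.mean()                     # remove the DC baseline
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spec[band])]

# Synthetic 10 s trace at 30 fps with a 2 Hz pulse component (120 bpm).
fs = 30.0
t = np.arange(300) / fs
trace = 0.5 + 0.01 * np.sin(2 * np.pi * 2.0 * t)
```

The framework's contribution sits upstream of this step: if the face region is mis-detected or lighting drifts, the trace fed into this analysis is corrupted, which is exactly what the self-correcting detection and reference modules target.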
Salient Object Ranking: Saliency model on relativity learning and evaluation metric on triple accuracy
IF 3.7 · CAS Region 2 · Engineering & Technology
Displays Pub Date: 2024-10-10 DOI: 10.1016/j.displa.2024.102855
Yingchun Guo, Shu Chen, Gang Yan, Shi Di, Xueqi Lv
Abstract: Salient object ranking (SOR) aims to evaluate the saliency level of each object in an image, which is crucial for the advancement of downstream tasks. The human visual system distinguishes the saliency levels of different targets in a scene by comprehensively combining multiple saliency cues. To mimic this comprehensive evaluation behavior, the SOR task must consider both the objects' intrinsic information and their relative information within the entire image. However, existing methods still struggle to obtain relative information effectively, tending to focus too much on specific objects while ignoring their relativity. To address these issues, this paper proposes a Salient Object Ranking method based on Relativity Learning (RLSOR), which integrates multiple saliency cues to learn the relative information among objects. RLSOR consists of three main modules: the Top-down Guided Salience Regulation module (TGSR), the Global-Local Cooperative Perception module (GLCP), and the Semantic-guided Edge Enhancement module (SEE). The paper also proposes a Triple-Accuracy Evaluation (TAE) metric for the SOR task, which captures segmentation accuracy, relative ranking accuracy, and absolute ranking accuracy in a single metric. Experimental results show that RLSOR significantly enhances SOR performance, and that the proposed evaluation metric better matches human subjective perception.
(Displays, vol. 85, Article 102855)
Citations: 0
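Of the three accuracies the TAE metric combines, relative ranking accuracy is the most SOR-specific and easy to illustrate: the fraction of object pairs whose predicted saliency order agrees with the ground-truth order. The sketch below is a generic pairwise formulation, not the paper's exact definition (its handling of ties and its weighting inside TAE are not specified in the abstract):

```python
from itertools import combinations

def relative_rank_accuracy(pred, gt):
    """Fraction of object pairs whose predicted saliency order matches
    the ground-truth saliency order."""
    pairs = list(combinations(range(len(gt)), 2))
    agree = sum(1 for i, j in pairs
                if (pred[i] - pred[j]) * (gt[i] - gt[j]) > 0)
    return agree / len(pairs)

# Swapping the two least salient of three objects breaks 1 of 3 pairs.
acc = relative_rank_accuracy([2, 1, 3], [1, 2, 3])
```

A pairwise measure like this is insensitive to the absolute rank values, which is why TAE pairs it with a separate absolute ranking accuracy and a segmentation accuracy.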
DZ-SLAM: A SAM-based SLAM algorithm oriented to dynamic environments
IF 3.7 · CAS Region 2 · Engineering & Technology
Displays Pub Date: 2024-10-10 DOI: 10.1016/j.displa.2024.102846
Zhe Chen, Qiuyu Zang, Kehua Zhang
Abstract: Precise localization is a fundamental prerequisite for the effective operation of Simultaneous Localization and Mapping (SLAM) systems. Traditional visual SLAM assumes a static environment and therefore performs poorly in dynamic ones, and while numerous visual SLAM methods have been proposed for dynamic environments, they typically rely on prior knowledge. This paper introduces DZ-SLAM, a dynamic SLAM algorithm based on ORB-SLAM3 that requires no prior knowledge to handle unknown dynamic elements in the scene. The work first introduces FastSAM to enable comprehensive image segmentation, then proposes an adaptive-threshold dense optical flow approach to identify dynamic elements in the environment, and finally combines FastSAM with the optical flow method inside the SLAM framework to eliminate dynamic objects and improve positioning accuracy. Experiments show that the proposed algorithm reduces absolute trajectory error by up to 96% compared with the original ORB-SLAM3, and by up to 46% compared with the most advanced current algorithms. In summary, the proposed prior-free dynamic object segmentation significantly reduces the positioning error of SLAM in various dynamic environments.
(Displays, vol. 85, Article 102846)
Citations: 0
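The adaptive-threshold idea on dense optical flow can be sketched in isolation: flag pixels whose flow magnitude stands out from the frame's own statistics, then (in the full system) intersect that mask with FastSAM segments to drop whole dynamic objects. The mean + k*std rule below is one common adaptive choice, assumed here since the abstract does not specify the rule:

```python
import numpy as np

def dynamic_mask(flow: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Flag pixels whose optical-flow magnitude exceeds an adaptive
    threshold derived from the frame itself (mean + k * std)."""
    mag = np.linalg.norm(flow, axis=-1)      # per-pixel (u, v) magnitude
    thresh = mag.mean() + k * mag.std()
    return mag > thresh

# Mostly static 8x8 flow field with one small fast-moving region.
flow = np.zeros((8, 8, 2))
flow[2:4, 2:4] = 5.0
mask = dynamic_mask(flow)
```

Features falling inside the mask would be excluded from pose estimation, which is how removing dynamic objects translates into lower trajectory error.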
Pen-based vibrotactile feedback rendering of surface textures under unconstrained acquisition conditions
IF 3.7 · CAS Region 2 · Engineering & Technology
Displays Pub Date: 2024-10-09 DOI: 10.1016/j.displa.2024.102844
Miao Zhang, Dongyan Nie, Weizhi Nai, Xiaoying Sun
Abstract: Haptic rendering of surface textures enhances user immersion in human-computer interaction, but strict input conditions and measurement methods limit the diversity of rendering algorithms. We propose a neural network-based approach for vibrotactile haptic rendering of surface textures under unconstrained acquisition conditions. The method first encodes the interactions based on human perception characteristics, then uses an autoregressive model to learn a non-linear mapping between the encoded data and haptic features. The interactions consist of normal forces and sliding velocities, while the haptic features are time-frequency amplitude spectrograms obtained by the short-time Fourier transform of the accelerations corresponding to the interactions. Finally, a generative adversarial network converts the generated time-frequency amplitude spectrograms back into accelerations. The effectiveness of the approach is confirmed through numerical evaluation and subjective experiments. The approach can render a wide range of vibrotactile data for surface textures under unconstrained acquisition conditions, achieving a high level of haptic realism.
(Displays, vol. 85, Article 102844)
Citations: 0
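The haptic feature representation named in the abstract, a time-frequency amplitude spectrogram of the acceleration signal, can be computed directly with SciPy's short-time Fourier transform. The sampling rate, window length, and synthetic decaying-sine "texture" below are illustrative assumptions, standing in for real pen-acceleration recordings:

```python
import numpy as np
from scipy.signal import stft

fs = 2000                                  # assumed accel sampling rate (Hz)
t = np.arange(fs) / fs                     # 1 s of synthetic acceleration
accel = np.sin(2 * np.pi * 150 * t) * np.exp(-3 * t)  # decaying 150 Hz ridge

# Short-time Fourier transform -> complex spectrogram; the amplitude
# |Zxx| is the time-frequency feature the rendering network predicts.
f, frames, Zxx = stft(accel, fs=fs, nperseg=256)
amp = np.abs(Zxx)
peak_hz = f[np.argmax(amp.max(axis=1))]    # dominant vibration frequency
```

Predicting the amplitude spectrogram rather than the raw waveform sidesteps phase, which is perceptually far less important for vibrotactile texture than the amplitude envelope; the paper's GAN then resynthesizes an acceleration signal from the predicted amplitudes.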