Displays: Latest Articles

MMFS-CF: A personalized data-driven credit card fraud detection model based on multi-modal multi-objective feature subset selection
IF 3.4 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-09-08 | DOI: 10.1016/j.displa.2025.103206
Nana Zhang, Kun Zhu, Chudong Wu, Dandan Zhu
{"title":"MMFS-CF: A personalized data-driven credit card fraud detection model based on multi-modal multi-objective feature subset selection","authors":"Nana Zhang ,&nbsp;Kun Zhu ,&nbsp;Chudong Wu ,&nbsp;Dandan Zhu","doi":"10.1016/j.displa.2025.103206","DOIUrl":"10.1016/j.displa.2025.103206","url":null,"abstract":"<div><div>Credit card fraud detection (CCFD) is a critical research direction in the field of financial risk prevention and control, aiming to protect the interests of consumers and financial institutions by identifying suspicious transactions. Nevertheless, due to the high privacy and sensitivity of some transaction features, as well as the difficulty in extracting some transaction features or the high cost of obtaining them, the requirement for personalized transaction features in the CCFD scenario cannot be met. Additionally, the original transaction data often has irrelevant and redundant features, which are not conducive to the improvement of CCFD performance. Therefore, we present a personalized data-driven CCFD model named MMFS-CF based on multi-modal multi-objective feature subset selection, which focuses on two key optimization objectives: minimizing the count of transaction features and maximizing CCFD performance (i.e., AUC). Specifically, we develop a dynamically adaptive guidance vector mechanism through the construction of a multi-subpopulation collaborative evolution framework. This mechanism adaptively directs subpopulations to converge toward unexplored regions within the decision space, guided by real-time population density information. Furthermore, it integrates two key components: a genetic operation enhancement strategy by embedding guidance vectors to improve population diversity and accelerate convergence, and a guidance vector-driven environmental selection update mechanism aimed at refining solution quality. A key innovation lies in its personalized feature selection paradigm, enabling decision-makers to flexibly select from alternative feature subsets tailored to real-world constraints (e.g., privacy concerns or computational limitations), all of which do not affect CCFD performance. We showcase the detection capabilities of MMFS-CF using a large-scale private commerce dataset as well as four publicly available datasets. The experimental findings validate that MMFS-CF can deliver superior CCFD performance and highlight its multi-modal efficacy.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"91 ","pages":"Article 103206"},"PeriodicalIF":3.4,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145048983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
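The core of MMFS-CF is a bi-objective search over feature subsets: minimize the feature count while maximizing AUC. The sketch below illustrates that trade-off with a toy Pareto-front computation over random subsets on synthetic data; the paper's multi-subpopulation evolutionary framework and guidance-vector mechanisms are not reproduced, and the dataset and classifier choices here are illustrative assumptions.

```python
# Toy bi-objective feature selection: minimize subset size, maximize AUC,
# then keep the Pareto front. A sketch only, not the MMFS-CF algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           weights=[0.95], random_state=0)  # imbalanced, fraud-like

def evaluate(mask):
    """Return (feature count, cross-validated AUC) for a boolean subset mask."""
    if not mask.any():
        return 0, 0.5
    auc = cross_val_score(LogisticRegression(max_iter=1000),
                          X[:, mask], y, cv=3, scoring="roc_auc").mean()
    return int(mask.sum()), auc

candidates = [rng.random(20) < rng.uniform(0.1, 0.6) for _ in range(60)]
scored = [evaluate(m) for m in candidates]

# Pareto front: no other subset is at least as small AND at least as accurate.
front = [(k, a) for (k, a) in scored
         if not any(k2 <= k and a2 >= a and (k2, a2) != (k, a)
                    for (k2, a2) in scored)]
print(sorted(front))
```

The printed front is exactly the menu of "alternative feature subsets" the abstract describes: a decision-maker picks the point whose feature count fits their privacy or cost constraint.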
Effects of simulated bobbing and step sounds on cybersickness and presence in virtual reality
IF 3.4 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-09-08 | DOI: 10.1016/j.displa.2025.103202
You-Sheng Zhang, Li-Chen Ou
{"title":"Effects of simulated bobbing and step sounds on cybersickness and presence in virtual reality","authors":"You-Sheng Zhang,&nbsp;Li-Chen Ou","doi":"10.1016/j.displa.2025.103202","DOIUrl":"10.1016/j.displa.2025.103202","url":null,"abstract":"<div><div>Based on a previous study that identified the importance of vertical oscillation and lateral rotation in simulating natural bobbing movements in virtual reality (VR), this study employed two psychophysical experiments to evaluate the impacts of the bobbing mechanism in more dynamic virtual environments. Experiment 1 investigated forward-only movement in VR, assessing whether simulated bobbing and step sounds could mitigate cybersickness and enhance presence. The experimental results revealed that while simulated bobbing alone did not significantly reduce cybersickness, the addition of bobbing and step sounds could improve the sense of presence. Experiment 2 extended the scope to unconstrained movement in all directions (on the ground only), examining how increased disorientation and postural instability affect cybersickness and presence. Results showed that unconstrained movement led to higher levels of cybersickness, particularly at higher velocities of movement in VR, and that simulated bobbing seemed to mitigate some level of cybersickness during running. Sex differences were observed, with female subjects showing greater susceptibility to cybersickness but benefiting more from the simulated bobbing during running. Additionally, variations among subjects highlighted the importance of personalised adjustments to enhance VR experiences.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"91 ","pages":"Article 103202"},"PeriodicalIF":3.4,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145046150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
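For a concrete picture of the stimulus, the sketch below generates the kind of bobbing signal the study describes: a vertical oscillation locked to the step cycle plus a small lateral roll. The frequencies and amplitudes are illustrative placeholders, not the values used in the experiments.

```python
# Illustrative camera-bobbing signal: vertical bounce per footfall,
# lateral roll alternating per stride. Parameters are assumptions.
import numpy as np

def bobbing_trajectory(duration_s=5.0, fps=90, step_hz=2.0,
                       vert_amp_m=0.03, roll_amp_deg=1.5):
    """Per-frame vertical offset (m) and lateral roll (deg) for a walking camera."""
    t = np.arange(0, duration_s, 1.0 / fps)
    # |sin| yields one vertical bounce per footfall at step_hz steps/s.
    vertical = vert_amp_m * np.abs(np.sin(np.pi * step_hz * t))
    # The roll completes one left-right sway per stride (two steps).
    roll = roll_amp_deg * np.sin(np.pi * step_hz * t)
    return t, vertical, roll

t, z, roll = bobbing_trajectory()
print(f"{len(t)} frames, peak bob {z.max():.3f} m, peak roll {roll.max():.2f} deg")
```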
PointPPE: A precise recognition method for complex machining features based on point cloud analysis network with polynomial positional encoding
IF 3.4 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-09-07 | DOI: 10.1016/j.displa.2025.103214
Guiyu Jiang, Bin Xue, Zhongbin Xu, Xiaodong Ruan, Pengcheng Nie, Xiang Zhou, Zhuoxiang Zhao
{"title":"PointPPE: A precise recognition method for complex machining features based on point cloud analysis network with polynomial positional encoding","authors":"Guiyu Jiang ,&nbsp;Bin Xue ,&nbsp;Zhongbin Xu ,&nbsp;Xiaodong Ruan ,&nbsp;Pengcheng Nie ,&nbsp;Xiang Zhou ,&nbsp;Zhuoxiang Zhao","doi":"10.1016/j.displa.2025.103214","DOIUrl":"10.1016/j.displa.2025.103214","url":null,"abstract":"<div><div>Machining feature recognition is a pivotal step of computer-aided manufacturing, providing the analytical foundation for subsequent machining processes. However, the insufficient utilization of point cloud positional information and redundant information in hierarchical network learning hinder the precise recognition capability of complex features. To address these problems, this work introduces an improved machining feature recognition method, termed PointPPE. Given the precision parts’ feature complexity and similarity, the polynomial position encoding module is designed to learn geometric structures efficiently to encode point cloud position information. A channel attention context fusion module is developed to enhance local feature analysis through channel feature weights assignment and contextual information integration. The results demonstrate that PointPPE exhibits precise recognition capability on constructed precision mold part point cloud datasets, with an instance mean Intersection over Union (IoU) of 90.57%, and shows great generalization on the ShapeNetPart dataset, with class and instance mean IoUs reaching 83.9% and 86.0%, respectively, manifesting superior prospects for complex feature recognition in advanced manufacturing.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"91 ","pages":"Article 103214"},"PeriodicalIF":3.4,"publicationDate":"2025-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145048982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
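The abstract's polynomial position encoding can be pictured as expanding raw xyz coordinates into monomial terms before a small MLP. The sketch below is an assumption-based illustration of that idea in PyTorch; the layer structure, degree, and dimensions are mine, not the paper's.

```python
# Illustrative polynomial positional encoding for point clouds.
import itertools
import torch
import torch.nn as nn

class PolynomialPositionalEncoding(nn.Module):
    def __init__(self, degree=2, out_dim=64):
        super().__init__()
        # All monomials x^i * y^j * z^k with 1 <= i+j+k <= degree.
        self.exponents = [e for e in itertools.product(range(degree + 1), repeat=3)
                          if 1 <= sum(e) <= degree]
        self.mlp = nn.Sequential(
            nn.Linear(len(self.exponents), out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim))

    def forward(self, xyz):                          # xyz: (B, N, 3)
        feats = [xyz.pow(torch.tensor(e, dtype=xyz.dtype)).prod(dim=-1)
                 for e in self.exponents]            # each term: (B, N)
        return self.mlp(torch.stack(feats, dim=-1))  # (B, N, out_dim)

pe = PolynomialPositionalEncoding()
print(pe(torch.rand(2, 1024, 3)).shape)              # torch.Size([2, 1024, 64])
```

The intuition is that monomial terms let the encoder represent curved local geometry directly, rather than forcing the MLP to learn those nonlinearities from raw coordinates.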
A Deep Cross-modal Prompt Learning Network for Artificial Intelligence Generated Image Quality Assessment
IF 3.4 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-09-06 | DOI: 10.1016/j.displa.2025.103208
Yang Lu, Shuangyao Han, Zilu Zhou, Zifan Yang, Gaowei Zhang, Shaohui Jin, Xiaoheng Jiang, Mingliang Xu
{"title":"A Deep Cross-modal Prompt Learning Network for Artificial Intelligence Generated Image Quality Assessment","authors":"Yang Lu ,&nbsp;Shuangyao Han ,&nbsp;Zilu Zhou ,&nbsp;Zifan Yang ,&nbsp;Gaowei Zhang ,&nbsp;Shaohui Jin ,&nbsp;Xiaoheng Jiang ,&nbsp;Mingliang Xu","doi":"10.1016/j.displa.2025.103208","DOIUrl":"10.1016/j.displa.2025.103208","url":null,"abstract":"<div><div>In recent years, multi-modal vision–language pre-trained models have been extensively adopted as foundational components for developing advanced Artificial Intelligence (AI) systems in computer vision applications. Previous approaches have advanced Artificial Intelligence Generated Image Quality Assessment (AGIQA) research via text-based or visual prompt learning, yet most methods remain constrained to a single modality (language or vision), overlooking the interplay between text and image. To address this issue, we propose a Deep Cross-Modal Prompt Learning Network (DCMPLN) for AGIQA. This model introduces a Multimodal Prompt Attention (MPA) module, employing multi-head attention to enhance the integration of textual and visual prompts. Furthermore, an Image Adapter module is incorporated into the visual pathway to extract novel features and fine-tune pre-trained ones using residual-style fusion. Experimental results on multiple generated image datasets demonstrate that the proposed method outperforms existing state-of-the-art image quality assessment models.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"91 ","pages":"Article 103208"},"PeriodicalIF":3.4,"publicationDate":"2025-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145019266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
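A rough picture of the MPA idea: learnable text and visual prompt tokens attend to each other with multi-head attention instead of being tuned in isolation. The PyTorch sketch below is a minimal, assumption-laden rendering; the shared attention layer, token counts, and dimensions are illustrative choices rather than the paper's design.

```python
# Illustrative cross-modal prompt attention: text and visual prompt
# tokens query each other via multi-head attention with residual fusion.
import torch
import torch.nn as nn

class CrossModalPromptAttention(nn.Module):
    def __init__(self, dim=512, n_prompts=8, heads=8):
        super().__init__()
        self.text_prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        self.vis_prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        # One shared attention layer for brevity; two would also be plausible.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, batch_size):
        t = self.text_prompts.expand(batch_size, -1, -1)
        v = self.vis_prompts.expand(batch_size, -1, -1)
        t2, _ = self.attn(t, v, v)   # text prompts query visual prompts
        v2, _ = self.attn(v, t, t)   # visual prompts query text prompts
        return t + t2, v + v2        # residual fusion of both prompt sets

mpa = CrossModalPromptAttention()
tp, vp = mpa(batch_size=4)
print(tp.shape, vp.shape)            # torch.Size([4, 8, 512]) twice
```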
No-reference image quality assessment based on multi-scale dynamic modulation and degradation information
IF 3.4 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-09-06 | DOI: 10.1016/j.displa.2025.103207
Yongcan Zhao, Yinghao Zhang, Tianfeng Xia, Tianhuan Huang, Xianye Ben, Lei Chen
{"title":"No-reference image quality assessment based on multi-scale dynamic modulation and degradation information","authors":"Yongcan Zhao ,&nbsp;Yinghao Zhang ,&nbsp;Tianfeng Xia ,&nbsp;Tianhuan Huang ,&nbsp;Xianye Ben ,&nbsp;Lei Chen","doi":"10.1016/j.displa.2025.103207","DOIUrl":"10.1016/j.displa.2025.103207","url":null,"abstract":"<div><div>Image quality assessment is a fundamental problem in image processing, but the complex and varied distortions present in real-world images often affect the model for accurate quality scoring. To address these issues, this paper presents a novel no-reference image quality assessment method based on multi-scale dynamic modulation and gated fusion (MDM-GFIQA), which jointly captures and fuses degradation and distortion features to predict image quality scores more accurately. Specifically, shallow features are first extracted using a pre-trained feature extractor. To explore more deeply perceptual distortion features, we introduce the multi-scale adaptive feature modulation (MsAFM) block into the perceptual network. The MsAFM processes spatial information at different scales in parallel through multiple channels and combines with a multi-branch convolutional block (MBCB), which enables the network sensitive to local features and global information. The comparative learning auxiliary branch (CLAB) is constructed by supervised contrast learning to acquire rich degraded features for guiding the distorted features extracted by the perceptual network. The outputs of these two streams are then merged by our proposed dynamic fusion enhancement module (DFEM), which focuses on key distortion information before passing the fused features to a regression network that predicts the final quality score. Extensive experiments on seven publicly available databases demonstrate the superior performance of the proposed model over several state-of-the-art methods, i.e., achieving the SRCC values of 0.929 (vs. 0.898 in TID2013) and 0.887 (vs. 0.875 in LIVEC).</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"91 ","pages":"Article 103207"},"PeriodicalIF":3.4,"publicationDate":"2025-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145046152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
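The fusion step (the role DFEM plays here) can be sketched as a learned sigmoid gate that mixes the distortion-feature stream with the degradation-feature stream channel by channel. The code below shows that generic gating mechanism under assumed feature shapes; it is a sketch of the pattern, not the paper's DFEM.

```python
# Illustrative gated fusion of two feature streams.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        # The gate sees both streams and emits per-channel mixing weights in (0, 1).
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, distortion_feat, degradation_feat):
        g = self.gate(torch.cat([distortion_feat, degradation_feat], dim=-1))
        return g * distortion_feat + (1 - g) * degradation_feat

fuse = GatedFusion()
out = fuse(torch.rand(4, 256), torch.rand(4, 256))
print(out.shape)   # torch.Size([4, 256]); feeds a quality-regression head
```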
OrthoMatch-Net: Unsupervised registration of orthodontic dental point clouds via hierarchical attention feature modeling and bidirectional matching mechanism
IF 3.4 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-09-04 | DOI: 10.1016/j.displa.2025.103203
Shanshan Huang, Wenkang Chen, Ni Liao, Xuejun Zhang, Ganxin Ouyang
{"title":"OrthoMatch-Net: Unsupervised registration of orthodontic dental point clouds via hierarchical attention feature modeling and bidirectional matching mechanism","authors":"Shanshan Huang ,&nbsp;Wenkang Chen ,&nbsp;Ni Liao ,&nbsp;Xuejun Zhang ,&nbsp;Ganxin Ouyang","doi":"10.1016/j.displa.2025.103203","DOIUrl":"10.1016/j.displa.2025.103203","url":null,"abstract":"<div><div>Accurate 3D dental point cloud registration is a crucial task for monitoring therapeutic progress and evaluating treatment outcomes. However, existing methods struggle to align dental structures owing to their intricate, highly similar shapes, as well as noise and pose variations in clinical environments, and are hindered by inadequate feature extraction and insufficient modeling of feature interactions. To tackle these challenges, we propose OrthoMatch-Net, an innovative unsupervised framework for dental point cloud registration, whose core contributions lie in two novel designs: (1) hierarchical attention feature modeling and (2) bidirectional matching mechanism, aimed at achieving robust alignment of pre- and post-treatment dental point clouds. The proposed hierarchical attention feature modeling employs transformation-invariant guided cross-attention to enhance local feature aggregation. It further captures global structural relationships through the window transformer. Moreover, a feedback interaction mechanism is introduced to enable feature fusion across hierarchical levels, thereby improving discriminative representation robustness for dental registration. Simultaneously, the bidirectional matching mechanism reinforces geometric consistency by learning key point correspondences in both directions (source-to-target and target-to-source). It leverages local structural consistency to weight and filter the matched pairs, effectively enhancing the symmetry and stability of the registration process. Extensive experiments on clinical dental datasets demonstrate that OrthoMatch-Net outperforms state-of-the-art methods with sub-millimeter accuracy across multiple metrics. It also exhibits strong robustness under noise perturbations, offering a practical and reliable solution for improving orthodontic treatment precision and supporting clinical decision-making. To facilitate further study, our source code and the pretrained models will be released at <span><span>https://github.com/shanshanhuang2023/OrthoMatch-Net</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"91 ","pages":"Article 103203"},"PeriodicalIF":3.4,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145019265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
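The bidirectional matching mechanism has a classical skeleton: keep only correspondences on which the source-to-target and target-to-source nearest-neighbor searches agree, then estimate a rigid transform. The sketch below demonstrates that skeleton on raw coordinates with the SVD-based Kabsch solver; the paper matches learned features and adds consistency weighting, both omitted here.

```python
# Mutual nearest-neighbor matching + Kabsch rigid alignment (sketch).
import numpy as np
from scipy.spatial import cKDTree

def mutual_matches(src, tgt):
    """Indices (i, j) where src[i] and tgt[j] are each other's nearest neighbor."""
    s2t = cKDTree(tgt).query(src)[1]          # nearest target for each source point
    t2s = cKDTree(src).query(tgt)[1]          # nearest source for each target point
    keep = t2s[s2t] == np.arange(len(src))    # bidirectional agreement
    return np.arange(len(src))[keep], s2t[keep]

def kabsch(src, tgt):
    """Least-squares rigid transform (R, t) with tgt ~= src @ R.T + t."""
    cs, ct = src.mean(0), tgt.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (tgt - ct))
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, ct - R @ cs

rng = np.random.default_rng(0)
src = rng.random((200, 3))
true_t = np.array([0.01, -0.02, 0.005])       # small shift so NN pairs are true pairs
tgt = src + true_t
i, j = mutual_matches(src, tgt)
R, t = kabsch(src[i], tgt[j])
print(len(i), "mutual matches, estimated t =", t.round(3))
```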
Enhancing trichromatic color experience for color-deficient observers through gamut reconstruction and mapping
IF 3.4 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-09-04 | DOI: 10.1016/j.displa.2025.103209
Xiaojie Zhao, Shuxin Zhao, Zhenxing Li, Qi Dai
{"title":"Enhancing trichromatic color experience for color-deficient observers through gamut reconstruction and mapping","authors":"Xiaojie Zhao ,&nbsp;Shuxin Zhao ,&nbsp;Zhenxing Li ,&nbsp;Qi Dai","doi":"10.1016/j.displa.2025.103209","DOIUrl":"10.1016/j.displa.2025.103209","url":null,"abstract":"<div><div>Color vision deficiency (CVD) affects over 200 million people globally, impairing their ability to perceive and distinguish colors, which hampers performance in color-related tasks. Therefore, developing technologies that enable color-deficient observers (CDOs) to experience colors similarly to color-normal observers (CNOs) is of great importance. In this work, we propose a novel technology that can provide CDOs with a normal trichromatic color experience through a three-stage process. First, a physiologically-based CVD simulation model is employed to estimate the perceivable color gamut for CDOs, considering the display backlight spectrum, and the type and degree of deficiency. Second, an optimized backlight spectrum is designed to reconstruct the color gamut of displays for CDOs, ensuring its coverage to encompass the standard color gamut such as sRGB. Finally, images are mapped from the gamut of CNOs to the reconstructed gamut of CDOs, enabling them to perceive colors in a manner that closely approximates normal trichromatic vision. Simulation results demonstrate that the proposed technology enables CDOs to perceive a wider range of colors, potentially surpassing the sRGB gamut. Quantitative image quality evaluations show significant improvements in contrast, naturalness, and chromaticity of the enhanced images, particularly for CDOs with mild to moderate deficiencies. In addition, a trade-off strategy is introduced to balance color gamut enhancement and spectral luminous efficacy, providing guidance for designing optimized backlight spectra. This technology has the potential for integration into display systems and wearable devices, offering real-time trichromatic color experience for CDOs.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"91 ","pages":"Article 103209"},"PeriodicalIF":3.4,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145003998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
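The first stage of the pipeline, simulating a color-deficient observer's percept, is commonly modeled as a linear projection of linearized RGB. The sketch below shows that generic pattern; the sRGB transfer functions are standard, but the projection matrix is a placeholder with illustrative values, not the physiologically-based model the paper uses.

```python
# Generic linear CVD simulation sketch: sRGB -> linear RGB -> deficiency
# projection -> back to sRGB. SIM_PROTAN is illustrative only.
import numpy as np

SIM_PROTAN = np.array([[0.11, 0.89, 0.00],   # R and G collapse onto one axis
                       [0.11, 0.89, 0.00],   # (placeholder protan-like values)
                       [0.00, 0.00, 1.00]])  # blue channel preserved

def srgb_to_linear(c):
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    return np.where(c <= 0.0031308, 12.92 * c,
                    1.055 * np.clip(c, 0, None) ** (1 / 2.4) - 0.055)

def simulate_cvd(rgb):
    """Map an sRGB color (0..1) to its simulated dichromat appearance."""
    return np.clip(linear_to_srgb(srgb_to_linear(rgb) @ SIM_PROTAN.T), 0, 1)

print(simulate_cvd([1.0, 0.0, 0.0]))   # pure red collapses toward a dark olive
```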
Scanpath prediction in panoramic videos through multimodal fusion
IF 3.4 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-09-03 | DOI: 10.1016/j.displa.2025.103199
Yucheng Zhu, Yu Wang, Weimin Zhang, Jialiang Chen, Yunhao Li
{"title":"Scanpath prediction in panoramic videos through multimodal fusion","authors":"Yucheng Zhu ,&nbsp;Yu Wang ,&nbsp;Weimin Zhang ,&nbsp;Jialiang Chen ,&nbsp;Yunhao Li","doi":"10.1016/j.displa.2025.103199","DOIUrl":"10.1016/j.displa.2025.103199","url":null,"abstract":"<div><div>Predicting scanpaths for panoramic visual stimuli presents a significant challenge due to the extensive field of view, the high resolution of panoramic content, and the complexity of human cognitive behavior. Accurate scanpath prediction holds substantial promise for applications such as quality adaptation strategies in the capture, processing, storage, and streaming of omnidirectional media. Despite its importance, limited studies have explored scanpath prediction in panoramic video stimuli that integrate both visual and auditory modalities. To address this gap, we perform the scanpath prediction in panoramic videos through multi-modality modeling using long short-term memory (LSTM) and Transformer based deep-learning networks. With the rapid advancement of DNNs, LSTM and Transformer based architectures have become pivotal in sequence-to-sequence tasks, significantly enhancing scanpath prediction capabilities. We propose two multi-modal prediction schemes. The first model, LSSCAN, employs a LSTM-based model to generate incrementally refined prediction outputs. The second model, TRSCAN, employs a transformer-based architecture, integrating contextual information through self-attention and cross-attention mechanisms to enhance predictive accuracy. Experimental results demonstrate that LSSCAN excels at capturing and modeling inertial patterns in scanpath prediction, while TRSCAN achieves superior performance in leveraging visual contextual information and making long-term predictions.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"91 ","pages":"Article 103199"},"PeriodicalIF":3.4,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145019264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
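A stripped-down version of the LSTM route (in the spirit of LSSCAN) is a sequence model that maps a history of gaze points to the next fixation. The sketch below omits the visual and audio feature inputs of the actual model; the input encoding and layer sizes are assumptions.

```python
# Minimal next-fixation LSTM: gaze history in, next gaze point out.
import torch
import torch.nn as nn

class ScanpathLSTM(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Input is a normalized (longitude, latitude) pair per time step.
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, gaze_hist):            # gaze_hist: (B, T, 2)
        out, _ = self.lstm(gaze_hist)
        return self.head(out[:, -1])         # predicted next (lon, lat): (B, 2)

model = ScanpathLSTM()
hist = torch.rand(4, 20, 2)                  # 4 scanpaths, 20 past fixations each
print(model(hist).shape)                     # torch.Size([4, 2])
```

Feeding each prediction back in as the newest history element yields the incrementally refined multi-step rollout the abstract alludes to.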
PD-RC: Perception-driven rate control for panoramic video coding based on energy distribution optimization
IF 3.4 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-09-02 | DOI: 10.1016/j.displa.2025.103198
Linyun Liu, Tiansong Li, Huijuan Zhao, Shuangjiang He, Jiabao Zhu, Jinhao Kuang, Li Yu
{"title":"PD-RC: Perception-driven rate control for panoramic video coding based on energy distribution optimization","authors":"Linyun Liu ,&nbsp;Tiansong Li ,&nbsp;Huijuan Zhao ,&nbsp;Shuangjiang He ,&nbsp;Jiabao Zhu ,&nbsp;Jinhao Kuang ,&nbsp;Li Yu","doi":"10.1016/j.displa.2025.103198","DOIUrl":"10.1016/j.displa.2025.103198","url":null,"abstract":"<div><div>In immersive panoramic video (PV) encoding scenarios, PV exhibits larger flat regions compared to traditional videos. The existing intra-rate control methods in Versatile Video Coding (VVC) allocate the bitrate based on the construction of weights according to the energy distribution characteristics of different encoding blocks. However, in reality, the human visual system (HVS) is not sensitive to blocks with many flat regions in perception, which leads to excessive allocation of bitrate in insensitive regions. On the contrary, insufficient bitrate allocation in sensitive regions leads to the inability to achieve better reconstruction quality. To address this challenge, we propose a Perception-Driven Rate Control (PD-RC) strategy for panoramic video encoding based on energy distribution optimization, which makes the intra-rate control closer to the perception habits of HVS. Firstly, we propose a low-complexity filtering method guided by rate–distortion performance to optimize the energy distribution of I-frame features. Subsequently, leveraging the optimized perception features of the energy distribution, a perception-driven intra-mode coding-tree-unit-level rate control strategy is proposed to improve the coding performance for PV. Extensive evaluations show the performance of PD-RC over the state-of-the-art rate control methods of VVC. Specifically, in all-intra encoding mode, the average bitrate savings of PD-RC is −5.002%, while the average gain in weighted-spherically quality is 0.239 dB, with a rate exceeding the upper limit as low as 2%, and a reduction in encoding complexity gain of −0.163%. PD-RC effectively improves the rate control performance of PV intra-frame coding while saving computational overhead. It is significant for optimizing data transmission efficiency, enhancing video quality, and reducing storage costs. The source code will be available at <span><span>https://github.com/liulinyun324/PD-RC</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"91 ","pages":"Article 103198"},"PeriodicalIF":3.4,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145003997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
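At CTU level, perception-driven rate control ultimately distributes a frame's bit budget in proportion to per-block perceptual weights. The sketch below shows that basic allocation step; in PD-RC the weights come from the optimized energy-distribution features, whereas the ones here are made-up placeholders.

```python
# Weight-proportional CTU-level bit allocation (sketch of the mechanism).
import numpy as np

def allocate_ctu_bits(frame_budget_bits, ctu_weights):
    """Split a frame's bit budget across CTUs in proportion to perceptual weights."""
    w = np.asarray(ctu_weights, dtype=float)
    return frame_budget_bits * w / w.sum()

weights = [0.2, 1.0, 3.5, 0.4]   # flat, perceptually insensitive CTUs get low weight
print(allocate_ctu_bits(100_000, weights).round())
```

The design point the abstract makes is precisely about how those weights are constructed: energy-based weights over-fund flat blocks, so PD-RC filters the energy features first so the proportional split tracks HVS sensitivity instead.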
The relationship between vibration and shape: A study on visual-tactile cross-modal correspondences
IF 3.4 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-09-02 | DOI: 10.1016/j.displa.2025.103211
Jutao Li, Zixuan Zhang, Yanqun Huang, Jingxuan Yuan, Ningyuan Wang
{"title":"The relationship between vibration and shape: A study on visual-tactile cross-modal correspondences","authors":"Jutao Li ,&nbsp;Zixuan Zhang ,&nbsp;Yanqun Huang ,&nbsp;Jingxuan Yuan ,&nbsp;Ningyuan Wang","doi":"10.1016/j.displa.2025.103211","DOIUrl":"10.1016/j.displa.2025.103211","url":null,"abstract":"<div><div>Tactile and visual feedback are common forms of feedback in existing electronic products. Sensory differences can increase cognitive load, impacting user experience. This study investigates the cross-modal relationship between vibration and shape in tactile-visual perception. We conducted two experiments in which participants experienced vibrations of varying frequencies and amplitudes, then selected corresponding shapes and adjusted their roundness based on their perceptions. The results show that as shapes become more circular, the associated vibration frequency and amplitude decrease. Additionally, there is a negative correlation between shape roundness and vibration amplitude, while the relationship with shape roundness initially decreases and then increases as vibration frequency rises. These findings can assist developers in designing more effective tactile and visual interaction feedback, achieving sensory coherence, and enhancing product usability and comfort.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"91 ","pages":"Article 103211"},"PeriodicalIF":3.4,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145004126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
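The two reported relationships suggest a simple analysis shape: a rank correlation for the monotone roundness-amplitude link, and a quadratic fit for the decrease-then-increase link with frequency. The sketch below runs that analysis on clearly labeled synthetic stand-in data, not the study's measurements.

```python
# Analysis-shape sketch on synthetic stand-in data (NOT the study's data):
# Spearman correlation for the monotone link, quadratic fit for the U-shape.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
roundness = np.linspace(0, 1, 30)
amplitude = 1.0 - 0.8 * roundness + rng.normal(0, 0.05, 30)           # toy monotone
frequency = 2.0 * (roundness - 0.45) ** 2 + 0.5 + rng.normal(0, 0.05, 30)  # toy U-shape

rho, p = spearmanr(roundness, amplitude)
print(f"roundness vs amplitude: rho={rho:.2f}, p={p:.3g}")

a, b, c = np.polyfit(roundness, frequency, 2)                          # U-shape fit
print(f"quadratic fit: {a:.2f}r^2 + {b:.2f}r + {c:.2f} (min at r={-b/(2*a):.2f})")
```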