Displays: Latest Articles

VR-empowered interior design: Enhancing efficiency and quality through immersive experiences
IF 3.7 · CAS Zone 2 (Engineering & Technology)
Displays · Pub Date: 2024-11-16 · DOI: 10.1016/j.displa.2024.102887
Pengjun Wu, Yao Liu, Huijie Chen, Xiaowen Li, Hui Wang
{"title":"VR-empowered interior design: Enhancing efficiency and quality through immersive experiences","authors":"Pengjun Wu ,&nbsp;Yao Liu ,&nbsp;Huijie Chen ,&nbsp;Xiaowen Li ,&nbsp;Hui Wang","doi":"10.1016/j.displa.2024.102887","DOIUrl":"10.1016/j.displa.2024.102887","url":null,"abstract":"<div><div>Traditional interior design methods often struggle to meet the increasingly diverse needs of modern users. As a result, virtual reality (VR) technology has emerged as an innovative solution. This study explores the application of VR technology in interior design (ID), with a focus on enhancing efficiency and quality in the design process. To achieve this, we developed an ID platform based on VR technology and employed Level of Detail (LOD) techniques for 3D model simplification, significantly improving rendering efficiency and effectively reducing rendering times. Performance testing results indicate that the average rendering times decreased from 721.50 ms, 811.20 ms, and 748.10 ms to 335.80 ms, 431.10 ms, and 371.30 ms, respectively, while the frame rates (FPS) increased from 100.80, 85.00, and 93.40 to 107.50, 91.00, and 100.20. These experimental results demonstrate that VR-based design not only accelerates the design process but also significantly enhances the visual quality of interior spaces, providing users with a more immersive experience.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"86 ","pages":"Article 102887"},"PeriodicalIF":3.7,"publicationDate":"2024-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142706221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
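To make the LOD mechanism concrete, below is a minimal Python sketch of discrete LOD selection by camera distance, the kind of mesh simplification the abstract credits for the reduced rendering times. The distance thresholds, triangle counts, and the hypothetical sofa mesh are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of discrete Level-of-Detail (LOD) selection. All thresholds
# and triangle counts below are illustrative assumptions.
from dataclasses import dataclass
import math

@dataclass
class LodLevel:
    name: str
    max_distance: float   # camera distance (meters) up to which this level is used
    triangles: int        # mesh complexity at this level

# A hypothetical furniture mesh with three simplification levels.
SOFA_LODS = [
    LodLevel("high",   3.0, 120_000),
    LodLevel("medium", 8.0, 30_000),
    LodLevel("low",    math.inf, 6_000),
]

def select_lod(camera_pos, object_pos, lods):
    """Pick the first LOD whose distance band contains the camera."""
    dist = math.dist(camera_pos, object_pos)
    for lod in lods:
        if dist <= lod.max_distance:
            return lod
    return lods[-1]

if __name__ == "__main__":
    for cam in [(0, 1.6, 1.0), (0, 1.6, 5.0), (0, 1.6, 20.0)]:
        lod = select_lod(cam, (0, 0, 0), SOFA_LODS)
        print(f"camera at {cam}: render '{lod.name}' ({lod.triangles} tris)")
```

Distant objects draw far fewer triangles, which is the basic trade that lowers render time without a visible quality loss up close.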
SPFont: Stroke potential features embedded GAN for Chinese calligraphy font generation
IF 3.7 · CAS Zone 2 (Engineering & Technology)
Displays · Pub Date: 2024-11-16 · DOI: 10.1016/j.displa.2024.102876
Fangmei Chen, Chen Wang, Xingchen Yao, Fuming Sun
{"title":"SPFont: Stroke potential features embedded GAN for Chinese calligraphy font generation","authors":"Fangmei Chen,&nbsp;Chen Wang,&nbsp;Xingchen Yao,&nbsp;Fuming Sun","doi":"10.1016/j.displa.2024.102876","DOIUrl":"10.1016/j.displa.2024.102876","url":null,"abstract":"<div><div>Chinese calligraphy font generation is an extremely challenging problem. Firstly, Chinese calligraphy fonts have complex structures. The accuracy and artistic quality of the generated fonts will be affected by the order and layout of the strokes as well as the relationships between them. Secondly, the number of Chinese characters is large, but existing calligraphy works are scarce. Hence, it is difficult to establish a comprehensive and high-quality Chinese calligraphy dataset. In this paper, we propose an unsupervised calligraphy font generation network SPFont. It is based on a generative adversarial network (GAN) framework. The generator includes a style feature encoder, a content feature encoder, a stroke potential feature fusion module (SPFM) and a decoder. The SPFM module, by overlaying lower-level style and content features, better preserves fine details of the font such as stroke thickness, curve shapes and other characteristics. The SPFM module and the extracted style features are fused and then fed into the decoder, allowing it to consider the influence of style, content and stroke potential simultaneously during the generation process. Experimental results demonstrate that our model generates Chinese calligraphy fonts with higher quality compared to previous methods.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102876"},"PeriodicalIF":3.7,"publicationDate":"2024-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142706429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
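The abstract describes the SPFM as overlaying lower-level style and content features so fine stroke details survive into the decoder. Below is a minimal PyTorch sketch of that idea under stated assumptions: the 1x1-conv fusion, the residual connection, and all layer sizes are guesses for illustration, not the paper's architecture.

```python
# Conceptual sketch of a stroke-potential feature fusion step: overlay
# (concatenate) low-level style and content maps, mix them, and keep a
# residual path so content structure is preserved. Layer choices are assumed.
import torch
import torch.nn as nn

class StrokePotentialFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv mixes the overlaid style/content channels.
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, style_feat: torch.Tensor, content_feat: torch.Tensor):
        fused = torch.cat([style_feat, content_feat], dim=1)  # overlay along channels
        return self.act(self.mix(fused)) + content_feat       # residual keeps content structure

if __name__ == "__main__":
    s = torch.randn(1, 64, 32, 32)  # low-level style features
    c = torch.randn(1, 64, 32, 32)  # low-level content features
    print(StrokePotentialFusion(64)(s, c).shape)  # torch.Size([1, 64, 32, 32])
```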
HHGraphSum: Hierarchical heterogeneous graph learning for extractive document summarization
IF 3.7 · CAS Zone 2 (Engineering & Technology)
Displays · Pub Date: 2024-11-16 · DOI: 10.1016/j.displa.2024.102884
Pengyi Hao, Cunqi Wu, Cong Bai
{"title":"HHGraphSum: Hierarchical heterogeneous graph learning for extractive document summarization","authors":"Pengyi Hao ,&nbsp;Cunqi Wu ,&nbsp;Cong Bai","doi":"10.1016/j.displa.2024.102884","DOIUrl":"10.1016/j.displa.2024.102884","url":null,"abstract":"<div><div>Extractive summarization aims to select important sentences from the document to generate a summary. However, current extractive document summarization methods fail to fully consider the semantic information among sentences and the various relations in the entire document. Therefore, a novel end-to-end framework named hierarchical heterogeneous graph learning for document summarization (HHGraphSum) is proposed in this paper. In this framework, a hierarchical heterogeneous graph is constructed for the whole document, where the representation of sentences is learnt by several levels of graph neural network. The combination of single-direction message passing and bidirectional message passing helps graph learning obtain effective relations among sentences and words. For capturing the rich semantic information, space–time collaborative learning is designed to generate the primary features of sentences which are enhanced in graph learning. For generating a less redundant and more precise summary, a LSTM based predictor and a blocking strategy are explored. Evaluations both on a single-document dataset and a multi-document dataset demonstrate the effectiveness of the HHGraphSum. The code of HHGraphSum is available on Github:<span><span>https://github.com/Devin100086/HHGraphSum</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"86 ","pages":"Article 102884"},"PeriodicalIF":3.7,"publicationDate":"2024-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142706219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
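As a concrete illustration of the heterogeneous structure the abstract relies on, the toy sketch below builds a bipartite word-sentence graph: word nodes, sentence nodes, and an edge whenever a word occurs in a sentence. The hierarchical levels, edge weighting, and message passing of HHGraphSum are omitted; this only shows how sentences become second-order neighbours through shared words.

```python
# Toy word-sentence heterogeneous graph: bipartite adjacency between word
# nodes and sentence nodes. Edge weights (e.g., TF-IDF) are omitted.
from collections import defaultdict

def build_word_sentence_graph(sentences):
    """Return bipartite adjacency: word -> sentence ids, sentence id -> words."""
    word_to_sents = defaultdict(set)
    sent_to_words = {}
    for sid, sent in enumerate(sentences):
        words = set(sent.lower().split())
        sent_to_words[sid] = words
        for w in words:
            word_to_sents[w].add(sid)
    return word_to_sents, sent_to_words

doc = [
    "Extractive summarization selects important sentences",
    "A heterogeneous graph links words and sentences",
    "Graph learning scores sentences for the summary",
]
w2s, s2w = build_word_sentence_graph(doc)
# Sentences sharing the word "sentences" become second-order neighbours:
print(sorted(w2s["sentences"]))  # [0, 1, 2]
```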
Learning domain-adaptive palmprint anti-spoofing feature from multi-source domains
IF 3.7 · CAS Zone 2 (Engineering & Technology)
Displays · Pub Date: 2024-11-16 · DOI: 10.1016/j.displa.2024.102871
Chengcheng Liu, Huikai Shao, Dexing Zhong
{"title":"Learning domain-adaptive palmprint anti-spoofing feature from multi-source domains","authors":"Chengcheng Liu ,&nbsp;Huikai Shao ,&nbsp;Dexing Zhong","doi":"10.1016/j.displa.2024.102871","DOIUrl":"10.1016/j.displa.2024.102871","url":null,"abstract":"<div><div>Palmprint anti-spoofing is essential for securing palmprint recognition systems. Although some anti-spoofing methods excel on closed datasets, their ability to generalize across unknown domains is often limited. This paper introduces the Domain-Adaptive Palmprint Anti-Spoofing Network (DAPANet), which leverages multiple known spoofing domains to extract domain-invariant spoofing clues from unlabeled domains. DAPANet tackles the domain adaptation challenge using three strategies: global domain alignment, subdomain alignment, and the separation of distinct subdomains. The framework consists of a public feature extraction module, a domain adaptation module, a domain classifier, and a fusion classifier. Initially, the public feature extraction module extracts palmprint features. Subsequently, the domain adaptation module aligns target domain features with source domain features to generate domain-specific outputs. The domain classifier provides initial classifiable features, which are then integrated by DAPANet, employing a unified fusion classifier for decision-making. Comprehensive experiments conducted on XJTU-PalmReplay database across various cross-domain scenarios confirm the efficacy of the proposed method.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"86 ","pages":"Article 102871"},"PeriodicalIF":3.7,"publicationDate":"2024-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142706220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
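The abstract's global domain alignment aligns target-domain features with source-domain features. A standard way to express such an alignment objective is a Maximum Mean Discrepancy (MMD) penalty between feature batches; the RBF-kernel sketch below shows that generic technique, and whether DAPANet uses MMD specifically is an assumption on my part.

```python
# Generic RBF-kernel MMD^2 between two feature batches, a common global
# domain-alignment penalty. Its use here stands in for DAPANet's (unspecified)
# alignment loss; the bandwidth sigma is arbitrary.
import torch

def rbf_mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """MMD^2 estimate (biased: diagonal terms included) for [n, d] and [m, d]."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

source_feats = torch.randn(32, 128)
target_feats = torch.randn(32, 128) + 0.5  # shifted domain
print(float(rbf_mmd2(source_feats, target_feats)))  # > 0: domains differ
```

Minimizing such a penalty during training pulls target features toward the source distribution, which is the intuition behind domain-invariant anti-spoofing clues.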
DefocusSR2: An efficient depth-guided and distillation-based framework for defocus images super-resolution
IF 3.7 · CAS Zone 2 (Engineering & Technology)
Displays · Pub Date: 2024-11-15 · DOI: 10.1016/j.displa.2024.102883
Qipei Li, Da Pan, Zefeng Ying, Qirong Liang, Ping Shi
{"title":"DefocusSR2: An efficient depth-guided and distillation-based framework for defocus images super-resolution","authors":"Qipei Li,&nbsp;Da Pan,&nbsp;Zefeng Ying,&nbsp;Qirong Liang,&nbsp;Ping Shi","doi":"10.1016/j.displa.2024.102883","DOIUrl":"10.1016/j.displa.2024.102883","url":null,"abstract":"<div><div>Existing image super-resolution (SR) methods often lead to oversharpening, particularly in defocused images. However, we have observed that defocused regions and focused regions present different levels of recovery difficulty. This observation opens up opportunities for more efficient enhancements. In this paper, we introduce DefocusSR2, an efficient framework designed for super-resolution of defocused images. DefocusSR2 consists of two main modules: Depth-Guided Segmentation (DGS) and Defocus-Aware Classify Enhance (DCE). In the DGS module, we utilize MobileSAM, guided by depth information, to accurately segment the input image and generate defocus maps. These maps provide detailed information about the locations of defocused areas. In the DCE module, we crop the defocus map and classify the segments into defocused and focused patches based on a predefined threshold. Through knowledge distillation and the fusion of blur kernel matching, the network retains the fuzzy kernel to reduce computational load. Practically, the defocused patches are fed into the Efficient Blur Match SR Network (EBM-SR), where the blur kernel is preserved to alleviate computational demands. The focused patches, on the other hand, are processed using more computationally intensive operations. Thus, DefocusSR2 integrates defocus classification and super-resolution within a unified framework. Experiments demonstrate that DefocusSR2 can accelerate most SR methods, reducing the FLOPs of SR models by approximately 70% while maintaining state-of-the-art SR performance.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"86 ","pages":"Article 102883"},"PeriodicalIF":3.7,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142706179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
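The DCE module's thresholding step lends itself to a short sketch: crop the defocus map into patches, compare each patch's mean blur against a threshold, and route defocused patches to the lighter path while focused patches get the full SR computation. Patch size, the threshold value, and the random stand-in defocus map below are assumptions.

```python
# Sketch of defocus-aware patch routing: blurred patches go to a cheap branch,
# focused patches to the expensive SR branch. Values are placeholders.
import numpy as np

def route_patches(defocus_map: np.ndarray, patch: int = 32, thresh: float = 0.5):
    h, w = defocus_map.shape
    cheap, heavy = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            blur = defocus_map[y:y + patch, x:x + patch].mean()
            (cheap if blur > thresh else heavy).append((y, x))
    return cheap, heavy

rng = np.random.default_rng(0)
dmap = rng.random((128, 128))           # stand-in defocus map in [0, 1]
defocused, focused = route_patches(dmap)
print(len(defocused), "patches to the light branch;", len(focused), "to full SR")
```

Skipping heavy computation on patches that are blurry anyway is what yields the FLOPs savings the abstract reports.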
Mambav3d: A mamba-based virtual 3D module stringing semantic information between layers of medical image slices
IF 3.7 · CAS Zone 2 (Engineering & Technology)
Displays · Pub Date: 2024-11-15 · DOI: 10.1016/j.displa.2024.102890
Xiaoxiao Liu, Yan Zhao, Shigang Wang, Jian Wei
{"title":"Mambav3d: A mamba-based virtual 3D module stringing semantic information between layers of medical image slices","authors":"Xiaoxiao Liu,&nbsp;Yan Zhao,&nbsp;Shigang Wang,&nbsp;Jian Wei","doi":"10.1016/j.displa.2024.102890","DOIUrl":"10.1016/j.displa.2024.102890","url":null,"abstract":"<div><div>High-precision medical image segmentation provides a reliable basis for clinical analysis and diagnosis. Researchers have developed various models to enhance the segmentation performance of medical images. Among these methods, two-dimensional models such as Unet exhibit a simple structure, low computational resource requirements, and strong local feature capture capabilities. However, their spatial information utilization is insufficient, limiting their segmentation accuracy. Three-dimensional models, such as 3D Unet, utilize spatial information more fully and are suitable for complex tasks, but they require high computational resources and have limited real-time performance. In this paper, we propose a virtual 3D module (Mambav3d) based on mamba, which introduces spatial information into 2D segmentation tasks to more fully integrate the 3D information of the image and further improve segmentation accuracy under conditions of low computational resource requirements. Mambav3d leverages the properties of hidden states in the state space model, combined with the shift of visual perspective, to incorporate semantic information between different anatomical planes in different slices of the same 3D sample. The voxel segmentation is converted to pixel segmentation to reduce model training data requirements and model complexity while ensuring that the model integrates 3D information and enhances segmentation accuracy. The model references the information from previous layers when labeling the current layer, thereby facilitating the transfer of semantic information between slice layers and avoiding the high computational cost associated with using structures such as Transformers between layers. We have implemented Mambav3d on Unet and evaluated its performance on the BraTs, Amos, and KiTs datasets, demonstrating superiority over other state-of-the-art methods.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102890"},"PeriodicalIF":3.7,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
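The core idea, carrying semantic state from previous slices into the current 2D segmentation, can be illustrated with a small recurrent stand-in. The sketch below uses a GRU cell as the inter-slice memory purely for illustration; Mambav3d itself builds on a state space model, so everything here (layer sizes, pooling, the GRU) is an assumption rather than the paper's design.

```python
# Conceptual sketch: a 2D conv encoder segments each slice while a recurrent
# hidden state carries context from earlier slices. The GRU is a stand-in for
# Mambav3d's state-space hidden state.
import torch
import torch.nn as nn

class SlicewiseSegmenter(nn.Module):
    def __init__(self, feat=16, state=32, classes=2):
        super().__init__()
        self.encode = nn.Conv2d(1, feat, 3, padding=1)
        self.state = nn.GRUCell(feat, state)           # inter-slice memory
        self.head = nn.Conv2d(feat + state, classes, 1)

    def forward(self, volume):                          # volume: [slices, 1, H, W]
        h = torch.zeros(1, self.state.hidden_size)
        outs = []
        for sl in volume:                               # iterate anatomical slices
            f = torch.relu(self.encode(sl.unsqueeze(0)))
            h = self.state(f.mean(dim=(2, 3)), h)       # update memory from pooled features
            hmap = h[:, :, None, None].expand(-1, -1, f.shape[2], f.shape[3])
            outs.append(self.head(torch.cat([f, hmap], dim=1)))
        return torch.cat(outs)                          # [slices, classes, H, W]

print(SlicewiseSegmenter()(torch.randn(4, 1, 64, 64)).shape)  # [4, 2, 64, 64]
```

The point of the pattern is that each slice is still segmented by a cheap 2D network, yet its prediction is conditioned on what came before, which is how 3D context enters at near-2D cost.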
Luminance decomposition and Transformer based no-reference tone-mapped image quality assessment
IF 3.7 · CAS Zone 2 (Engineering & Technology)
Displays · Pub Date: 2024-11-14 · DOI: 10.1016/j.displa.2024.102881
Zikang Chen, Zhouyan He, Ting Luo, Chongchong Jin, Yang Song
{"title":"Luminance decomposition and Transformer based no-reference tone-mapped image quality assessment","authors":"Zikang Chen ,&nbsp;Zhouyan He ,&nbsp;Ting Luo ,&nbsp;Chongchong Jin ,&nbsp;Yang Song","doi":"10.1016/j.displa.2024.102881","DOIUrl":"10.1016/j.displa.2024.102881","url":null,"abstract":"<div><div>Tone-Mapping Operators (TMOs) play a crucial role in converting High Dynamic Range (HDR) images into Tone-Mapped Images (TMIs) with standard dynamic range for optimal display on standard monitors. Nevertheless, TMIs generated by distinct TMOs may exhibit diverse visual artifacts, highlighting the significance of TMI Quality Assessment (TMIQA) methods in predicting perceptual quality and guiding advancements in TMOs. Inspired by luminance decomposition and Transformer, a new no-reference TMIQA method based on deep learning is proposed in this paper, named LDT-TMIQA. Specifically, a TMI will change under the influence of different TMOs, potentially resulting in either over-exposure or under-exposure, leading to structure distortion and changes in texture details. Therefore, we first decompose the luminance channel of a TMI into a base layer and a detail layer that capture structure information and texture information, respectively. Then, they are employed with the TMI collectively as inputs to the Feature Extraction Module (FEM) to enhance the availability of prior information on luminance, structure, and texture. Additionally, the FEM incorporates the Cross Attention Prior Module (CAPM) to model the interdependencies among the base layer, detail layer, and TMI while employing the Iterative Attention Prior Module (IAPM) to extract multi-scale and multi-level visual features. Finally, a Feature Selection Fusion Module (FSFM) is proposed to obtain final effective features for predicting the quality scores of TMIs by reducing the weight of unnecessary features and fusing the features of different levels with equal importance. Extensive experiments on the publicly available TMI benchmark database indicate that the proposed LDT-TMIQA reaches the state-of-the-art level.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102881"},"PeriodicalIF":3.7,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
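The luminance decomposition step is easy to make concrete: split the luminance channel into a smooth base layer (structure) and a residual detail layer (texture). The sketch below uses a Gaussian filter as the smoother, which is an assumption; the paper may use a different, e.g. edge-preserving, filter.

```python
# Base/detail luminance decomposition: base = smoothed luminance (structure),
# detail = residual (texture). Gaussian smoothing is an assumed choice.
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose_luminance(lum: np.ndarray, sigma: float = 5.0):
    base = gaussian_filter(lum, sigma=sigma)  # low-frequency structure
    detail = lum - base                       # high-frequency texture residual
    return base, detail

rng = np.random.default_rng(1)
lum = rng.random((64, 64)).astype(np.float32)  # stand-in luminance channel
base, detail = decompose_luminance(lum)
print(np.allclose(base + detail, lum))  # True: the decomposition is lossless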
Precise subpixel luminance extraction method for De-Mura of AMOLED displays
IF 3.7 · CAS Zone 2 (Engineering & Technology)
Displays · Pub Date: 2024-11-14 · DOI: 10.1016/j.displa.2024.102889
Zhong Zheng, Zhaohua Zhou, Ruipeng Chen, Jiajie Liu, Chun Liu, Lirong Zhang, Lei Zhou, Miao Xu, Lei Wang, Weijing Wu, Junbiao Peng
{"title":"Precise subpixel luminance extraction method for De-Mura of AMOLED displays","authors":"Zhong Zheng,&nbsp;Zhaohua Zhou,&nbsp;Ruipeng Chen,&nbsp;Jiajie Liu,&nbsp;Chun Liu,&nbsp;Lirong Zhang,&nbsp;Lei Zhou,&nbsp;Miao Xu,&nbsp;Lei Wang,&nbsp;Weijing Wu,&nbsp;Junbiao Peng","doi":"10.1016/j.displa.2024.102889","DOIUrl":"10.1016/j.displa.2024.102889","url":null,"abstract":"<div><div>Currently, Mura defects have a significant impact on the yield of AMOLED panels, and De-Mura plays a critical role in the compensation. To enhance the applicability of the subpixel luminance extraction method in De-Mura and to address inaccuracies caused by aperture diffraction limit and geometric defocusing in camera imaging, this paper proposes a precise extraction method based on effective area. We establish the concept of the effective area first and then determine the effective area of subpixel imaging on the camera sensor by incorporating the circle of confusion (CoC) caused by aperture diffraction limits and geometric defocusing. Finally, more precise luminance information is obtained. Results show that, after compensation, the Mura on the white screen is almost eliminated subjectively. Objectively, by constructing normalized luminance curves for subpixels in Mura regions, the standard deviation indicates that our method outperforms the traditional whole-pixel method, improving uniformity by approximately 50%.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"86 ","pages":"Article 102889"},"PeriodicalIF":3.7,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142706177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
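The effective area here depends on the circle of confusion from geometric defocus together with the aperture diffraction limit. The sketch below combines the two using textbook optics formulas (the thin-lens geometric CoC and the Airy disk diameter 2.44*lambda*N); combining them in quadrature, and all the example numbers, are assumptions, not the paper's exact model.

```python
# Textbook estimate of total blur diameter on the sensor from geometric
# defocus plus diffraction. Quadrature combination is an assumption.
import math

def geometric_coc(f_mm, n_stop, focus_mm, subject_mm):
    """Geometric CoC diameter (mm) for a subject off the focus plane."""
    aperture = f_mm / n_stop
    return aperture * abs(subject_mm - focus_mm) / subject_mm * f_mm / (focus_mm - f_mm)

def diffraction_spot(n_stop, wavelength_mm=550e-6):
    """Airy disk diameter (mm): 2.44 * lambda * N."""
    return 2.44 * wavelength_mm * n_stop

def effective_blur(f_mm, n_stop, focus_mm, subject_mm):
    g = geometric_coc(f_mm, n_stop, focus_mm, subject_mm)
    d = diffraction_spot(n_stop)
    return math.hypot(g, d)  # quadrature combination (assumption)

# Hypothetical setup: 50 mm lens at f/8, focused at 500 mm, panel at 498 mm.
print(f"{effective_blur(50, 8, 500, 498):.4f} mm blur diameter on the sensor")
```

Knowing this blur diameter tells you how far a subpixel's light spreads on the sensor, which is what bounds the effective area used for luminance extraction.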
Font and background color combinations influence recognition efficiency: A novel method via primary color Euclidean distance and response surface analysis
IF 3.7 · CAS Zone 2 (Engineering & Technology)
Displays · Pub Date: 2024-11-12 · DOI: 10.1016/j.displa.2024.102873
Wenchao Zhu, Zeliang Cheng, Qi Wang, Jing Du, Yingzi Lin
{"title":"Font and background color combinations influence recognition efficiency: A novel method via primary color Euclidean distance and response surface analysis","authors":"Wenchao Zhu ,&nbsp;Zeliang Cheng ,&nbsp;Qi Wang ,&nbsp;Jing Du ,&nbsp;Yingzi Lin","doi":"10.1016/j.displa.2024.102873","DOIUrl":"10.1016/j.displa.2024.102873","url":null,"abstract":"<div><div>The readability of human–computer interfaces impacts the users’ visual performance while using electronic devices, which gains inadequate attention. This situation is critical during high-stress conditions such as firefighting, where accurate and fast information processing is critical. This study addresses how font and background color combinations on Liquid Crystal displays (LCDs) affect recognition efficiency. A novel concept, primary color Euclidean distance (PCED), is introduced and testified under a repeated-measures experiment. Three factors were investigated: background color (black, white), font color (red, green, blue), and PCEDs. A total of 24 participants were recruited. Results demonstrate that color combinations with specific PCED values can substantially impact recognition efficiency. By using RSA, this study modelled the response time in a generalized mathematical model, which is response surface analysis. Results showed that blue font colors under a black background showed the longest response time. This study also explored the influence of physical stress on recognition efficiency, revealing a latency of about 100 ms across all color combinations. The findings offer a methodological advancement in understanding the effects of color combinations in digital displays, setting the stage for future research in diverse demographic and technological contexts, including mixed reality.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102873"},"PeriodicalIF":3.7,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142706427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
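Reading PCED literally as a Euclidean distance between font and background colors in primary-color (RGB) space gives the short sketch below; any normalization the paper applies is not reproduced here. Note that blue-on-black and green-on-black share the same raw RGB distance (255.0), which is consistent with the study treating font color and PCED as separate factors.

```python
# PCED interpreted as plain Euclidean distance in 8-bit RGB space (assumed).
import math

def pced(font_rgb, bg_rgb):
    return math.dist(font_rgb, bg_rgb)

pairs = {
    "blue on black":  ((0, 0, 255), (0, 0, 0)),
    "green on black": ((0, 255, 0), (0, 0, 0)),
    "red on white":   ((255, 0, 0), (255, 255, 255)),
}
for name, (font, bg) in pairs.items():
    print(f"{name}: PCED = {pced(font, bg):.1f}")
```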
GLDBF: Global and local dual-branch fusion network for no-reference point cloud quality assessment
IF 3.7 · CAS Zone 2 (Engineering & Technology)
Displays · Pub Date: 2024-11-09 · DOI: 10.1016/j.displa.2024.102882
Zhichao Chen, Shuyu Xiao, Yongfang Wang, Yihan Wang, Hongming Cai
{"title":"GLDBF: Global and local dual-branch fusion network for no-reference point cloud quality assessment","authors":"Zhichao Chen ,&nbsp;Shuyu Xiao ,&nbsp;Yongfang Wang ,&nbsp;Yihan Wang ,&nbsp;Hongming Cai","doi":"10.1016/j.displa.2024.102882","DOIUrl":"10.1016/j.displa.2024.102882","url":null,"abstract":"<div><div>No-reference Point Cloud Quality Assessment (NR-PCQA) is a challenge in the field of media quality assessment, such as inability to accurately capture quality-related features due to the unique scattered structure of points and less considering global features and local features jointly in the existing no-reference PCQA metrics. To address these challenges, we propose a Global and Local Dual-Branch Fusion (GLDBF) network for no-reference point cloud quality assessment. Firstly, sparse convolution is used to extract the global quality feature of distorted Point Clouds (PCs). Secondly, graph weighted PointNet++ is proposed to extract the multi-level local features of point cloud, and the offset attention mechanism is further used to enhance local effective features. Transformer-based fusion module is also proposed to fuse multi-level local features. Finally, we joint the global and local dual branch fusion modules via multilayer perceptron to predict the quality score of distorted PCs. Experimental results show that the proposed algorithm can achieves state-of-the-art performance compared with existing methods in assessing the quality of distorted PCs.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102882"},"PeriodicalIF":3.7,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
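As a final illustration, the fusion stage the abstract ends with (global features from the sparse-conv branch and local features from the PointNet++ branch joined by a multilayer perceptron that regresses a quality score) can be sketched as below. The feature dimensions, plain concatenation, and hidden width are assumptions about GLDBF's fusion classifier.

```python
# Sketch of an MLP that fuses a global and a local feature vector into a
# scalar quality score. Dimensions and concatenation are assumed.
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    def __init__(self, g_dim=256, l_dim=256, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(g_dim + l_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),          # scalar quality score
        )

    def forward(self, global_feat, local_feat):
        return self.mlp(torch.cat([global_feat, local_feat], dim=-1))

g = torch.randn(4, 256)   # global branch output (e.g., sparse-conv features)
l = torch.randn(4, 256)   # local branch output (pooled multi-level features)
print(DualBranchFusion()(g, l).shape)  # torch.Size([4, 1])
```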