Displays — Latest Articles

ITSRN++: Stronger and better implicit transformer network for screen content image continuous super-resolution
IF 3.7 · CAS Q2 · Engineering & Technology
Displays Pub Date: 2024-11-22 DOI: 10.1016/j.displa.2024.102865
Sheng Shen, Huanjing Yue, Kun Li, Jingyu Yang
Online screen sharing and remote collaboration are now ubiquitous. However, screen content may be downsampled and compressed during transmission, yet displayed on large screens or zoomed in for detailed inspection at the receiver side. A strong and efficient screen content image (SCI) super-resolution (SR) method is therefore needed. We observe that weight-sharing upsamplers (such as deconvolution or pixel shuffle) can harm the sharp, thin edges typical of SCIs, and that fixed-scale upsamplers cannot flexibly fit screens of various sizes. To solve this, we propose an implicit transformer network for continuous SCI SR, termed ITSRN++. Specifically, we propose a modulation-based transformer as the upsampler, which modulates pixel features in discrete space via a periodic nonlinear function to generate features in continuous space. To better restore high-frequency details in SCIs, we further propose a dual-branch block (DBB) as the feature-extraction backbone, in which convolution and attention branches operate in parallel on the same linearly transformed values. In addition, we construct the large-scale SCI2K dataset to facilitate research on SCI SR. Experimental results on nine datasets demonstrate that the proposed method achieves state-of-the-art performance for SCI SR and also works well for natural-image SR.
Displays, Volume 86, Article 102865.
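The modulation idea can be sketched in toy form: modulate a discrete pixel feature by a periodic function of the continuous sub-pixel offset, so any real-valued coordinate yields a feature. Everything below (the cosine basis, the `freqs` matrix) is a hypothetical stand-in for the paper's learned periodic modulation, not the authors' code.

```python
import numpy as np

def periodic_modulation(feat, delta, freqs):
    # Modulate per-channel features by a cosine of the sub-pixel offset.
    # `freqs` (C x 2) plays the role of learned frequencies per channel.
    dx, dy = delta
    phase = freqs[:, 0] * dx + freqs[:, 1] * dy
    return feat * np.cos(2 * np.pi * phase)

def query_continuous(feature_map, x, y, freqs):
    # Query a feature at an arbitrary continuous (x, y): take the nearest
    # discrete feature and modulate it by the fractional offset.
    ix, iy = int(np.floor(x)), int(np.floor(y))
    return periodic_modulation(feature_map[iy, ix], (x - ix, y - iy), freqs)
```

At integer coordinates the offset is zero and the cosine is 1, so the discrete feature is reproduced exactly; in between, the periodic term varies it smoothly with the query position.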
Citations: 0
Orientation stabilization of light-controlling layers using liquid crystal polymers
IF 3.7 · CAS Q2 · Engineering & Technology
Displays Pub Date: 2024-11-21 DOI: 10.1016/j.displa.2024.102893
Xinmin Yu, Tangwu Li, Jingxin Sang, Ming Xiao, Jianhua Shang, Jiatong Sun
Photo-alignment technology offers an advanced optical alignment method and higher alignment quality than conventional rubbing techniques. However, challenges of photo-stability and response speed have hindered its practical application. This work addresses the stability of the photo-alignment layer, with the azo dye SD1 employed as the photo-alignment material. Two solutions were investigated: passivation of SD1 films with the reactive mesogen (RM) material RM257, and the use of polymer–azo dye composites in place of the photo-alignment layer. Experimental methods including spin-coating and UV polymerization were employed, and optimal monomer concentrations were determined by photo-stability tests. The results demonstrate that the passivation layer enhances photo-stability. Atomic force microscopy (AFM) was used to assess film roughness and improve film quality. Furthermore, electro-optical tests indicate that the approach has no negative impact on LCD performance. These findings provide a more robust approach to stabilizing liquid crystal (LC) alignment for advanced display technologies.
Displays, Volume 86, Article 102893.
Citations: 0
Frequency-spatial interaction network for gaze estimation
IF 3.7 · CAS Q2 · Engineering & Technology
Displays Pub Date: 2024-11-21 DOI: 10.1016/j.displa.2024.102878
Yuanning Jia, Zhi Liu, Ying Lv, Xiaofeng Lu, Xuefeng Liu, Jie Chen
Gaze estimation, a fundamental task in computer vision, determines the direction a person is looking. With advances in convolutional neural networks (CNNs) and the availability of large-scale datasets, appearance-based models have made significant progress. Nonetheless, CNNs are limited in extracting global information from features, which constrains gaze-estimation performance. Inspired by the properties of the Fourier transform in signal processing, we propose the Frequency-Spatial Interaction network for Gaze estimation (FSIGaze), which integrates residual modules and Frequency-Spatial Synergistic (FSS) modules. Specifically, the FSS module is a dual-branch structure with a spatial branch and a frequency branch. The frequency branch applies the Fast Fourier Transform to map a latent representation into the frequency domain and uses an adaptive frequency filter to achieve an image-size receptive field, while the spatial branch extracts local detailed features. Acknowledging the synergistic benefits of global and local information in gaze estimation, we introduce a Dual-domain Interaction Block (DIB) to enhance the model. Furthermore, we adopt a multi-task learning strategy, incorporating eye-region detection as an auxiliary task to refine facial features. Extensive experiments demonstrate that our model surpasses state-of-the-art gaze-estimation models on three three-dimensional (3D) datasets and delivers competitive results on two two-dimensional (2D) datasets.
Displays, Volume 86, Article 102878.
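The frequency branch described above — FFT, adaptive per-frequency filter, inverse FFT — can be illustrated generically. The elementwise filter `filt` stands in for the learned adaptive filter; this is a sketch of the mechanism, not FSIGaze's implementation.

```python
import numpy as np

def frequency_branch(x, filt):
    # To the frequency domain, gate every frequency, and back: because each
    # frequency coefficient depends on every input pixel, the effective
    # receptive field spans the whole image in a single operation.
    X = np.fft.fft2(x)
    return np.real(np.fft.ifft2(X * filt))
```

With an all-ones filter the branch is an identity map; a learned filter instead amplifies or suppresses individual frequencies.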
Citations: 0
AdapSyn: Anomaly detection based on triplet training with adaptive anomaly synthesis
IF 3.7 · CAS Q2 · Engineering & Technology
Displays Pub Date: 2024-11-18 DOI: 10.1016/j.displa.2024.102885
Shijie Zhou, Chunyu Lin, Zisong Chen, Baoqing Guo, Yao Zhao
Few-Shot Anomaly Detection (FSAD) has gained significant attention within anomaly detection (AD) in recent years. The goal is to identify defects at the image level and localize them at the pixel level. Including defect data in training improves model performance, yet defect data are often difficult to obtain in experiments and applications. This paper proposes a more realistic method for synthesizing anomaly data, incorporates the synthesized data into training, and applies it to the FSAD domain through multi-class mixed training. Anomalies generated by our synthesis method closely resemble real ones: the method synthesizes anomalies from normal samples within adaptively selected polygonal-mesh regions. We strengthen the model's ability to distinguish positive from negative samples by forming triplets of synthesized anomaly data and normal data during training, so that finer details of normal samples are captured. During testing, we estimate the feature distribution of normal images from a few normal samples of unknown classes to adapt quickly to new categories. The effectiveness of the anomaly-synthesis method is validated experimentally, and comparisons with advanced FSAD methods demonstrate competitive performance.
Displays, Volume 86, Article 102885.
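The triplet objective implied by the abstract — pull normal samples together, push synthesized anomalies away — is, in its standard form, the triplet margin loss. AdapSyn's exact loss may differ; this is the generic formulation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # anchor/positive: embeddings of normal samples;
    # negative: embedding of a synthesized anomaly.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    # Zero once the anomaly is at least `margin` farther than the positive.
    return max(0.0, d_pos - d_neg + margin)
```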
Citations: 0
3D-MSFC: A 3D multi-scale features compression method for object detection
IF 3.7 · CAS Q2 · Engineering & Technology
Displays Pub Date: 2024-11-17 DOI: 10.1016/j.displa.2024.102880
Zhengxin Li, Chongzhen Tian, Hui Yuan, Xin Lu, Hossein Malekmohamadi
As machine vision tasks rapidly evolve, a new compression concept, video coding for machines (VCM), has emerged. However, current VCM methods suit only 2D machine vision tasks. With the rise of autonomous driving, demand for 3D machine vision has grown sharply, producing explosive growth in LiDAR data that must be transmitted efficiently. To address this need, we propose a machine vision-oriented point cloud coding paradigm inspired by VCM. Specifically, we introduce a 3D multi-scale features compression (3D-MSFC) method tailored for 3D object detection. Experimental results show that 3D-MSFC degrades object-detection accuracy by less than 3% at a compression ratio of 2796×, while its low-profile variant, 3D-MSFC-L, degrades accuracy by less than 2% at 463×. The proposed method thus provides an ultra-high compression ratio without a significant accuracy drop, greatly reducing the data transmitted per detection and substantially lowering bandwidth consumption and costs in application scenarios such as smart cities.
Displays, Volume 85, Article 102880.
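To make the compression-ratio bookkeeping concrete, here is a crude uniform scalar quantizer over multi-scale feature maps. The paper's codec is learned, not a fixed quantizer — this only illustrates how a bits-per-value budget translates into a ratio versus float32 storage.

```python
import numpy as np

def quantize_features(feats, bits=4):
    # Uniform scalar quantization of each feature map to `bits` bits per value.
    coded = []
    for f in feats:
        lo, hi = float(f.min()), float(f.max())
        step = (hi - lo) / (2 ** bits - 1) or 1.0  # guard against flat maps
        q = np.round((f - lo) / step).astype(np.uint8)
        coded.append((q, lo, step))
    # Ratio vs. float32 storage, ignoring the tiny (lo, step) header.
    ratio = 32 / bits
    return coded, ratio

def dequantize(coded):
    return [q.astype(np.float32) * step + lo for q, lo, step in coded]
```

The ratios reported in the abstract are far higher than this 8× because the learned codec also shrinks the feature maps themselves, not just the per-value precision.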
Citations: 0
Exploring the brain physiological activity and quantified assessment of VR cybersickness using EEG signals
IF 3.7 · CAS Q2 · Engineering & Technology
Displays Pub Date: 2024-11-17 DOI: 10.1016/j.displa.2024.102879
Mutian Liu, Banghua Yang, Peng Zan, Luting Chen, Baozeng Wang, Xinxing Xia
Cybersickness in virtual reality (VR) significantly impedes improvements to user experience. Sensory conflict theory attributes cybersickness to conflicts in the brain, making brain-physiology-based examination essential for cybersickness research. In this study, we analyze the impact of cybersickness on neural activity and quantify cybersickness from cybersickness-related electroencephalography (EEG) data. We induce cybersickness through view rotation while collecting EEG signals from 36 subjects, and investigate both brain functional connectivity and neural oscillation power to reveal how brain physiological characteristics vary with the degree of cybersickness. Filtering the raw EEG highlights cybersickness-related features, enabling quantified assessment through a Convolutional Temporal-Transformer Network, named CTTNet. The results show that cybersickness significantly reduces beta- and gamma-band power in the frontal lobe and weakens internal connectivity within these bands; conversely, as severity increases, mid-to-high-frequency connectivity between posterior brain regions and the frontal lobe is enhanced. CTTNet evaluates cybersickness accurately by capturing temporal-spatial EEG features and the long-term temporal dependencies of cybersickness. A significant and robust relationship between cybersickness and cerebral physiological characteristics is demonstrated. These findings offer valuable insights for future real-time assessment and mitigation of cybersickness, particularly with respect to brain dynamics.
Displays, Volume 85, Article 102879.
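The band-power features the study tracks (e.g. frontal beta and gamma) can be computed with a generic FFT recipe. This is standard EEG practice rather than the paper's specific pipeline, and band edges such as beta 13–30 Hz vary across studies.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    # Mean spectral power of `signal` within [lo, hi] Hz via the real FFT.
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()
```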
Citations: 0
VR-empowered interior design: Enhancing efficiency and quality through immersive experiences
IF 3.7 · CAS Q2 · Engineering & Technology
Displays Pub Date: 2024-11-16 DOI: 10.1016/j.displa.2024.102887
Pengjun Wu, Yao Liu, Huijie Chen, Xiaowen Li, Hui Wang
Traditional interior design methods often struggle to meet the increasingly diverse needs of modern users, and virtual reality (VR) technology has emerged as an innovative solution. This study explores the application of VR to interior design (ID), focusing on improving the efficiency and quality of the design process. We developed a VR-based ID platform and applied Level of Detail (LOD) techniques for 3D model simplification, significantly improving rendering efficiency and reducing rendering times. Performance tests show average rendering times decreasing from 721.50 ms, 811.20 ms, and 748.10 ms to 335.80 ms, 431.10 ms, and 371.30 ms, respectively, while frame rates (FPS) increased from 100.80, 85.00, and 93.40 to 107.50, 91.00, and 100.20. These results demonstrate that VR-based design not only accelerates the design process but also significantly enhances the visual quality of interior spaces, providing users with a more immersive experience.
Displays, Volume 86, Article 102887.
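A quick sanity check on the reported timings: the LOD simplification makes each of the three test scenes roughly 1.9–2.1× faster.

```python
# Average rendering times (ms) before and after LOD simplification,
# as reported in the abstract.
before = [721.50, 811.20, 748.10]
after = [335.80, 431.10, 371.30]
speedups = [b / a for b, a in zip(before, after)]  # ~2.15x, ~1.88x, ~2.01x
```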
Citations: 0
SPFont: Stroke potential features embedded GAN for Chinese calligraphy font generation
IF 3.7 · CAS Q2 · Engineering & Technology
Displays Pub Date: 2024-11-16 DOI: 10.1016/j.displa.2024.102876
Fangmei Chen, Chen Wang, Xingchen Yao, Fuming Sun
Chinese calligraphy font generation is an extremely challenging problem. First, calligraphy fonts have complex structures: the accuracy and artistic quality of generated fonts depend on stroke order, layout, and the relationships between strokes. Second, the number of Chinese characters is large while existing calligraphy works are scarce, making it difficult to build a comprehensive, high-quality calligraphy dataset. In this paper, we propose SPFont, an unsupervised calligraphy font generation network based on a generative adversarial network (GAN) framework. The generator comprises a style feature encoder, a content feature encoder, a stroke potential feature fusion module (SPFM), and a decoder. By overlaying lower-level style and content features, the SPFM module better preserves fine details of the font such as stroke thickness and curve shape. The SPFM output is fused with the extracted style features and fed into the decoder, allowing style, content, and stroke potential to be considered simultaneously during generation. Experimental results demonstrate that our model generates Chinese calligraphy fonts of higher quality than previous methods.
Displays, Volume 85, Article 102876.
Citations: 0
HHGraphSum: Hierarchical heterogeneous graph learning for extractive document summarization
IF 3.7 · CAS Q2 · Engineering & Technology
Displays Pub Date: 2024-11-16 DOI: 10.1016/j.displa.2024.102884
Pengyi Hao, Cunqi Wu, Cong Bai
Extractive summarization selects important sentences from a document to form a summary. However, current extractive methods fail to fully consider the semantic information among sentences and the various relations within the document. We therefore propose HHGraphSum, a novel end-to-end framework of hierarchical heterogeneous graph learning for document summarization. A hierarchical heterogeneous graph is constructed for the whole document, and sentence representations are learned by several levels of graph neural networks; combining single-direction and bidirectional message passing helps graph learning capture effective relations among sentences and words. To capture rich semantic information, space–time collaborative learning generates the primary sentence features that graph learning then enhances, and an LSTM-based predictor with a blocking strategy yields a less redundant, more precise summary. Evaluations on both a single-document dataset and a multi-document dataset demonstrate the effectiveness of HHGraphSum. The code is available at https://github.com/Devin100086/HHGraphSum.
Displays, Volume 86, Article 102884.
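The bidirectional word–sentence message passing can be illustrated on a toy bipartite graph with mean aggregation. Real heterogeneous-graph layers use learned weights and attention; this unweighted sketch only shows the direction of information flow.

```python
import numpy as np

def message_pass(word_feats, sent_feats, edges):
    # One round of word->sentence and sentence->word mean aggregation on a
    # bipartite word-sentence graph. `edges` is a list of (word_idx, sent_idx).
    new_sent = sent_feats.copy()
    for s in range(len(sent_feats)):
        members = [w for w, si in edges if si == s]
        if members:
            new_sent[s] = (sent_feats[s] + word_feats[members].mean(axis=0)) / 2
    new_word = word_feats.copy()
    for w in range(len(word_feats)):
        homes = [si for wi, si in edges if wi == w]
        if homes:
            new_word[w] = (word_feats[w] + sent_feats[homes].mean(axis=0)) / 2
    return new_word, new_sent
```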
Citations: 0
Learning domain-adaptive palmprint anti-spoofing feature from multi-source domains
IF 3.7 · CAS Q2 · Engineering & Technology
Displays Pub Date: 2024-11-16 DOI: 10.1016/j.displa.2024.102871
Chengcheng Liu, Huikai Shao, Dexing Zhong
Palmprint anti-spoofing is essential for securing palmprint recognition systems. Although some anti-spoofing methods excel on closed datasets, their ability to generalize to unknown domains is often limited. This paper introduces the Domain-Adaptive Palmprint Anti-Spoofing Network (DAPANet), which leverages multiple known spoofing domains to extract domain-invariant spoofing clues from unlabeled domains. DAPANet tackles the domain adaptation challenge with three strategies: global domain alignment, subdomain alignment, and separation of distinct subdomains. The framework consists of a public feature extraction module, a domain adaptation module, a domain classifier, and a fusion classifier. The public feature extraction module first extracts palmprint features; the domain adaptation module then aligns target-domain features with source-domain features to produce domain-specific outputs; the domain classifier provides initial classifiable features, which DAPANet integrates through a unified fusion classifier for decision-making. Comprehensive experiments on the XJTU-PalmReplay database across various cross-domain scenarios confirm the efficacy of the proposed method.
Displays, Volume 86, Article 102871.
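A common way to implement the "global domain alignment" the abstract mentions is to minimize the maximum mean discrepancy (MMD) between feature distributions. DAPANet's actual alignment loss is not specified here; this RBF-kernel MMD is a generic illustration.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    # Squared MMD between samples X (n x d) and Y (m x d) under an RBF kernel.
    # Zero when the two feature distributions coincide; larger when they drift.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

Used as a training loss, the gradient of this quantity pulls target-domain features toward the source-domain feature distribution.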
Citations: 0