Displays — Latest Articles

DRIGNet: Low-light image enhancement based on dual-range information guidance
IF 3.7 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-07-26 | DOI: 10.1016/j.displa.2025.103163
Feng Huang, Jiong Huang, Jing Wu, Jianhua Lin, Jing Guo, Yunxiang Li, Zhewei Liu
Abstract: The task of low-light image enhancement aims to reconstruct details and visual information from degraded low-light images. However, existing deep learning methods for feature processing usually lack feature differentiation or fail to handle differentiated features reasonably, which can limit the quality of the enhanced images, leading to issues such as color distortion and blurred details. To address these limitations, we propose the Dual-Range Information Guidance Network (DRIGNet). Specifically, we develop an efficient U-shaped architecture, the Dual-Range Information Guided Framework (DGF). DGF decouples traditional image features into dual-range information while integrating stage-specific feature properties with the proposed dual-range information. We design the Global Dynamic Enhancement Module (GDEM) using channel interaction and the Detail Focus Module (DFM) with a three-directional filter, both embedded in DGF to model long-range and short-range features respectively. Additionally, we introduce a feature fusion strategy, the Attention-Guided Fusion Module (AGFM), which merges dual-range information to facilitate complementary enhancement. In the encoder, DRIGNet extracts coherent long-range information and enhances the global structure of the image; in the decoder, it captures short-range information and fuses dual-range information to restore detailed areas. Finally, extensive quantitative and qualitative experiments demonstrate that the proposed DRIGNet outperforms current state-of-the-art (SOTA) methods across ten datasets.
Citations: 0
A halftone image quality assessment method based on gradient and texture
IF 3.4 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-07-26 | DOI: 10.1016/j.displa.2025.103165
Xinhong Zhang, Jiayin Zhao, Fan Zhang
Abstract: Digital halftoning is an important screening technique in digital printing and publishing. However, traditional image quality assessment (IQA) methods are not fully applicable to the quality assessment of halftone images. This paper proposes a gradient- and texture-based quality assessment method for halftone images, PGT-SSIM (Partitioned Gradient and Texture Structural Similarity). The proposed method builds on the partitioning concept, extracts the gradient feature map of the image, incorporates texture feature differences, and finally applies the SSIM formula for weighted scoring to derive the final quality score. Experimental results demonstrate that the proposed method achieves higher accuracy and better alignment with human subjective perception than existing approaches: its evaluation indices are significantly higher than those of SSIM, and its PLCC is 11% higher than that of Partition SSIM. Furthermore, the proposed halftone IQA method provides valuable insights for improving halftoning algorithms, making it a significant contribution to the field.
Citations: 0
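The abstract above outlines PGT-SSIM's recipe: gradient features plus texture differences, combined through an SSIM-style weighted score. The paper's exact partitioning scheme and weights are not given here, so the following is only a minimal illustrative sketch — the `w_grad`/`w_tex` weights and the global (unpartitioned) statistics are assumptions, not the published formulation:

```python
import numpy as np

def gradient_magnitude(img):
    # Forward-difference gradients in x and y, padded to keep shape.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.sqrt(gx**2 + gy**2)

def ssim_like(a, b, c=1e-4):
    # Global SSIM-style similarity between two feature maps.
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c) * (2 * cov + c)) / \
           ((mu_a**2 + mu_b**2 + c) * (va + vb + c))

def pgt_ssim_sketch(ref, halftone, w_grad=0.6, w_tex=0.4):
    # Weighted combination of gradient-map similarity and a crude
    # texture (zero-mean intensity) similarity, standing in for the
    # paper's partitioned weighting scheme.
    g_sim = ssim_like(gradient_magnitude(ref), gradient_magnitude(halftone))
    t_sim = ssim_like(ref - ref.mean(), halftone - halftone.mean())
    return w_grad * g_sim + w_tex * t_sim
```

An identical image pair scores 1.0, and the score drops as the halftone diverges from the reference, which is the qualitative behaviour the metric is built around.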
Omnidirectional image quality assessment with gated dual-projection fusion
IF 3.7 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-07-24 | DOI: 10.1016/j.displa.2025.103173
ChengZhi Xiao, RuiKang Yu
Abstract: Existing omnidirectional image quality assessment (OIQA) models typically rely on the equirectangular projection (ERP) or cubemap projection (CMP) of omnidirectional images as input. However, the deformation in ERP and the discontinuities at the boundaries of CMP limit the network's ability to represent image information, leading to information loss. It is therefore necessary to fuse these two projections to achieve a comprehensive feature representation. Current OIQA models integrate high-level features extracted from the different projection formats only at the last stage of the network, overlooking potential information loss at each stage within the network. To this end, we consider the respective strengths and weaknesses of the two projections and design a feature extraction and fusion module at each stage of the network to enhance the model's representation capability. Specifically, the ERP features are first decomposed into the two projection formats before being fed into each feature extraction stage of the network for separate processing. Subsequently, we introduce a gating mechanism and develop a Gated Dual-Projection Fusion module (GDPF) to interactively fuse the features computed from the ERP and CMP projection formats. GDPF allows the model to enhance critical information while filtering out deformation and discontinuity artifacts. The fused features are then input into the next stage, where the aforementioned operations are repeated. This process alleviates the representation issues caused by deformation in ERP and discontinuities in CMP, and the fused features are used for quality prediction. Experiments on three public datasets demonstrate the superior prediction accuracy of the proposed model.
Citations: 0
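The core gating idea — a learned sigmoid gate deciding, per feature, how much to trust the ERP branch versus the CMP branch — can be sketched in a few lines. The projection matrix `w`, bias `b`, and the plain matrix-product gate are illustrative assumptions, not the paper's actual GDPF architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(erp_feat, cmp_feat, w, b):
    # Gate computed from the concatenated features; the parameters
    # w (shape 2C x C) and b (shape C) stand in for a trained
    # projection in the real network.
    concat = np.concatenate([erp_feat, cmp_feat], axis=-1)
    gate = sigmoid(concat @ w + b)  # per-channel gate in (0, 1)
    # Convex blend: gate -> 1 favours ERP, gate -> 0 favours CMP.
    return gate * erp_feat + (1.0 - gate) * cmp_feat
```

With zero weights the gate is 0.5 everywhere and the fusion reduces to the average of the two branches; training would push the gate toward whichever projection is more reliable at each location.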
A joint learning framework for fake news detection
IF 3.7 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-07-23 | DOI: 10.1016/j.displa.2025.103154
Muhammad Abdullah, Zan Hongying, Arifa Javed, Orken Mamyrbayev, Fabio Caraffini, Hassan Eshkiki
Abstract: This paper presents a joint learning framework for fake news detection, introducing an Enhanced BERT model that integrates named entity recognition, relational feature classification, and stance detection through a unified multi-task approach. The model incorporates task-specific masking and hierarchical attention mechanisms to capture both fine-grained and high-level contextual relationships across headlines and body text. Cross-task consistency losses are applied to ensure coherence and alignment with external factual knowledge. We analyse the average distance from components to the centroid of a news sample to effectively differentiate genuine information from falsehoods in large-scale text data. Experiments on two FakeNewsNet datasets show that our framework outperforms state-of-the-art models, with accuracy improvements of 2.17% and 1.03%. These results indicate the potential for applications requiring detailed text processing, such as automatic summarisation and misinformation detection.
Citations: 0
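The centroid-distance analysis mentioned above (average distance from a sample's components to their centroid) reduces to a short computation. A sketch under the assumption that each component — headline, body, extracted entities — has already been mapped to an embedding vector; the embedding values below are made up for illustration:

```python
import numpy as np

def mean_centroid_distance(component_embeddings):
    # Average Euclidean distance from each component embedding to
    # the centroid of all components. A larger spread suggests the
    # parts of the article are mutually inconsistent.
    X = np.asarray(component_embeddings, dtype=float)
    centroid = X.mean(axis=0)
    return float(np.linalg.norm(X - centroid, axis=1).mean())
```

For two components at (0, 0) and (2, 0) the centroid is (1, 0) and the mean distance is 1.0; a threshold on this spread is one way such a score could feed a genuine-vs-fake decision.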
LR-Inst: A lightweight and robust instance segmentation network for apple detection in complex orchard environments
IF 3.7 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-07-21 | DOI: 10.1016/j.displa.2025.103156
Hengrong Guo, Hao Wan, Xilei Zeng, Han Zhang, Zeming Fan
Abstract: Apple instance segmentation is a critical task in the implementation of automated harvesting systems. Despite significant advances in instance segmentation, current methods remain impractical for deployment due to their architectural complexity and slow inference speeds. While lightweight models have been introduced to improve efficiency, their performance degrades in orchard environments under occlusion, fruit overlap, and varying lighting conditions. To address these challenges, we present LR-Inst, a lightweight and robust instance segmentation network. First, we design an innovative cross-level feature fusion architecture that exploits the rich spatial details and semantic information present in intermediate-layer features. Then, a set of efficient modules is designed to further boost feature representation, including the Spatial-Semantic Feature Fusion Module (SSFM), the Dynamic Spatial-Semantic Fusion Module (DSSFM), the Feature Aggregation and Shuffle Module (FASM), and the Channel-Spatial Attention Module (CSAM). Experimental results demonstrate that LR-Inst contains only 3.742M parameters and requires 8.581G FLOPs. When evaluated on our self-collected orchard dataset, LR-Inst achieves a detection average precision (AP) of 0.946 and a segmentation AP of 0.944, outperforming several state-of-the-art (SOTA) models.
Citations: 0
The application of VR in interior design education to enhance design effectiveness and student experience
IF 3.7 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-07-21 | DOI: 10.1016/j.displa.2025.103161
Pengjun Wu, Wencui Zhang, Peiyuan Li, Yao Liu
Abstract: As Interior Design Education (IDE) evolves to meet increasingly complex and diverse demands, traditional teaching methods face limitations in areas such as design presentation, teacher–student interaction, and spatial perception, often leading to reduced learning effectiveness. Virtual Reality (VR), with its immersive and interactive features, offers promising solutions to these challenges. This study developed a VR-based interior design education platform incorporating Level of Detail (LOD) technology to improve instructional precision and learning outcomes. To systematically evaluate teaching effectiveness, the study employed evaluation indicators grounded in the Technology Acceptance Model (TAM), emphasizing perceived usefulness and perceived ease of use as key dimensions influencing learners' acceptance of VR technology. Specifically, content comprehensiveness, visual clarity, and spatial understanding were selected as core evaluation metrics reflecting these TAM constructs. An experimental comparison with traditional teaching methods assessed these dimensions. Results showed the VR-based approach significantly outperformed traditional methods, with higher average scores in comprehensiveness (90.68 ± 4.00 vs. 82.35 ± 2.20), visibility (91.08 ± 4.11 vs. 83.66 ± 3.85), and spatial effects (92.98 ± 3.22 vs. 85.64 ± 3.96). These findings highlight the advantages of LOD-enhanced VR teaching in improving clarity and interaction efficiency. Focus group interviews further confirmed its effectiveness in enhancing students' understanding and communication.
Citations: 0
Orthogonal translation computed laminography reconstruction based on self-prior information and adaptive weighted total variation
IF 3.7 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-07-20 | DOI: 10.1016/j.displa.2025.103169
Chuandong Tan, Chao Long, Yarui Xi, Zhiting Chen, Xinxin Lin, Fenglin Liu, Yufang Cai, Liming Duan
Abstract: Orthogonal translation computed laminography (OTCL) provides an effective non-destructive testing method for plate-like objects. Nevertheless, OTCL images suffer from aliasing artifacts due to the inherent incompleteness of projection data, negatively impacting flaw characterization, dimensional metrology, and failure analysis. To reveal the cause of these aliasing artifacts, we analyze the three-dimensional frequency-domain characteristics of OTCL. We further propose a novel reconstruction algorithm to mitigate them, termed self-prior information guidance and adaptive weighted total variation constraint (SPIG-AwTV). SPIG-AwTV comprises two components: a self-prior information guidance (SPIG) regularization term and an adaptive weighted total variation (AwTV) regularization term. Specifically, the self-prior is derived from the filtered backprojection reconstruction via contour extraction and masking, while the AwTV regularization term is tailored to the gradient features of OTCL images in different directions. Experimental results demonstrate that SPIG-AwTV outperforms existing methods in suppressing aliasing artifacts, preserving edges, and achieving higher-quality OTCL images.
Citations: 0
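Adaptive weighted TV schemes typically down-weight the penalty where gradients are strong, so that true edges survive regularization while smooth-region artifacts are suppressed. A minimal sketch of that idea follows; the Gaussian weighting function and `delta` scale are assumptions, not SPIG-AwTV's exact direction-dependent formula:

```python
import numpy as np

def awtv(u, delta=0.1):
    # Forward-difference gradients of the image u, padded to keep shape.
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(gx**2 + gy**2)
    # Edge-aware weights: near 1 in smooth regions, near 0 at strong
    # edges, so edges are penalized less and therefore preserved.
    w = np.exp(-((mag / delta) ** 2))
    return float((w * mag).sum())
```

A constant image has zero penalty, and a sharp step contributes almost nothing because its large gradients are down-weighted — exactly the edge-preserving behaviour a reconstruction solver would exploit when minimizing this term.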
DFF-Mono: A lightweight self-supervised monocular depth estimation method based on dual-branch feature fusion
IF 3.7 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-07-20 | DOI: 10.1016/j.displa.2025.103167
Han Zhang, Xiaojun Yu, Hengrong Guo, Liang Shen, Zeming Fan
Abstract: Monocular depth estimation is one of the fundamental challenges in 3D scene understanding, particularly under unsupervised learning paradigms. While existing self-supervised methods avoid the dependency on annotated depth labels, their high computational complexity significantly hinders deployment on resource-constrained mobile platforms. To address this issue, we propose a parameter-efficient framework, DFF-Mono, that jointly optimizes depth estimation accuracy and computational efficiency. The framework incorporates three main components: a lightweight encoder that integrates Dual-Kernel Dilated Convolution (DKDC) modules with a Dual-branch Feature Fusion (DFF) architecture for multi-scale feature encoding; a novel Attention-guided Large Kernel Inception (ALKI) module with multi-branch large-kernel convolution that leverages local–global attention guidance for efficient local feature extraction; and a frequency-domain optimization strategy that enhances training efficiency via adaptive Gaussian low-pass filtering, without introducing any additional network parameters. Extensive experiments verify the effectiveness of the proposed method, and the results demonstrate that DFF-Mono is superior to existing approaches across standard benchmarks. Notably, DFF-Mono reduces model parameters by 23% compared to current state-of-the-art solutions while consistently achieving superior depth accuracy.
Citations: 0
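The frequency-domain strategy described above — Gaussian low-pass filtering that adds no learned parameters — can be sketched with a plain FFT. The fixed `sigma` cutoff here is an illustrative stand-in for the paper's adaptive choice:

```python
import numpy as np

def gaussian_lowpass(img, sigma=0.2):
    # FFT of the image, shifted so the DC term sits in the centre.
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy = np.linspace(-0.5, 0.5, h, endpoint=False)
    fx = np.linspace(-0.5, 0.5, w, endpoint=False)
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    # Gaussian mask: 1 at DC, decaying toward high spatial frequencies.
    mask = np.exp(-(FX**2 + FY**2) / (2 * sigma**2))
    filtered = np.fft.ifft2(np.fft.ifftshift(F * mask))
    return np.real(filtered)
```

A constant image passes through unchanged (only the DC term carries energy), while high-frequency noise is attenuated — the smoothing effect such a filter would contribute during training, at zero parameter cost.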
Using deep learning predictions to study the development of drawing behaviour in children
IF 3.7 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-07-20 | DOI: 10.1016/j.displa.2025.103166
Benjamin Beltzung, Marie Pelé, Lison Martinet, Elliot Maitre, Jimmy Falck, Cédric Sueur
Abstract: Drawing behaviour in children provides a unique window into their cognitive development. This study uses Convolutional Neural Networks (CNNs) to examine cognitive development in children's drawing behaviour by analysing 386 drawings from 193 participants, comprising 150 children aged 2–10 years and 43 adults from France. CNN models, enhanced by Bayesian optimization, were trained to categorize drawings into ten age groups and to compare children's drawings with adults'. Results showed that model accuracy increases with the child's age, reflecting improvement in drawing skills. Techniques such as Grad-CAM and Captum offered insights into the key features recognized by the CNNs, illustrating the potential of deep learning in evaluating developmental milestones, with significant implications for educational psychology and developmental diagnostics.
Citations: 0
Multi-view stereo with cross-scale feature fusion strategy and hybrid depth estimation
IF 3.7 | CAS Q2 | Engineering & Technology
Displays | Pub Date: 2025-07-19 | DOI: 10.1016/j.displa.2025.103128
Yunxin Ye, Feng Shao, Hangwei Chen, Xiongli Chai, Xiaolong Tang
Abstract: In multi-view stereo (MVS) 3D reconstruction, existing methods often face challenges such as insufficient feature representation in weakly textured areas, assumptions of equal view contributions, and limited depth estimation accuracy, leading to incomplete reconstruction results. To address these issues, we propose a multi-view stereo method integrating a cross-scale feature fusion strategy and hybrid depth estimation (CH-MVSNet), aimed at improving the precision and completeness of MVS reconstruction. Our approach introduces a multi-scale feature enhancement module (MFEM), which combines channel attention mechanisms with multi-scale feature fusion to enhance features from source and reference images, improving intra-image contextual information and inter-image feature relationships. We then propose a weighted view cost volume module (WVCM), which calculates weighted view correlations to construct a more precise cost volume, further improving reconstruction accuracy. Finally, we incorporate an RGB-guided hybrid depth estimation module (RHDE), which combines classification and regression methods for depth estimation, utilizing RGB information from reference images to optimize depth map precision. Through rigorous testing on the DTU dataset and the Tanks and Temples benchmark, our method demonstrates significant improvements in reconstruction accuracy and completeness.
Citations: 0