Displays · Pub Date: 2024-10-24 · DOI: 10.1016/j.displa.2024.102866
Mengfan Lv, Xiwu Shang, Jiajia Wang, Guoping Li, Guozhong Wang
{"title":"DCMR: Degradation compensation and multi-dimensional reconstruction based pre-processing for video coding","authors":"Mengfan Lv, Xiwu Shang, Jiajia Wang, Guoping Li, Guozhong Wang","doi":"10.1016/j.displa.2024.102866","DOIUrl":"10.1016/j.displa.2024.102866","url":null,"abstract":"<div><div>The rapid growth of video data poses a serious challenge to the limited bandwidth. Video coding pre-processing technology can remove coding noise without changing the architecture of the codec. Therefore, it can improve the coding efficiency while ensuring a high degree of compatibility with existing codec. However, the existing pre-processing methods have the problem of feature redundancy, and lack an effective mechanism to recover high-frequency details. In view of these problems, we propose a Degradation Compensation and Multi-dimensional Reconstruction (DCMR) pre-processing method for video coding to improve compression efficiency. Firstly, we develop a degradation compensation model, which aims at filtering the coding noise in the original video and relieving the frame quality degradation caused by transmission. Secondly, we construct a lightweight multi-dimensional feature reconstruction network, which combines residual learning and feature distillation. It aims to enhance and refine the key features related to coding from both spatial and channel dimensions while suppressing irrelevant features. In addition, we design a weighted guided image filter detail enhancement convolution module, which is specifically used to recover the high-frequency details lost in the denoising process. Finally, we introduce an adaptive discrete cosine transform loss to balance coding efficiency and quality. Experimental results demonstrate that compared with the original codec H.266/VVC, the proposed DCMR can achieve BD-rate (VMAF) and BD-rate (MOS) gains by 21.62% and 12.99% respectively on VVC, UVG, and MCL-JCV datasets.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102866"},"PeriodicalIF":3.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142551869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2024-10-23 · DOI: 10.1016/j.displa.2024.102863
Jiale Chen, Qiusheng Lian, Baoshun Shi
{"title":"BGFlow: Brightness-guided normalizing flow for low-light image enhancement","authors":"Jiale Chen, Qiusheng Lian, Baoshun Shi","doi":"10.1016/j.displa.2024.102863","DOIUrl":"10.1016/j.displa.2024.102863","url":null,"abstract":"<div><div>Low-light image enhancement poses significant challenges due to its ill-posed nature. Recently, deep learning-based methods have attempted to establish a unified mapping relationship between normal-light images and their low-light versions but frequently struggle to capture the intricate variations in brightness conditions. As a result, these methods often suffer from overexposure, underexposure, amplified noise, and distorted colors. To tackle these issues, we propose a brightness-guided normalizing flow framework, dubbed BGFlow, for low-light image enhancement. Specifically, we recognize that low-frequency sub-bands in the wavelet domain carry significant brightness information. To effectively capture the intricate variations in brightness within an image, we design a transformer-based multi-scale wavelet-domain encoder to extract brightness information from the multi-scale features of the low-frequency sub-bands. The extracted brightness feature maps, at different scales, are then injected into the brightness-guided affine coupling layer to guide the training of the conditional normalizing flow module. Extensive experimental evaluations demonstrate the superiority of BGFlow over existing deep learning-based approaches in both qualitative and quantitative assessments. Moreover, we also showcase the exceptional performance of BGFlow on the underwater image enhancement task.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102863"},"PeriodicalIF":3.7,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142551870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2024-10-19 · DOI: 10.1016/j.displa.2024.102861
Sanghyeon Kim, Uijong Ju
{"title":"Dynamic assessment of visual fatigue during video watching: Validation of dynamic rating based on post-task ratings and video features","authors":"Sanghyeon Kim, Uijong Ju","doi":"10.1016/j.displa.2024.102861","DOIUrl":"10.1016/j.displa.2024.102861","url":null,"abstract":"<div><div>People watching video displays for long durations experience visual fatigue and other symptoms associated with visual discomfort. Fatigue-reduction techniques are often applied but may potentially degrade the immersive experience. To appropriately adjust fatigue-reduction techniques, the changes in visual fatigue over time should be analyzed which is crucial for the appropriate adjustment of fatigue-reduction techniques. However, conventional methods used for assessing visual fatigue are inadequate because they rely entirely on post-task surveys, which cannot easily determine dynamic changes. This study employed a dynamic assessment method for evaluating visual fatigue in real-time. Using a joystick, participants continuously evaluated subjective fatigue whenever they perceived changes. A Simulator Sickness Questionnaire (SSQ) validated the results, which indicated significant correlations between dynamic assessments and the SSQ across five items associated with symptoms associated with visual discomfort. Furthermore, we explored the potential relationship between dynamic visual fatigue and objective video features, e.g., optical flow and the V-values of the hue/saturation value (HSV) color space, which represent the motion and brightness of the video. The results revealed that dynamic visual fatigue significantly correlated with both the optical flow and the V-value. Moreover, based on machine learning models, we determined that the changes in visual fatigue can be predicted based on the optical flow and V-value. Overall, the results validate that dynamic assessment methods can form a reliable baseline for real-time prediction of visual fatigue.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102861"},"PeriodicalIF":3.7,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142527918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2024-10-19 · DOI: 10.1016/j.displa.2024.102860
Ho Sub Lee, Sung In Cho
{"title":"Multi-scale attention in attention neural network for single image deblurring","authors":"Ho Sub Lee , Sung In Cho","doi":"10.1016/j.displa.2024.102860","DOIUrl":"10.1016/j.displa.2024.102860","url":null,"abstract":"<div><div>Image deblurring, which eliminates blurring artifacts to recover details from a given input image, represents an important task for the computer vision field. Recently, the attention mechanism with deep neural networks (DNN) demonstrates promising performance of image deblurring. However, they have difficulty learning complex blurry and sharp relationships through a balance of spatial detail and high-level contextualized information. Moreover, most existing attention-based DNN methods fail to selectively exploit the information from attention and non-attention branches. To address these challenges, we propose a new approach called Multi-Scale Attention in Attention (MSAiA) for image deblurring. MSAiA incorporates dynamic weight generation by leveraging the joint dependencies of channel and spatial information, allowing for adaptive changes to the weight values in attention and non-attention branches. In contrast to existing attention mechanisms that primarily consider channel or spatial dependencies and do not adequately utilize the information from attention and non-attention branches, our proposed AiA design combines channel-spatial attention. This attention mechanism effectively utilizes the dependencies between channel-spatial information to allocate weight values for attention and non-attention branches, enabling the full utilization of information from both branches. Consequently, the attention branch can more effectively incorporate useful information, while the non-attention branch avoids less useful information. Additionally, we employ a novel multi-scale neural network that aims to learn the relationships between blurring artifacts and the original sharp image by further exploiting multi-scale information. The experimental results prove that the proposed MSAiA achieves superior deblurring performance compared with the state-of-the-art methods.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102860"},"PeriodicalIF":3.7,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142527921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2024-10-19 · DOI: 10.1016/j.displa.2024.102857
Yuan Zhang, Zixi Wang, Xiaodi Guan, Lijun He, Fan Li
{"title":"Private compression for intermediate feature in IoT-supported mobile cloud inference","authors":"Yuan Zhang , Zixi Wang , Xiaodi Guan , Lijun He , Fan Li","doi":"10.1016/j.displa.2024.102857","DOIUrl":"10.1016/j.displa.2024.102857","url":null,"abstract":"<div><div>In the emerging Internet of Things (IoT) paradigm, mobile cloud inference serves as an efficient application framework that relieves the computation and storage burden on resource-constrained mobile devices by offloading the workload to cloud servers. However, mobile cloud inference encounters computation, communication, and privacy challenges to ensure efficient system inference and protect the privacy of mobile users’ collected information. To address the deployment of deep neural networks (DNN) with large capacity, we propose splitting computing (SC) where the entire model is divided into two parts, to be executed on mobile and cloud ends respectively. However, the transmission of intermediate data poses a bottleneck to system performance. This paper initially demonstrates the privacy issue arising from the machine analysis-oriented intermediate feature. We conduct a preliminary experiment to intuitively reveal the latent potential for enhancing the privacy-preserving ability of the initial feature. Motivated by this, we propose a framework for privacy-preserving intermediate feature compression, which addresses the limitations in both compression and privacy that arise in the original extracted feature data. Specifically, we propose a method that jointly enhances privacy and encoding efficiency, achieved through the collaboration of the encoding feature privacy enhancement module and the privacy feature ordering enhancement module. Additionally, we develop a gradient-reversal optimization strategy based on information theory to ensure the utmost concealment of core privacy information throughout the entire codec process. We evaluate the proposed method on two DNN models using two datasets, demonstrating its ability to achieve superior analysis accuracy and higher privacy preservation than HEVC. Furthermore, we provide an application case of a wireless sensor network to validate the effectiveness of the proposed method in a real-world scenario.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102857"},"PeriodicalIF":3.7,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142527919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2024-10-19 · DOI: 10.1016/j.displa.2024.102864
Linlin Wang, Yixuan Zou, Haiyan Wang, Chengqi Xue
{"title":"Icon similarity model based on cognition and deep learning","authors":"Linlin Wang, Yixuan Zou, Haiyan Wang, Chengqi Xue","doi":"10.1016/j.displa.2024.102864","DOIUrl":"10.1016/j.displa.2024.102864","url":null,"abstract":"<div><div>Human-computer cooperation guided by natural interaction, intelligent interaction, and human–computer integration is gradually becoming a new trend in human–computer interfaces. An icon is an indispensable pictographic symbol in an interface that can convey pivotal semantics between humans and computers. Research on similar icons’ cognition in humans and the discrimination of computers can reduce misunderstandings and facilitate transparent cooperation. Therefore, this research focuses on images of icons, extracted contours, and four features, including the curvature, proportion, orientation, and line of the contour, step by step. By manipulating the feature value change to obtain 360 similar icons, a cognitive experiment was conducted with 25 participants to explore the boundary values of the feature dimensions that cause different levels of similarity. Its boundary values were applied to deep learning to train a discrimination algorithm model that included 1500 similar icons. This dataset was used to train a Siamese neural network using a 16-layer network branch of a visual geometry group. The training process used stochastic gradient descent. This method of combining human cognition and deep learning technology is meaningful for establishing a consensus on icon semantics, including content and emotions, by outputting similarity levels and values. Taking icon similarity discrimination as an example, this study explored the analysis and simulation methods of computer vision for human visual cognition. The accuracy evaluated is 90.82%. The precision was evaluated as 90% for high, 80.65% for medium, and 97.30% for low. Recall was evaluated as 100% for high, 89.29% for medium, and 83.72% for low. It has been verified that it can compensate for fuzzy cognition in humans and enable computers to cooperate efficiently.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102864"},"PeriodicalIF":3.7,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2024-10-19 · DOI: 10.1016/j.displa.2024.102867
Wuchao Li, Tongyin Yang, Pinhao Li, Xinfeng Liu, Shasha Zhang, Jianguo Zhu, Yuanyuan Pei, Yan Zhang, Tijiang Zhang, Rongpin Wang
{"title":"Multicenter evaluation of CT deep radiomics model in predicting Leibovich score risk groups for non-metastatic clear cell renal cell carcinoma","authors":"Wuchao Li , Tongyin Yang , Pinhao Li , Xinfeng Liu , Shasha Zhang , Jianguo Zhu , Yuanyuan Pei , Yan Zhang , Tijiang Zhang , Rongpin Wang","doi":"10.1016/j.displa.2024.102867","DOIUrl":"10.1016/j.displa.2024.102867","url":null,"abstract":"<div><h3>Background</h3><div>Non-metastatic clear cell renal cell carcinoma (nccRCC) poses a significant risk of postoperative recurrence and metastasis, underscoring the importance of accurate preoperative risk assessment. While the Leibovich score is effective, it relies on postoperative histopathological data. This study aims to evaluate the efficacy of CT radiomics and deep learning models in predicting Leibovich score risk groups in nccRCC, and to explore the interrelationship between CT and pathological features.</div></div><div><h3>Patients and Methods</h3><div>This research analyzed 600 nccRCC patients from four datasets, dividing them into low (Leibovich scores of 0–2) and intermediate to high risk (Leibovich scores exceeding 3) groups. Radiological model was developed from CT subjective features, and radiomics and deep learning models were constructed from CT images. Additionally, a deep radiomics model using radiomics and deep learning features was developed, alongside a fusion model incorporating all feature types. Model performance was assessed by AUC values, while survival differences across predicted groups were analyzed using survival curves and the log-rank test. Moreover, the research investigated the interrelationship between CT and pathological features derived from whole-slide pathological images.</div></div><div><h3>Results</h3><div>Within the training dataset, four radiological, three radiomics, and thirteen deep learning features were selected to develop models predicting nccRCC Leibovich score risk groups. The deep radiomics model demonstrated superior predictive accuracy, evidenced by AUC values of 0.881, 0.829, and 0.819 in external validation datasets. Notably, significant differences in overall survival were observed among patients classified by this model (log-rank test p < 0.05 across all datasets). Furthermore, a correlation and complementarity were observed between CT deep radiomics features and pathological deep learning features.</div></div><div><h3>Conclusions</h3><div>The CT deep radiomics model precisely predicts nccRCC Leibovich score risk groups preoperatively and highlights the synergistic effect between CT and pathological data.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102867"},"PeriodicalIF":3.7,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142571782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2024-10-18 · DOI: 10.1016/j.displa.2024.102856
Li Wang, Cong Shi, Shrinivas Pundlik, Xu Yang, Liyuan Liu, Gang Luo
{"title":"LLD-GAN: An end-to-end network for low-light image demosaicking","authors":"Li Wang , Cong Shi , Shrinivas Pundlik , Xu Yang , Liyuan Liu , Gang Luo","doi":"10.1016/j.displa.2024.102856","DOIUrl":"10.1016/j.displa.2024.102856","url":null,"abstract":"<div><div>Demosaicking of low and ultra-low light images has wide applications in the fields of consumer electronics, security, and industrial machine vision. Denoising is a challenge in the demosaicking process. This study introduces a comprehensive end-to-end low-light demosaicking framework called LLD-GAN (Low Light Demosaicking Generative Adversarial Network), which greatly reduces the computational complexity. Our architecture employs a Wasserstein GAN framework enhanced by a gradient penalty mechanism. We have redesigned the generator based on the UNet++ network as well as its corresponding discriminator, which makes the model learning more efficient. In addition, we propose a new loss metric grounded in the principles of perceptual loss to obtain images with better visual quality. The contribution of Wasserstein GAN with gradient penalty and perceptual loss function was proved to be beneficial by our ablation experiments. For RGB images, we tested the proposed model under a wide range of low light levels, from 1/30 to 1/150 of normal light level, for 16-bit images with added noise. For actual low-light raw sensor images, the model was evaluated under three distinct lighting conditions: 1/100, 1/250, and 1/300 of normal exposure. The qualitative and quantitative comparison against advanced techniques demonstrates the validity and superiority of the LLD-GAN as a unified denoising-demosaicking tool.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102856"},"PeriodicalIF":3.7,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142527920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2024-10-18 · DOI: 10.1016/j.displa.2024.102858
Chengxu Zhou, Yanlin Jiang, Hongyan Liu, Jingchao Cao, Ke Gu
{"title":"Subjective and objective quality evaluation for industrial images","authors":"Chengxu Zhou , Yanlin Jiang , Hongyan Liu , Jingchao Cao , Ke Gu","doi":"10.1016/j.displa.2024.102858","DOIUrl":"10.1016/j.displa.2024.102858","url":null,"abstract":"<div><div>Recently, the demand for ever-better image processing technique continues to grow in field of industrial scenario monitoring and industrial process inspection. The subjective and objective quality evaluation of industrial images are vital for advancing the development of industrial visual perception and enhancing the quality of industrial image/video processing applications. However, the scarcity of publicly available industrial image databases with reliable subjective scores restricts the development of industrial image quality evaluation (IIQE). In preparation for a vacancy, this article first establishes two industrial image databases (i.e., industrial scenario image dataset (ISID) and industrial process image dataset (IPID)) for assessing IIQE metrics. Furthermore, in order to avoid overwhelming industrial image nuances due to the wavelet subband summation, we then present a novel industrial application subband information fidelity standard (SIFS) evaluation method using the channel capacity of visual signals in wavelet domain. Specifically, we first build a visual signals channel model based on perception process from human eyes to brain. Second, we compute and compare the channel capacity for reference and distorted images to measure the information fidelity in each wavelet subband. Third, we sum over the subbands for information fidelity ratio to obtain the overall quality score. Finally, we fairly compare some up-to-date and our proposed image quality evaluation (IQE) methods in two novelty industrial datasets respectively. Our ISID and IPID datasets are capable of evaluating most IQE metrics comprehensively and paves the way for further research on IIQE. Our SIFS model show a remarkable performance comparing with other up-to-date IQE methods.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102858"},"PeriodicalIF":3.7,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142571780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2024-10-17 · DOI: 10.1016/j.displa.2024.102845
Fei Li, Chang Liu, Xiaomao Li
{"title":"Underwater image enhancement with zero-point symmetry prior and reciprocal mapping","authors":"Fei Li , Chang Liu , Xiaomao Li","doi":"10.1016/j.displa.2024.102845","DOIUrl":"10.1016/j.displa.2024.102845","url":null,"abstract":"<div><div>Images captured underwater typically exhibit color distortion, low brightness, and pseudo-haze due to light absorption and scattering. These degradations limit underwater image display and analysis, and still challenge the performance of current methods. To overcome these drawbacks, we propose a targeted and systematic method. Specifically, based on a key observation and extensive statistical analysis, we develop a Zero-Point Symmetry Prior (ZPSP): the histograms of channels a and b in the Lab color space, for color-balanced images, exhibit a symmetry distribution around the zero-point. Guided by the ZPSP, a Color Histogram Symmetry (CHS) method is proposed to balance color differences between channels a and b by ensuring they adhere to ZPSP. For channel L, a Reciprocal Mapping (RM) method is proposed to remove pseudo-haze and improve brightness, by aligning its reflectance and illumination components with the Dark Channel Prior (DCP) and Bright Channel Prior (BCP), respectively. Relatedly, it employs a divide-and-conquer strategy, distinguishing underwater image degradations in decomposed sub-images and tackling them individually. Notably, the above-proposed methods are integrated into a systematic enhancement framework, while focusing on targeted optimization for each type of degradation. Benefiting from the proposed strategy and methods, various degradations are individually optimized and mutually promoted, consistently producing visually pleasing results. Comprehensive experiments demonstrate that the proposed method exhibits remarkable performance on various underwater image datasets and applications, also showing good generalization ability. The code is available at: <span><span>https://github.com/CN-lifei/ZSRM</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102845"},"PeriodicalIF":3.7,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142527923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}