Displays: Latest Articles

Reinforcement learning path planning method incorporating multi-step Hindsight Experience Replay for lightweight robots
IF 3.7 | CAS Zone 2 | Engineering Technology
Displays Pub Date: 2024-07-14 DOI: 10.1016/j.displa.2024.102796
Jiaqi Wang, Huiyan Han, Xie Han, Liqun Kuang, Xiaowen Yang
Home service robots prioritize cost-effectiveness and convenience over the precision required for industrial tasks such as autonomous driving, which makes their tasks easier to execute. Meanwhile, path planning with deep reinforcement learning (DRL) is typically a sparse-reward problem with limited data utilization: meaningful rewards are hard to obtain during training, so training is slow or difficult. In response, this paper introduces a lightweight end-to-end path planning algorithm employing hindsight experience replay (HER). First, we optimize the reinforcement learning training process from scratch and map the complex high-dimensional action and state spaces to representative low-dimensional ones. At the same time, we restructure the network to decouple the navigation and obstacle-avoidance modules, meeting the lightweight requirement. We then integrate HER and curriculum learning (CL) to address inefficient training. Additionally, we propose a multi-step hindsight experience replay (MS-HER) tailored to the path planning task, markedly enhancing both training efficiency and model generalization across diverse environments. To substantiate the improved training efficiency, we ran tests in diverse Gazebo simulation environments; the results show noteworthy gains in critical metrics, including success rate and training efficiency. To further assess generalization, we evaluate the model in "never-before-seen" simulation environments. Finally, we deploy the trained model on a real lightweight robot; the model successfully executes the path planning task even with constrained computational resources.
(Displays, vol. 84, Article 102796)
Citations: 0
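As a rough illustration of the multi-step hindsight idea, the sketch below relabels each transition's goal with the state actually reached a few steps later, so even failed episodes yield reward signal. The `Transition` layout, the `relabel_multistep` name, and the sparse 0/-1 reward are our own simplifications, not the paper's MS-HER implementation (which additionally integrates curriculum learning):

```python
from collections import namedtuple

Transition = namedtuple("Transition", "state action reward next_state goal")

def relabel_multistep(episode, n_steps=3):
    """Relabel each transition's goal with the state actually reached
    n steps later (clipped to the episode end), turning unsuccessful
    episodes into useful training data for goal-conditioned RL."""
    relabeled = []
    for t, tr in enumerate(episode):
        # achieved state n steps ahead becomes the hindsight goal
        future = episode[min(t + n_steps, len(episode) - 1)].next_state
        # sparse reward: 0 when the hindsight goal is reached, -1 otherwise
        reward = 0.0 if tr.next_state == future else -1.0
        relabeled.append(Transition(tr.state, tr.action, reward,
                                    tr.next_state, future))
    return relabeled
```

The relabeled transitions would be pushed into the replay buffer alongside the originals.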
Reduction of short-time image sticking in organic light-emitting diode display through transient analysis of low-temperature polycrystalline silicon thin-film transistor
IF 3.7 | CAS Zone 2 | Engineering Technology
Displays Pub Date: 2024-07-09 DOI: 10.1016/j.displa.2024.102794
Jiwook Hong, Jaewon Lim, Jongwook Jeon
Accurate compensation operation of the low-temperature polycrystalline-silicon (LTPS) thin-film transistor (TFT) in pixel circuits is crucial to achieving steady and uniform luminance in organic light-emitting diode (OLED) display panels. However, the device characteristics fluctuate over time due to various traps in the LTPS TFT and at its interface with the gate insulator, producing abnormal phenomena such as short-time image sticking and luminance fluctuation that degrade display quality during image changes. Considering these phenomena, transient analysis was conducted through device simulation to optimize the pixel compensation circuit. In particular, we analyzed the behavior of traps within the LTPS TFT in correlation with compensation-circuit operation and, based on this, proposed a methodology for designing a reset-voltage scheme for the driver TFT that reduces image sticking.
(Displays, vol. 84, Article 102794; open-access PDF: https://www.sciencedirect.com/science/article/pii/S0141938224001586/pdfft?md5=af589a6e358a315d9e0495f42299ea93&pid=1-s2.0-S0141938224001586-main.pdf)
Citations: 0
MSAug: Multi-Strategy Augmentation for rare classes in semantic segmentation of remote sensing images
IF 3.7 | CAS Zone 2 | Engineering Technology
Displays Pub Date: 2024-07-08 DOI: 10.1016/j.displa.2024.102779
Zhi Gong, Lijuan Duan, Fengjin Xiao, Yuxi Wang
Recently, remote sensing images have been widely used in many scenarios and have gradually become a focus of social attention. Nevertheless, the limited annotation of scarce classes severely reduces segmentation performance, a phenomenon that is especially prominent in remote sensing image segmentation. Given this, we focus on image fusion and model feedback, proposing a multi-strategy method called MSAug to address the class imbalance problem in remote sensing. Firstly, we crop rare-class images multiple times based on prior knowledge at the image-patch level to provide more balanced samples. Secondly, we design an adaptive image enhancement module at the model-feedback level to accurately classify rare classes at each stage and dynamically paste and mask different classes to further improve the model's recognition capability. MSAug is highly flexible and plug-and-play. Experimental results on remote sensing image segmentation datasets show that adding MSAug to any remote sensing semantic segmentation network brings varying degrees of performance improvement.
(Displays, vol. 84, Article 102779)
Citations: 0
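The patch-level rebalancing step can be sketched as oversampled crops centred on rare-class pixels. Everything below (`rare_class_crops` and its parameters) is our own minimal stand-in for the idea, not MSAug's actual interface:

```python
import numpy as np

def rare_class_crops(image, mask, rare_id, patch=64, n_crops=4, rng=None):
    """Return up to n_crops (image, mask) patches, each centred on a
    randomly chosen rare-class pixel, to rebalance the training batch."""
    rng = rng or np.random.default_rng(0)
    ys, xs = np.nonzero(mask == rare_id)
    if len(ys) == 0:
        return []  # this tile contains no rare-class pixels
    crops, half = [], patch // 2
    h, w = mask.shape
    for i in rng.choice(len(ys), size=min(n_crops, len(ys)), replace=False):
        # clamp the centre so the patch stays inside the image;
        # the chosen pixel still falls within the clamped window
        cy = int(np.clip(ys[i], half, h - half))
        cx = int(np.clip(xs[i], half, w - half))
        sl = (slice(cy - half, cy + half), slice(cx - half, cx + half))
        crops.append((image[sl], mask[sl]))
    return crops
```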
ADS-VQA: Adaptive sampling model for video quality assessment
IF 3.7 | CAS Zone 2 | Engineering Technology
Displays Pub Date: 2024-07-04 DOI: 10.1016/j.displa.2024.102792
Shuaibo Cheng, Xiaopeng Li, Zhaoyuan Zeng, Jia Yan
No-reference video quality assessment (NR-VQA) for user-generated content (UGC) plays a crucial role in ensuring the quality of video services. Although some works have achieved impressive results, their performance-complexity trade-off is still sub-optimal. On the one hand, overly complex network structures and additional inputs require more computing resources. On the other hand, simple sampling methods tend to overlook the temporal characteristics of videos, degrading local textures and potentially distorting thematic content, which in turn degrades VQA performance. We therefore propose an enhanced NR-VQA model, the Adaptive Sampling Strategy for Video Quality Assessment (ADS-VQA). Temporally, we sample videos non-uniformly using features from the lateral geniculate nucleus (LGN) to capture their temporal characteristics. Spatially, a dual-branch structure supplements spatial features across different levels: one branch samples patches at their raw resolution, preserving local texture detail, while the other downsamples under saliency guidance, obtaining global semantic features at reduced computational expense. Experimental results demonstrate that the proposed approach achieves high performance at a lower computational cost than most state-of-the-art VQA models on four popular VQA databases.
(Displays, vol. 84, Article 102792)
Citations: 0
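Non-uniform temporal sampling can be illustrated by inverting the CDF of a per-frame motion proxy, so more frames are drawn where the video changes fastest. This is a deliberate simplification: the paper derives its weighting from LGN features, which we do not model, and all names and parameters below are our own:

```python
import numpy as np

def adaptive_frame_indices(frames, n_samples):
    """frames: (T, H, W) array. Return n_samples frame indices that are
    denser where frame-to-frame difference (a crude motion proxy) is large."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    # one weight per frame; duplicate the first diff for frame 0
    weights = np.concatenate([[diffs[0]], diffs]) + 1e-8
    cdf = np.cumsum(weights) / weights.sum()
    # invert the CDF at evenly spaced quantiles -> non-uniform indices
    quantiles = (np.arange(n_samples) + 0.5) / n_samples
    return np.searchsorted(cdf, quantiles)
```

With a static first half and a changing second half, nearly all sampled indices land in the second half.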
Bridge the gap between practical application scenarios and cartoon character detection: A benchmark dataset and deep learning model
IF 3.7 | CAS Zone 2 | Engineering Technology
Displays Pub Date: 2024-07-04 DOI: 10.1016/j.displa.2024.102793
Zelu Qi, Da Pan, Tianyi Niu, Zefeng Ying, Ping Shi
The success of deep learning in computer vision makes cartoon character detection (CCD) based on object detection a promising means of protecting intellectual property rights. However, due to the lack of suitable cartoon character datasets, CCD remains a little-explored field, and many problems must still be solved to meet the needs of practical applications such as merchandise, advertising, and patent review. In this paper, we propose a new and challenging CCD benchmark dataset, CCDaS, consisting of 140,339 images of 524 famous cartoon characters from 227 cartoon works, game works, and merchandise innovations. To our knowledge, CCDaS is currently the largest CCD dataset for practical application scenarios. To further study CCD, we also provide a CCD algorithm, multi-path YOLO (MP-YOLO), that accurately detects multi-scale and facially similar objects in practical application scenarios. Experimental results show that MP-YOLO achieves better detection results on the CCDaS dataset. Comparative and ablation studies further validate the effectiveness of our dataset and algorithm.
(Displays, vol. 84, Article 102793)
Citations: 0
Multi-stage coarse-to-fine progressive enhancement network for single-image HDR reconstruction
IF 3.7 | CAS Zone 2 | Engineering Technology
Displays Pub Date: 2024-07-03 DOI: 10.1016/j.displa.2024.102791
Wei Zhang, Gangyi Jiang, Yeyao Chen, Haiyong Xu, Hao Jiang, Mei Yu
Compared with traditional imaging, high dynamic range (HDR) imaging records scene information more accurately, providing users a higher-quality visual experience. Inverse tone mapping is a direct and effective way to realize single-image HDR reconstruction, but it usually suffers from detail loss, color deviation, and artifacts. To solve these problems, this paper proposes a multi-stage coarse-to-fine progressive enhancement network (MSPENet) for single-image HDR reconstruction. The entire multi-stage architecture is designed progressively to obtain higher-quality HDR images from coarse to fine, with a mask mechanism used to eliminate the effects of over-exposed regions. Specifically, the first two stages construct two asymmetric U-Nets to learn multi-scale information from the input image and perform coarse reconstruction. The third stage constructs a residual network with channel attention to learn the fusion of progressively transferred multi-level features and perform fine reconstruction. In addition, a multi-stage progressive detail enhancement mechanism is designed, comprising a progressive gated-recurrent-unit fusion mechanism and a multi-stage feature transfer mechanism: the former fuses progressively transferred features with coarse HDR features to reduce the error-stacking effect of multi-stage networks, while the latter fuses early features to supplement information lost during each stage of feature delivery and combines features from different stages. Extensive experimental results show that the proposed method reconstructs higher-quality HDR images and recovers texture and color in over-exposed regions more effectively than state-of-the-art methods.
(Displays, vol. 84, Article 102791)
Citations: 0
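A common way to realize such an over-exposure mask (our assumption; the paper may define it differently) is a soft map that falls from 1 to 0 as pixels approach saturation, so saturated regions contribute less to the loss or fusion:

```python
import numpy as np

def overexposure_mask(ldr, threshold=0.95):
    """ldr: float image in [0, 1], shape (H, W) or (H, W, 3). Returns a
    per-pixel soft mask: 1 in well-exposed areas, ramping to 0 at saturation."""
    # use the max channel as a conservative luminance estimate
    luminance = ldr.max(axis=-1) if ldr.ndim == 3 else ldr
    return np.clip((1.0 - luminance) / (1.0 - threshold), 0.0, 1.0)
```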
Exploring product style perception: A comparative eye-tracking analysis of users across varying levels of self-monitoring
IF 3.7 | CAS Zone 2 | Engineering Technology
Displays Pub Date: 2024-06-28 DOI: 10.1016/j.displa.2024.102790
Yao Wang, Yang Lu, Cheng-Yi Shen, Shi-Jian Luo, Long-Yu Zhang
Digital shopping applications and platforms offer consumers a vast array of products with diverse styles and style attributes. Existing literature suggests that style preferences are determined by consumers' genders, ages, education levels, and nationalities. In this study, we argue for the feasibility and necessity of self-monitoring as an additional consumer variable impacting product style perception and preference, using eye-tracking technology. Three eye-movement experiments were conducted on forty-two participants (twenty males and twenty-two females; age: M = 22.8, SD = 1.63). The results showed that participants with higher levels of self-monitoring exhibited shorter total fixation durations and fewer fixation counts while examining images of watch product styles. In addition, gender exerted an interaction effect on self-monitoring's impact, with female participants of high self-monitoring ability perceiving differences in product styles more rapidly and with greater sensitivity. Overall, the results highlight the utility of self-monitoring as a research variable in product style perception investigations, as well as its implications for style intelligence classifiers and style neuroimaging.
(Displays, vol. 84, Article 102790)
Citations: 0
Viewing preferences of ASD children on paintings
IF 3.7 | CAS Zone 2 | Engineering Technology
Displays Pub Date: 2024-06-27 DOI: 10.1016/j.displa.2024.102788
Ji-Feng Luo, Xinding Xia, Zhihao Wang, Fangyu Shi, Zhijuan Jin
The eye-movement patterns of children with autism spectrum disorder (ASD) have been widely studied with high-precision, high-sampling-rate professional eye trackers. Still, such equipment is expensive and requires skilled operators, and the stimuli have focused on pictures or videos created outside the ASD group. We used a previously developed tablet-based eye-tracking device, with double-column paintings as stimuli (one column from children with ASD, the other from typically developing (TD) children), to investigate whether children with ASD prefer paintings created within their group. The study collected eye-movement data from 82 children with ASD and 102 TD children; an adaptive eye-movement classification algorithm was applied to the data aligned by sampling rate, followed by feature extraction and statistical analysis in terms of time, frequency, range, and clustering. Statistical tests indicate that, apart from displaying more pronounced non-compliance during the experiment (resulting in a higher data-loss rate), children with ASD did not show significant preferences in viewing the two types of paintings compared to TD children. We therefore tend to believe that, with our eye-tracking device, there is no significant difference between the two groups in preference for ASD versus TD paintings shown as a diptych, and the feature values indicate no viewing preference for either painting type.
(Displays, vol. 84, Article 102788)
Citations: 0
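The eye-movement classification step can be sketched with a standard dispersion-threshold (I-DT) fixation detector, which splits gaze samples into fixations and everything else; the paper's adaptive variant tunes its thresholds per recording, which this minimal version does not attempt:

```python
import numpy as np

def idt_fixations(x, y, t, max_dispersion=30.0, min_duration=0.1):
    """Return (start_idx, end_idx) pairs of detected fixations. Dispersion
    is (max(x)-min(x)) + (max(y)-min(y)) over the candidate window."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        # grow the window until it spans at least min_duration seconds
        while j < n and t[j] - t[i] < min_duration:
            j += 1
        if j >= n:
            break
        wx, wy = x[i:j + 1], y[i:j + 1]
        if (wx.max() - wx.min()) + (wy.max() - wy.min()) <= max_dispersion:
            # extend the fixation while dispersion stays under threshold
            while j + 1 < n:
                wx, wy = x[i:j + 2], y[i:j + 2]
                if (wx.max() - wx.min()) + (wy.max() - wy.min()) > max_dispersion:
                    break
                j += 1
            fixations.append((i, j))
            i = j + 1
        else:
            i += 1  # saccade sample; slide the window forward
    return fixations
```

Features such as total fixation duration and fixation count then follow directly from the returned index pairs.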
A pyramid auxiliary supervised U-Net model for road crack detection with dual-attention mechanism
IF 3.7 | CAS Zone 2 | Engineering Technology
Displays Pub Date: 2024-06-27 DOI: 10.1016/j.displa.2024.102787
Yingxiang Lu, Guangyuan Zhang, Shukai Duan, Feng Chen
Road crack detection plays a pivotal role in transportation infrastructure management. However, the diversity of crack morphologies within images and the complexity of background noise still pose significant challenges to automated detection, demanding deep learning models with more precise feature extraction and resistance to noise interference. In this paper, we propose a pyramid auxiliary supervised U-Net model with a dual-attention mechanism. The pyramid auxiliary supervision module is integrated into the U-Net, alleviating the information loss that pooling causes at the encoder and enhancing global perception. Within the dual-attention module, the model learns crucial segmentation features at both the pixel and channel levels, enabling it to resist noise interference and segment crack pixels with higher precision. To substantiate the model's superiority and generalizability, we conducted a comprehensive performance evaluation on public datasets. The experimental results indicate that our model surpasses current leading methods. Additionally, ablation studies confirm the efficacy of the proposed modules.
(Displays, vol. 84, Article 102787)
Citations: 0
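The pixel- and channel-level attention idea can be sketched in NumPy as a squeeze-excite style channel gate followed by a 1x1 spatial gate. Shapes and weight layouts here are placeholders of our own; the paper's actual module design may differ:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dual_attention(feat, w_c, w_s):
    """feat: (C, H, W) feature maps. w_c: (C, C) channel-mixing weights.
    w_s: (1, C) 1x1-conv weights for the spatial map. Returns attended
    features of the same shape."""
    # channel attention: global average pool -> linear -> sigmoid gate
    pooled = feat.mean(axis=(1, 2))                # (C,)
    ch_gate = sigmoid(w_c @ pooled)                # (C,) in (0, 1)
    feat = feat * ch_gate[:, None, None]
    # pixel attention: 1x1 "conv" across channels -> sigmoid map
    sp_gate = sigmoid(np.tensordot(w_s, feat, axes=([1], [0])))[0]  # (H, W)
    return feat * sp_gate[None, :, :]
```

Because both gates lie in (0, 1), the block reweights features without amplifying them.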
Elemental image array generation based on BVH structure combined with spatial partition and display optimization
IF 3.7 | CAS Zone 2 | Engineering Technology
Displays Pub Date: 2024-06-27 DOI: 10.1016/j.displa.2024.102784
Tianshu Li, Shigang Wang, Jian Wei, Yan Zhao, Chenxi Song, Rui Zhang
Integral imaging displays are widely used thanks to full-parallax viewing, glasses-free comfort, and a simple, easily implemented structure. This paper speeds up elemental image array (EIA) generation by optimizing the acceleration structure of the ray-tracing algorithm. Considering the characteristics of the scene objects captured by the camera during integral-image rendering, a novel acceleration structure is constructed by combining a BVH with the camera space. The BVH traversal is expanded into a 4-tree in depth-first order to reduce hierarchy depth and expedite hit-point detection. Additionally, the camera-array parameters are constrained according to the reconstructed three-dimensional (3D) image range, ensuring optimal object coverage on screen. Experimental results demonstrate that the algorithm reduces the ray-tracing time for hitting the triangle mesh of collision objects while automatically determining the display range for stereo images and adjusting camera parameters accordingly, thereby maximizing the utilization of integral imaging display resources.
(Displays, vol. 84, Article 102784)
Citations: 0
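The 4-tree expansion can be sketched as collapsing a binary BVH so that each internal node adopts its grandchildren: the tree's depth roughly halves, and traversal tests up to four child bounds per step instead of two. The `Node` structure and `collapse_to_4ary` below are our own minimal stand-ins, not the paper's data layout:

```python
class Node:
    """Minimal BVH node: bounds, child nodes, and leaf primitives."""
    def __init__(self, bounds, children=(), prims=()):
        self.bounds = bounds
        self.children = list(children)
        self.prims = list(prims)

def collapse_to_4ary(node):
    """Recursively replace each node's children with its grandchildren
    (keeping leaf children as-is), yielding an up-to-4-ary tree."""
    if not node.children:
        return node
    grandchildren = []
    for child in node.children:
        grandchildren.extend(child.children if child.children else [child])
    node.children = [collapse_to_4ary(g) for g in grandchildren]
    return node
```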