Displays, Pub Date: 2026-07-01; Epub Date: 2026-02-08; DOI: 10.1016/j.displa.2026.103383
Yanpu Zhao, Faming Gong, Chengze Du, Xiaofeng Ji, Dongxu Li, Xing Yan, Junjie Xu
Title: "Run as one: CLIP-based semantic fusion hashing for multi-modal retrieval"
Abstract: Multi-modal hashing has received considerable attention from the multimedia community for its ability to fuse data from multiple sources for composite retrieval while effectively improving retrieval and storage efficiency. However, most existing methods struggle to exploit the latent representations in native image-text pairs and face challenges in bridging the semantic gap between heterogeneous modalities. To address these issues, we propose a CLIP-based Semantic Fusion Multi-modal Hashing (CSFMH) framework. Specifically, we use a Contrastive Language-Image Pre-training (CLIP) model to process raw image-text pairs and extract richer visual and textual features. In addition, we propose a multi-modal semantic fusion module and a multi-modal hash learning module that use contrastive learning to map heterogeneous features into a unified embedding space, yielding a robust and compact semantic representation. To the best of our knowledge, this is the first attempt to integrate CLIP into multi-modal hashing. Extensive experiments on three benchmark datasets (MIR Flickr, NUS-WIDE, and MS COCO) show that CSFMH consistently outperforms state-of-the-art methods, achieving up to a 5.7% improvement in mean Average Precision (mAP) for multi-modal retrieval tasks.
(Displays, vol. 93, Article 103383)
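The pipeline the abstract describes (modality features, fusion into a unified space, compact binary codes compared by Hamming distance) can be illustrated with a minimal sketch. The random projection, feature dimensions, and concatenation fusion below are illustrative assumptions, not the paper's actual CSFMH modules, which are learned with contrastive objectives:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_hash(img_feat, txt_feat, proj):
    """Fuse modality features and binarize into a hash code."""
    fused = np.concatenate([img_feat, txt_feat])   # simple concatenation fusion
    fused = fused / np.linalg.norm(fused)          # L2-normalize, as CLIP embeddings are
    return (proj @ fused > 0).astype(np.uint8)     # sign -> {0, 1} bits

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.sum(a != b))

# Toy features standing in for CLIP image/text embeddings (512-d each).
proj = rng.standard_normal((64, 1024))             # 64-bit hash
img, txt = rng.standard_normal(512), rng.standard_normal(512)
code_a = fuse_and_hash(img, txt, proj)
code_b = fuse_and_hash(img + 0.01 * rng.standard_normal(512), txt, proj)
print(code_a.shape, hamming(code_a, code_b))       # nearby inputs usually differ in few bits
```

In a trained system the projection is optimized so that semantically similar pairs land at small Hamming distance; with random features the sketch only shows the shapes and operations involved.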
Displays, Pub Date: 2026-07-01; Epub Date: 2026-01-28; DOI: 10.1016/j.displa.2026.103359
Qiwen Yuan, Jiajie Chen, Zhendong Shi
Title: "Hybrid detection model for unauthorized use of doctor’s code in health insurance: Integrating rule-based screening and LLM reasoning"
Abstract: Unauthorized use of a doctor’s code is a high-risk and context-dependent issue in health-insurance supervision. Traditional rule-based screening achieves high recall but often produces false positives in cases that appear anomalous yet are clinically legitimate, such as telemedicine encounters, refund-related re-settlements, and rapid outpatient-emergency transitions. These methods lack semantic understanding of medical context and rely heavily on manual auditing. We propose a hybrid detection framework that integrates rule-based temporal filtering with large language model (LLM)-based semantic reasoning. Time-threshold rules are first applied to extract suspected cases from real health-insurance claim data. Expert-derived legitimate scenario patterns are then embedded into structured prompts to guide the LLM in semantic plausibility assessment and false-positive reduction. For evaluation, we construct a 240-pair multi-scenario benchmark dataset from de-identified real claim records, covering both reasonable and suspicious situations. Zero-shot experiments with DeepSeek-R1-7B show that the framework achieves 75% accuracy and 87% precision in distinguishing reasonable from unauthorized cases. These results indicate that the proposed method can effectively reduce false alarms and alleviate manual audit workload, providing a practical and efficient solution for real-world health-insurance supervision.
(Displays, vol. 93, Article 103359)
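As a rough illustration of the first, rule-based stage, the sketch below flags a doctor code billed at two different facilities within a short time window. The threshold, record format, and data are hypothetical, and the paper's actual rules and the LLM prompting stage are not reproduced here:

```python
from datetime import datetime, timedelta

# Hypothetical claim records: (doctor_code, facility, timestamp).
claims = [
    ("D001", "ClinicA", datetime(2025, 3, 1, 9, 0)),
    ("D001", "ClinicB", datetime(2025, 3, 1, 9, 10)),  # 10 min apart, different site
    ("D002", "ClinicA", datetime(2025, 3, 1, 9, 0)),
    ("D002", "ClinicA", datetime(2025, 3, 1, 14, 0)),  # same site, hours apart
]

def screen(claims, threshold=timedelta(minutes=30)):
    """Flag doctor codes billed at two different facilities within the threshold."""
    flagged, by_doc = [], {}
    for code, fac, ts in sorted(claims, key=lambda c: c[2]):
        for prev_fac, prev_ts in by_doc.get(code, []):
            if prev_fac != fac and ts - prev_ts <= threshold:
                flagged.append((code, prev_fac, fac, ts))
        by_doc.setdefault(code, []).append((fac, ts))
    return flagged

suspects = screen(claims)
print(suspects)  # only D001 is flagged; D002's same-site visits pass
```

In the paper's framework, cases like the flagged D001 pair would then be passed, with expert-derived legitimate-scenario patterns, into a structured prompt for LLM plausibility assessment.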
Displays, Pub Date: 2026-07-01; Epub Date: 2026-02-04; DOI: 10.1016/j.displa.2026.103379
Changrui Zhu, Ernst Kruijff, Harvey Stedman, Vijay M. Pawar, Simon Julier
Title: "Does display type matter for change detection? Comparing immersive and non-immersive displays under low and high semantic availability"
Abstract: Change detection is a cognitively challenging process that involves three stages: spotting (becoming aware of a change); localising (establishing the specific location of the change); and identifying (recognising the nature of the change). Each of these stages can be influenced by both the way the data is presented (e.g., display type) and the fidelity of that data. To explore these issues, we conducted two studies, both of which examined the effects of display type (immersive virtual reality (VR) or desktop monitor (DM)) and the semantic availability of the scene (low or high realism).

Study 1 (N = 38) explored VR-DM differences in broad scope, examining six change types spanning both spatial and non-spatial changes: disappear, appear, translation, rotation, replacement, and colour. However, there were no significant differences between VR and DM in spotting, localising, or identifying at either level of (semantic) realism. Study 2 (N = 20) followed up by exploring only two types of spatial change (translation and rotation) at a much finer degree of granularity while retaining the same experimental paradigm with necessary refinements. Study 2 showed a significant VR advantage over DM, with different patterns across realism conditions: in low-realism scenes, VR significantly outperformed DM on localisation and change-type identification overall, with the largest VR-DM contrasts observed for the smallest translations. In high-realism scenes, the only significant effect was a display-by-magnitude interaction for change-type identification at the smallest translations. Taking both studies together, VR benefits are most likely for subtle spatial changes, particularly small translations, when semantic availability is limited. Questionnaire ratings also suggested that reliance on visual features varies with semantic availability; semantic cues were rated significantly higher than other features in high-realism scenes only. Finally, there was no significant difference between VR and DM in terms of workload, motion sickness, or self-confidence, suggesting that the perceptual advantages of VR come with no additional physical or cognitive costs for change detection.
(Displays, vol. 93, Article 103379)
Displays, Pub Date: 2026-07-01; Epub Date: 2026-02-05; DOI: 10.1016/j.displa.2026.103378
Pengyun Chen, Ning Cao, Minghui Jiang, Zhuoyu Jin, Hao Liu, Xiaoheng Jiang, Mingliang Xu
Title: "A lightweight multi-weather image restoration via conflict-aware prompt learning"
Abstract: Existing adverse weather image restoration methods face two critical limitations. Firstly, shared-parameter architectures and mixed training strategies overlook the essential differences in frequency-domain feature distributions across various degradation types, resulting in gradient conflicts and objective inconsistency during multi-task optimization. Secondly, they often require heavy parameters and high computation, hindering practical deployment. To address these challenges, we propose a lightweight adverse weather image restoration framework based on conflict-aware prompt learning. It mainly comprises two components: a Conflict-Aware Prompting (CAP) module and a Large-Kernel Prompt-Guided Interaction (LKPGI) module. The CAP module employs low-rank decomposition to model latent correlations across multiple weather removal tasks. It further incorporates learnable adaptive weights for dynamic prompt fusion, effectively mitigating inter-task feature interference. The LKPGI module utilizes large-receptive-field convolutions to enhance spatial interaction and contextual modeling between prompt features and image representations. This mechanism improves the effectiveness of prompt guidance during restoration. Additionally, we propose a Dual-Path Efficient Attention (DPEA) block as the backbone, which adopts a deep-shallow hierarchical design to optimize local detail preservation and global semantic correlation, respectively. This design enhances multi-scale feature representation while keeping the model lightweight. Experimental results demonstrate that our method achieves superior restoration performance across multiple adverse weather conditions while significantly reducing parameter count compared to most existing methods.
(Displays, vol. 93, Article 103378)
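The CAP module's idea of modeling cross-task structure with low-rank factors and learnable fusion weights can be sketched generically. The shapes, softmax weighting, and einsum fusion below are assumptions for illustration rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

d, r, n_tasks = 128, 8, 4  # feature dim, rank, number of weather-removal tasks

# Low-rank factors per task: each task's prompt matrix is A_i @ B_i (rank r).
A = rng.standard_normal((n_tasks, d, r)) / np.sqrt(d)
B = rng.standard_normal((n_tasks, r, d)) / np.sqrt(r)
logits = rng.standard_normal(n_tasks)  # stand-in for learnable adaptive weights

def fused_prompt(A, B, logits):
    """Softmax-weighted sum of low-rank prompt matrices A_i @ B_i."""
    w = np.exp(logits) / np.exp(logits).sum()      # normalize task weights
    return np.einsum("t,tdr,trk->dk", w, A, B)     # sum_t w_t * (A_t @ B_t)

P = fused_prompt(A, B, logits)
print(P.shape)  # (128, 128)
```

The low-rank parameterization stores n_tasks * 2 * d * r values instead of n_tasks * d * d, which is the kind of saving that keeps a multi-task prompt module lightweight.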
Displays, Pub Date: 2026-07-01; Epub Date: 2026-02-06; DOI: 10.1016/j.displa.2026.103382
Xiaoye Michael Wang, Matthew Prenevost, Aneesh Tarun, Ian Robinson, Michael Nitsche, Gabby Resch, Ali Mazalek, Timothy N. Welsh
Title: "Investigating a geometrical solution to the vergence-accommodation conflict for targeted movements in virtual reality"
Abstract: Virtual reality (VR) uses head-mounted displays to simulate perceptuomotor experiences in physical environments. However, this mediated approach alters how visual information is presented compared to naturalistic viewing, which can affect user performance. One critical challenge in VR development is the vergence-accommodation conflict (VAC), which stems from the inherent limitations of approximating natural viewing geometry through digital displays. Although various hardware and software solutions have been proposed to address VAC, no commercially viable option has been universally adopted. This paper presents and evaluates a software solution grounded in a vision-based geometrical model of VAC that alleviates VAC’s impact on movement in VR. The model predicts the impact of VAC as a constant offset to the vergence angle, distorting the binocular viewing geometry in a way that results in movement undershooting. In Experiment 1, a 3D pointing task validated the model’s predictions and demonstrated that VAC primarily affects online movements involving real-time visual feedback. Experiment 2 implemented a shader program to rectify the effect of VAC, improving movement accuracy by approximately 30%. Overall, this work presents a practical approach to reducing the impact of VAC on HMD-based manual interactions, enhancing the user experience in virtual environments.
(Displays, vol. 93, Article 103382)
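The model treats VAC as a constant offset added to the vergence angle. Under standard binocular geometry, a minimal sketch shows how a positive offset compresses the distance implied by vergence, producing undershoot. The 0.5 degree offset and 63 mm interpupillary distance below are illustrative values, not the paper's fitted parameters:

```python
import math

IPD = 0.063  # interpupillary distance in meters (a typical adult value)

def vergence_angle(d):
    """Vergence angle (radians) for a target at distance d, fixated symmetrically."""
    return 2 * math.atan(IPD / (2 * d))

def perceived_distance(d, offset_rad):
    """Distance implied when a constant offset inflates the vergence angle."""
    return IPD / (2 * math.tan((vergence_angle(d) + offset_rad) / 2))

# A positive offset makes every target seem closer than it is: undershoot.
for d in (0.4, 0.6, 0.8):
    print(d, round(perceived_distance(d, math.radians(0.5)), 3))
```

Inverting this mapping is, in spirit, what a corrective shader can do: render the scene so that the offset vergence geometry lands the percept back at the intended distance.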
Displays, Pub Date: 2026-07-01; Epub Date: 2026-02-04; DOI: 10.1016/j.displa.2026.103376
Yongqing Cai, Cheng Han, Wei Quan, Yuechen Zhang
Title: "A visual attention-based model for VR sickness assessment"
Abstract: With the annual increase in Virtual Reality (VR) products and content, an increasing number of users are engaging with VR videos. However, many users experience discomfort such as headaches and dizziness during VR experiences, a phenomenon known as VR sickness. To enhance user comfort during VR experiences, this study proposes a VR sickness assessment model based on visual attention mechanisms, enabling automatic classification of VR content so users can select experiences suitable for their needs. The proposed model comprises an attention stream subnetwork, inspired by user attention mechanisms, and a motion stream subnetwork, jointly forming a dual-stream evaluation system. Leveraging a transformer architecture, the model establishes self-attention mechanisms over temporal and spatial sequences to capture their interdependent features. A multi-level fusion strategy is employed to extract low-level, high-level, and global features, while attention mechanisms adaptively integrate these multi-level features, achieving precise VR sickness assessment results. Experiments conducted on publicly available datasets demonstrate the effectiveness of the visual attention mechanism in improving model assessment accuracy. The model achieved 88.18% and 92.22% accuracy on two public datasets, respectively, representing a significant performance improvement compared to existing studies.
(Displays, vol. 93, Article 103376)
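The adaptive integration of multi-level features can be pictured as a scaled dot-product attention weighting over feature levels. The query vector and dimensions below are illustrative stand-ins for the model's learned components, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

def attention_fuse(features, query):
    """Weight low-level, high-level, and global features by their scaled
    dot-product similarity to a query vector, then sum them."""
    F = np.stack(features)                        # (levels, d)
    scores = F @ query / np.sqrt(F.shape[1])      # scaled dot-product scores
    w = np.exp(scores) / np.exp(scores).sum()     # softmax over levels
    return w @ F, w                               # fused (d,) vector and weights

low, high, glob = (rng.standard_normal(64) for _ in range(3))
query = rng.standard_normal(64)                   # stand-in for a learned query
fused, weights = attention_fuse([low, high, glob], query)
print(fused.shape, weights.round(2))
```

In a trained model the query (and typically key/value projections) are learned so the weighting reflects which level is most informative for the current input.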
Displays, Pub Date: 2026-07-01; Epub Date: 2026-02-10; DOI: 10.1016/j.displa.2026.103390
Aditya Singh, Meet Anil Bhanushali, Jingwu Luo, Gang Luo, Shrinivas Pundlik
Title: "Assisting the blind to reach daily objects using smart glasses"
Abstract: Searching for objects in their surroundings is challenging for blind and visually impaired individuals (BVI) in daily life. Current assistive technologies powered by large language models (LLMs) and vision language models (VLMs) can offer BVI scene descriptions through conversations. However, communication is often inefficient in helping BVI reach daily objects or destinations, because those general-purpose LLMs/VLMs are not optimized for interpreting or conveying spatial information. We developed a smart glasses solution that utilizes open-vocabulary object detection models to aid BVI in searching for and reaching a variety of specific objects not limited to the fixed categories of model training. In our implementation, video streams from the glasses can be processed by open-vocabulary object detection models either locally or on other connected devices, such as a smartphone or computer. Users can input custom search prompts verbally. This hands-free solution allows people to naturally scan their surroundings by moving their heads, and stereo audio tones provide directional cues in the horizontal and vertical directions to help zero in on targets, making it possible to reach these objects accurately. We conducted a human subject pilot study involving 5 blindfolded individuals who reached specific objects (e.g., grabbing the red bottle; reaching the empty chair) among distractors. The smart glasses solution was compared with Ray-Ban Meta glasses running built-in Meta AI for scene recognition. The average task time with our solution (53 s) was significantly lower than with the Meta glasses (126 s, p < 0.001). The device was also demonstrated to successfully aid a blind user in a grocery shopping scenario. This work shows that active orientation guidance, which is typically lacking in VLMs but provided by our smart glasses solution, can aid interaction with the surrounding environment, such as when reaching for objects and destinations.
(Displays, vol. 93, Article 103390)
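The stereo tone guidance can be pictured as a mapping from a detected target's offset in the camera frame to a left/right pan and a pitch cue. The specific mapping below (linear pan, pitch rising toward vertical center, hypothetical field of view and frequencies) is an assumed scheme for illustration; the abstract does not describe the glasses' actual mapping at this level of detail:

```python
def audio_cue(dx_deg, dy_deg, fov_deg=60.0):
    """Map a target's angular offset from the view center to audio cues.

    dx_deg, dy_deg: horizontal/vertical offset in degrees (right/up positive).
    Returns (pan, pitch_hz): pan in [-1, 1] for left/right balance, and a tone
    whose pitch rises as the target approaches the vertical center.
    """
    half = fov_deg / 2
    pan = max(-1.0, min(1.0, dx_deg / half))            # horizontal -> stereo pan
    base_hz, span_hz = 440.0, 220.0                     # hypothetical tone range
    pitch = base_hz + span_hz * max(0.0, 1 - abs(dy_deg) / half)
    return pan, pitch

print(audio_cue(0.0, 0.0))     # centered target: pan 0.0, highest pitch
print(audio_cue(-15.0, 10.0))  # target left of and above center
```

As the user turns their head toward the target, dx and dy shrink, the pan centers, and the pitch rises, which is the "zeroing in" behavior the abstract describes.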
Displays, Pub Date: 2026-07-01; Epub Date: 2026-01-29; DOI: 10.1016/j.displa.2026.103369
Peitong Han, Yan Zhao, Jian Wei, Shibo Wang, Shigang Wang
Title: "Eye-tracking-based perceptual performance evaluation for multi-screen spliced aircraft"
Abstract: The evolution from conventional glass cockpits to enclosed, multi-screen display configurations has significantly increased visual complexity and pilot cognitive workload, posing new challenges for the scientific assessment of visual perception. Current evaluation methods primarily rely on subjective questionnaires, which lack objectivity and timeliness and cannot support real-time cockpit optimization. To overcome these limitations, this study presents an objective visual perception assessment approach for closed cockpit environments. Specifically, three novel eye-tracking indicators (perceptual continuity, visual responsiveness, and focus degree) are proposed and extracted using algorithms developed in this work. These indicators are fused through a regression-based model to achieve non-intrusive, quantitative perception evaluation based on eye-tracking data collected in real time. Experiments conducted in a three-screen splicing scenario demonstrate that the proposed method achieves high prediction accuracy and robustness, providing an effective tool for optimizing cockpit display design and monitoring pilot perceptual states during flight operations.
(Displays, vol. 93, Article 103369)
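The regression-based fusion of the three indicators can be sketched as an ordinary least-squares fit. The indicator values, true weights, and noise level below are synthetic, chosen purely to show the mechanics rather than the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic per-trial indicators: perceptual continuity, visual
# responsiveness, and focus degree, each scaled to [0, 1].
X = rng.uniform(0, 1, size=(200, 3))
true_w = np.array([0.5, 0.3, 0.2])                 # hypothetical fusion weights
y = X @ true_w + 0.01 * rng.standard_normal(200)   # simulated perception score

# Least-squares fit of the fusion weights, with an intercept column.
Xb = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(w.round(2))  # recovers approximately [0.5, 0.3, 0.2, 0.0]
```

The fitted weights then turn a real-time triple of indicator values into a single perception score via a dot product, which is what makes the evaluation non-intrusive and continuous.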
Displays, Pub Date: 2026-07-01; Epub Date: 2026-02-04; DOI: 10.1016/j.displa.2026.103381
Mengxue Yan, Zirui Wang, Zhenfeng Li, Peng Wang, Pang Wu, Xianxiang Chen, Lidong Du, Li Li, Hongbo Chang, Zhen Fang
Title: "Localization knowledge-driven segmentation of arteries in ultrasound images"
Abstract: Accurate segmentation of the arterial lumen in ultrasound images is crucial for clinical diagnosis and hemodynamic assessment, but is challenged by inherent image properties such as low contrast, artifacts, and surrounding tissues with similar morphology. These factors jointly lead to significant localization ambiguity, which severely hampers the performance of segmentation models. To address this issue, we propose a novel Localization Knowledge-Driven Segmentation (LKDS) framework, which guides accurate segmentation through explicit localization. The proposed framework first acquires robust localization knowledge through a Localization Prior Learning (LPL) process on a coarsely annotated dataset, which is then efficiently transferred and adapted to target datasets via a few-shot pseudo-labeling strategy. Operationally, the LKDS framework generates a dynamic Localization Map (LM) for each image to explicitly guide a subsequent network in performing accurate segmentation. Extensive experiments on two distinct arterial ultrasound datasets show that our LKDS framework not only accelerates training convergence but also significantly outperforms state-of-the-art implicit segmentation methods. Our work demonstrates that explicitly incorporating localization knowledge is an effective strategy for significantly enhancing the performance of arterial segmentation.
(Displays, vol. 93, Article 103381)
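One simple way to picture a Localization Map guiding a network is as a spatial map that modulates feature channels, concentrating them around the predicted lumen position. The Gaussian form, center, and sigma below are illustrative assumptions, not the LKDS framework's learned maps:

```python
import numpy as np

def localization_map(h, w, cy, cx, sigma):
    """2-D Gaussian map peaking at the predicted lumen location (cy, cx)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def guide(features, lm):
    """Modulate a (channels, H, W) feature tensor with the localization map."""
    return features * lm[None, :, :]   # broadcast the map over channels

feats = np.ones((8, 64, 64))           # toy feature tensor
lm = localization_map(64, 64, cy=32, cx=32, sigma=8.0)
guided = guide(feats, lm)
print(guided.shape)                    # features far from the center are suppressed
```

The effect is that responses near the predicted location pass through at full strength while distant, ambiguous regions are attenuated, which is the intuition behind localization-guided segmentation.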
Displays, Pub Date: 2026-07-01; Epub Date: 2026-02-05; DOI: 10.1016/j.displa.2026.103375
Edanur Fettahoğlu, Erkan Aydıntan
Title: "Digital experience in physical space: The design process of hybrid spaces"
Abstract: Digital culture, reflected in every aspect of human life, has a direct impact on art and architecture. The developments brought about by digitalization in these fields have paved the way for hybrid spaces that offer users a digital spatial experience. By bridging physical and digital realities, these spaces introduce design processes that differ from established architectural workflows, making their internal structure and development a subject of inquiry. Accordingly, the aim of this study is to examine the internal structure and developmental stages of hybrid space design processes within architectural practice, and to discuss the design-related innovations these processes introduce to architectural design thinking.

This research was guided by the need to address hybrid spaces, which emerge from a multidisciplinary design process, within the context of architectural design. The study was conducted in four stages, with its theoretical framework established during the preparation phase. Data were obtained from three sources: analyses of project examples, interviews with design studios, and on-site detection studies. In the data analysis phase, the data were first examined separately, then the entire data set was evaluated, and finally the results were analyzed holistically using triangulation. The analysis results were presented schematically and discussed. The findings showed that hybrid spaces not only brought innovations to the architectural design process but also transformed the tools, methods, and team structures involved. This trend requires adaptable, iterative design strategies that integrate digital and physical spaces and involve technology-focused, interdisciplinary collaboration.
(Displays, vol. 93, Article 103375)