Virtual Reality Intelligent Hardware: Latest Articles

A data-based real-time petrochemical gas diffusion simulation approach on virtual reality
Virtual Reality Intelligent Hardware Pub Date: 2023-06-01 DOI: 10.1016/j.vrih.2023.01.001
Min Yang, Yong Han, Chang Su, Xue Li
Background: Petrochemical products are flammable, explosive, and toxic, so petrochemical accidents are generally extremely destructive. Disaster analysis, prediction, and real-time simulation have therefore become important means of controlling and reducing accident hazards.
Methods: This study proposes a complete real-time gas diffusion simulation solution based on coordinate and concentration data, aimed mainly at harmful gas leakage and diffusion accidents in the petrochemical industry. Grid homogenization and trilinear interpolation make the rendering more continuous and accurate, and a parallelized data processing and rendering pipeline improves simulation efficiency. Gas concentration and fragment transparency are combined to synthesize transparent pixels in the scene. To keep the rendering approximately accurate, improve real-time rendering efficiency, and support intuitive perception of the concentration data, a weighted blended order-independent transparency (OIT) with enhanced alpha weight is presented, which conveys the hierarchical information of the concentration data more intuitively while preserving depth information. Three OIT algorithms (depth peeling, weighted blended OIT, and weighted blended OIT with enhanced alpha weight) are compared and analyzed in terms of rendering image quality, rendering time, required memory, and hierarchical information.
Results: With weighted blended OIT with enhanced alpha weight, rendering time is shortened by 53.2% compared with depth peeling, and far less texture memory is required. Its rendering results are approximately accurate when judged against depth peeling as ground truth, and there is no popping when surfaces pass through one another. Compared with standard weighted blended OIT, the enhanced alpha weight also provides an intuitive perception of the hierarchical information of the concentration data.
Volume 5, Issue 3, Pages 266-278.
Citations: 0
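The Methods rely on trilinear interpolation over a homogenized concentration grid to make the rendered field continuous between samples. As a rough illustration only (not the paper's code; the function name, grid layout, and parameters are assumptions), a minimal NumPy sketch of trilinear interpolation might look like this:

```python
import numpy as np

def trilinear_interpolate(grid, point, origin, spacing):
    """Estimate gas concentration at an arbitrary point from a regular
    3D concentration grid by trilinear interpolation (illustrative sketch).

    grid    : (nx, ny, nz) array of concentrations at grid nodes
    point   : (x, y, z) query coordinate
    origin  : world coordinate of grid node (0, 0, 0)
    spacing : cell size along each axis
    """
    # Fractional index of the query point inside the grid
    f = (np.asarray(point) - origin) / spacing
    i0 = np.floor(f).astype(int)
    # Clamp so the 2x2x2 neighborhood stays inside the grid
    i0 = np.clip(i0, 0, np.array(grid.shape) - 2)
    t = f - i0                      # local coordinates in [0, 1]^3
    c = grid[i0[0]:i0[0]+2, i0[1]:i0[1]+2, i0[2]:i0[2]+2]
    # Interpolate along x, then y, then z
    cx = c[0] * (1 - t[0]) + c[1] * t[0]
    cy = cx[0] * (1 - t[1]) + cx[1] * t[1]
    return cy[0] * (1 - t[2]) + cy[1] * t[2]

# Example: query a concentration value between grid nodes
grid = np.random.rand(16, 16, 16)
print(trilinear_interpolate(grid, (2.5, 3.2, 7.8), origin=0.0, spacing=1.0))
```

Blending the eight surrounding grid nodes this way is what smooths the concentration field between the homogenized samples.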
The Improvement of Iterative Closest Point with Edges of Projected Image
Virtual Reality Intelligent Hardware Pub Date: 2023-06-01 DOI: 10.1016/j.vrih.2022.09.001
Chen Wang
Background: Artificial environments contain many regularly shaped objects whose poses are difficult to distinguish when only geometric information is used. With the development of sensor technologies, other information can be used to solve this problem.
Methods: We propose an algorithm that registers point clouds by integrating color information. The key idea is to jointly optimize a dense term and an edge term. The dense term is built similarly to the iterative closest point (ICP) algorithm. To build the edge term, we extract the edges of the images obtained by projecting the point clouds; this edge term prevents the point clouds from sliding during registration. This loosely coupled method fuses geometric and color information.
Results: Experiments demonstrate that the edge-image approach improves precision and that the algorithm is robust.
Volume 5, Issue 3, Pages 279-291.
Citations: 0
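The abstract describes jointly optimizing a dense ICP-style term and an edge term derived from projected images. A hedged sketch of one joint alignment step follows, assuming point-to-point residuals and edge pixels already back-projected to 3D; the paper's exact residuals and weighting may differ:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst, src_edge, dst_edge, lam=0.5):
    """One weighted Procrustes step of a joint dense + edge alignment.

    src, dst           : (N,3)/(M,3) point clouds (dense geometry)
    src_edge, dst_edge : 3D points back-projected from edge pixels of
                         the images rendered from each cloud
    lam                : weight of the edge term relative to the dense term
    Returns R, t minimizing
        E = sum ||R p + t - q||^2 + lam * sum ||R e + t - f||^2
    for nearest-neighbor correspondences (p,q) and (e,f).
    """
    # Nearest-neighbor correspondences for both terms
    q = dst[cKDTree(dst).query(src)[1]]
    f = dst_edge[cKDTree(dst_edge).query(src_edge)[1]]
    # Stack both residual sets with their weights
    p_all = np.vstack([src, src_edge])
    q_all = np.vstack([q, f])
    w = np.concatenate([np.ones(len(src)), lam * np.ones(len(src_edge))])
    # Weighted Procrustes (Kabsch) solution for the stacked system
    mu_p = (w[:, None] * p_all).sum(0) / w.sum()
    mu_q = (w[:, None] * q_all).sum(0) / w.sum()
    H = ((p_all - mu_p) * w[:, None]).T @ (q_all - mu_q)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ np.diag([1, 1, np.linalg.det(Vt.T @ U.T)]) @ U.T
    t = mu_q - R @ mu_p
    return R, t
```

Because the edge correspondences constrain motion along directions where the dense geometry is ambiguous (flat or regular surfaces), they act as the anti-sliding term the abstract describes.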
Designing interactive glazing through an engineering psychology approach: Six augmented reality scenarios that envision future car human-machine interface
Virtual Reality Intelligent Hardware Pub Date: 2023-04-01 DOI: 10.1016/j.vrih.2022.07.004
Wei Liu, Yancong Zhu, Ruonan Huang, Takumi Ohashi, Jan Auernhammer, Xiaonan Zhang, Ce Shi, Lu Wang
As more vehicles become autonomous, intelligent, and connected, the future use of the car human-machine interface (HMI) in such vehicles becomes increasingly relevant. While car HMI has been addressed in several scientific studies, little attention has been paid to designing and implementing interactive glazing in everyday (autonomous) driving contexts. Reflecting on prior findings in theory and practice, we describe an engineering psychology practice and the design of six novel future user scenarios that envision a specific set of augmented reality (AR) supported user interactions. We also present evaluations conducted with the scenarios and experiential prototypes, which found that these AR scenarios support our target user groups in experiencing a new type of interaction. The overall evaluation was positive, with valuable assessment results and suggestions. We envision that this paper will interest applied psychology educators who aspire to teach students with little preexisting expertise or scientific knowledge of engineering psychology how to operationalize AR in a human-centered design (HCD) process.
Volume 5, Issue 2, Pages 157-170.
Citations: 5
Outliers rejection in similar image matching
Virtual Reality Intelligent Hardware Pub Date: 2023-04-01 DOI: 10.1016/j.vrih.2023.02.004
Qingqing Chen, Junfeng Yao
Background: Image matching is crucial in numerous computer vision tasks such as 3D reconstruction and simultaneous visual localization and mapping, and the accuracy of the matching significantly affects subsequent tasks. When image pairs contain comparable patterns positioned at different locations, their local similarity can cause incorrect matches if global motion consistency is disregarded.
Methods: This study proposes an image-matching filtering algorithm based on global motion consistency. Built on the principle of motion smoothness, it can serve as a post-filter for the initial matching results produced by other matching algorithms. A particular matching algorithm first performs the initial matching; then, the rotation and movement information of the global feature vectors is combined to effectively identify outlier matches. The principle is that if the matching result is accurate, the feature vectors formed by any matched points should have similar rotation angles and moving distances. Thus, consistency of global motion direction and global motion distance is used to reject outliers caused by similar patterns in different locations.
Results: Four datasets were used to test the effectiveness of the proposed method: three with similar patterns in different locations, to test similar images that other algorithms easily mismatch, and one commonly used dataset, to test the general image-matching problem. The experimental results suggest that the proposed method identifies mismatches in the initial matching set more accurately than other state-of-the-art algorithms.
Conclusions: The proposed outlier rejection method significantly improves matching accuracy for similar images with locally similar feature pairs in different locations and provides more accurate matching results for subsequent computer vision tasks.
Volume 5, Issue 2, Pages 171-187.
Citations: 0
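The stated principle (correct matches share similar rotation angles and moving distances) lends itself to a direct sketch. The following NumPy filter is illustrative, not the authors' implementation; the thresholds and the median-based consensus are assumptions:

```python
import numpy as np

def reject_by_global_motion(pts1, pts2, ang_tol=np.deg2rad(10), dist_tol=0.2):
    """Keep only matches whose displacement vector agrees with the
    dominant global motion in both direction and length.

    pts1, pts2 : (N,2) matched keypoint coordinates in image 1 / image 2
    ang_tol    : allowed deviation from the median motion angle (radians)
    dist_tol   : allowed relative deviation from the median motion length
    """
    d = pts2 - pts1                          # displacement vector per match
    ang = np.arctan2(d[:, 1], d[:, 0])       # motion direction per match
    dist = np.linalg.norm(d, axis=1)         # motion distance per match
    ang_med, dist_med = np.median(ang), np.median(dist)
    # Angular difference wrapped to (-pi, pi]
    dang = np.angle(np.exp(1j * (ang - ang_med)))
    keep = (np.abs(dang) < ang_tol) & \
           (np.abs(dist - dist_med) < dist_tol * max(dist_med, 1e-9))
    return keep                               # boolean inlier mask
```

A repeated pattern matched at the wrong location produces a displacement vector whose angle or length disagrees with the consensus, so it fails this test even though the local descriptors match well.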
Adaptive spatio-temporal attention neural network for cross-database micro-expression recognition
Virtual Reality Intelligent Hardware Pub Date: 2023-04-01 DOI: 10.1016/j.vrih.2022.03.006
Yuhan Ran
Background: The use of micro-expression recognition to recognize human emotions is one of the most critical challenges in human-computer interaction applications. In recent years, cross-database micro-expression recognition (CDMER) has emerged as a significant challenge in micro-expression recognition and analysis. Because the training and testing data in CDMER come from different micro-expression databases, CDMER is more challenging than conventional micro-expression recognition.
Methods: This paper presents an adaptive spatio-temporal attention neural network (ASTANN) with an attention mechanism to address this challenge. To this end, the micro-expression databases SMIC and CASME II are first preprocessed with an optical flow approach that extracts motion information between video frames representing the discriminative features of micro-expressions. After preprocessing, a novel adaptive framework with a spatiotemporal attention module assigns spatial and temporal weights to enhance the most discriminative features. The deep neural network then extracts cross-domain features, in which the second-order statistics of the sample features in the source domain are aligned with those in the target domain by minimizing the correlation alignment (CORAL) loss, so that the source and target databases share similar distributions.
Results: To evaluate the performance of ASTANN, experiments were conducted on the SMIC and CASME II databases under the standard CDMER evaluation protocol. The results demonstrate that ASTANN outperformed other methods on the relevant cross-database tasks.
Conclusions: Extensive experiments on benchmark tasks show that ASTANN performs better than other approaches, demonstrating its superiority in solving the CDMER problem.
Volume 5, Issue 2, Pages 142-156.
Citations: 1
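The CORAL loss named in the Methods has a standard closed form (Sun and Saenko, 2016): the squared Frobenius distance between source and target feature covariance matrices, scaled by 1/(4d^2). A minimal PyTorch sketch follows; the model API in the trailing comment is hypothetical, not ASTANN's:

```python
import torch

def coral_loss(source, target):
    """CORAL loss: align second-order statistics of source- and
    target-domain features by penalizing the Frobenius distance
    between their covariance matrices.

    source, target : (n_s, d) / (n_t, d) feature batches
    """
    d = source.size(1)

    def cov(x):
        x = x - x.mean(dim=0, keepdim=True)
        return (x.t() @ x) / (x.size(0) - 1)

    return ((cov(source) - cov(target)) ** 2).sum() / (4 * d * d)

# Sketch of use: total loss = classification loss + lambda * CORAL loss
# logits, feat_s = model(x_source)   # hypothetical model returning features
# _, feat_t = model(x_target)
# loss = torch.nn.functional.cross_entropy(logits, y) + coral_loss(feat_s, feat_t)
```

Minimizing this term pulls the two databases' feature distributions together at the level of second-order statistics, which is exactly the alignment the abstract describes.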
Web-based Mixed Reality Video Fusion with Remote Rendering
Virtual Reality Intelligent Hardware Pub Date: 2023-04-01 DOI: 10.1016/j.vrih.2022.03.005
Qiang Zhou, Zhong Zhou
A mixed reality (MR) video fusion system fuses video imagery with 3D scenes, making the scene much more realistic and helping users understand the video contents and the temporal-spatial correlations between them, thus reducing cognitive load. MR video fusion is now used in various applications. However, video fusion systems require powerful client machines because video streaming delivery, stitching, and rendering are computation-intensive. High bandwidth usage is another critical factor affecting the scalability of video fusion systems. The framework proposed in this paper overcomes these client limitations by utilizing remote rendering. Furthermore, the framework is browser-based, so users can try the MR video fusion system on a laptop or even a tablet, with no extra plug-ins or applications to install. Experiments on diverse metrics demonstrate the effectiveness of the proposed framework.
Volume 5, Issue 2, Pages 188-199.
Citations: 0
Compression of Surface Texture Acceleration Signal Based on Spectrum Characteristics
Virtual Reality Intelligent Hardware Pub Date: 2023-04-01 DOI: 10.1016/j.vrih.2022.01.006
Dongyan Nie, Xiaoying Sun
Background: Adequate data collection can enhance the realism of online rendering or offline playback of surface texture haptics. A parallel challenge is reducing communication delays and improving storage space utilization.
Methods: Based on the similarity of short-term amplitude spectrum trends, this paper proposes a frequency-domain compression method. A compression framework is designed that first maps the amplitude spectrum into a trend-similarity grayscale image and compresses it with a still-picture compression method, and then adaptively encodes the maximum amplitude and part of the initial phase of each time window, achieving the final compression.
Results: Comparison between the original and recovered signals shows that at 90% time-frequency similarity, the average compression ratio of our method is 9.85% for a single interaction point. The subjective similarity score reached an excellent level, with an average of 87.85.
Conclusions: Our method can be used for offline compression of vibrotactile data. For multiple interaction points in space, the trend-similarity grayscale image can be reused, further reducing the compression ratio.
Volume 5, Issue 2, Pages 110-123.
Citations: 0
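As a hedged reading of the framework (window length, the PNG codec, and the omission of phase encoding are all assumptions; the paper additionally encodes partial initial phases per window), the spectrum-to-image step might be sketched as:

```python
import numpy as np
from PIL import Image

def compress_accel(signal, win=256, hop=256, path="spectrum.png"):
    """Illustrative sketch: short-time amplitude spectra, normalized per
    window by their peak, packed into a grayscale 'trend' image and
    compressed with a still-picture codec (PNG here). The per-window
    maxima are returned as side information to be encoded adaptively;
    phase encoding from the paper is omitted."""
    frames = np.lib.stride_tricks.sliding_window_view(signal, win)[::hop]
    amp = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))
    peak = amp.max(axis=1, keepdims=True)            # per-window max amplitude
    gray = np.round(255 * amp / np.maximum(peak, 1e-12)).astype(np.uint8)
    Image.fromarray(gray.T).save(path)               # freq x time grayscale image
    return peak.ravel()

# Example: compress one second of a simulated 2.5 kHz acceleration signal
peaks = compress_accel(np.random.randn(2500))
```

Normalizing each window by its peak is what exposes the trend similarity between windows, letting the image codec exploit the redundancy.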
MSSTNet: Multi-scale facial videos pulse extraction network based on separable spatiotemporal convolution and dimension separable attention
Virtual Reality Intelligent Hardware Pub Date: 2023-04-01 DOI: 10.1016/j.vrih.2022.07.001
Changchen Zhao, Hongsheng Wang, Yuanjing Feng
Background: Using remote photoplethysmography (rPPG) to estimate blood volume pulse in a non-contact way has been an active research topic in recent years. Existing methods are mainly based on a single-scale region of interest (ROI). However, some noise signals that are not easily separated in single-scale space can be easily separated in multi-scale space. In addition, existing spatiotemporal networks mainly focus on local spatiotemporal information and lack emphasis on temporal information, which is crucial in pulse extraction, resulting in insufficient spatiotemporal feature modeling.
Methods: This paper proposes a multi-scale facial video pulse extraction network based on separable spatiotemporal convolution and dimension-separable attention. First, to overcome the single-scale ROI limitation, we construct a multi-scale feature space for initial signal separation. Second, separable spatiotemporal convolution and dimension-separable attention are designed for efficient spatiotemporal correlation modeling, which increases the information interaction between long-span time and space dimensions and puts more emphasis on temporal features.
Results: The signal-to-noise ratio (SNR) of the proposed network reaches 9.58 dB on the PURE dataset and 6.77 dB on the UBFC-rPPG dataset, outperforming state-of-the-art algorithms.
Conclusions: Fusing multi-scale signals generally obtains better results than methods based on a single-scale signal alone. The proposed separable spatiotemporal convolution and dimension-separable attention mechanism contribute to more accurate pulse signal extraction.
Volume 5, Issue 2, Pages 124-141.
Citations: 2
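Separable spatiotemporal convolution typically factorizes a 3D kernel into a 2D spatial convolution followed by a 1D temporal convolution, in the spirit of (2+1)D convolutions. A minimal PyTorch sketch of such a block follows; layer sizes are illustrative, not MSSTNet's:

```python
import torch
import torch.nn as nn

class SeparableSTConv(nn.Module):
    """Factorized spatiotemporal convolution: a 2D spatial convolution
    over each frame followed by a 1D temporal convolution, instead of a
    full 3D kernel. Input/output: (batch, channels, time, height, width)."""

    def __init__(self, c_in, c_out, k_s=3, k_t=3):
        super().__init__()
        self.spatial = nn.Conv3d(c_in, c_out, kernel_size=(1, k_s, k_s),
                                 padding=(0, k_s // 2, k_s // 2))
        self.temporal = nn.Conv3d(c_out, c_out, kernel_size=(k_t, 1, 1),
                                  padding=(k_t // 2, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.temporal(self.act(self.spatial(x))))

# Example: an 8-frame, 64x64 face clip with 3 color channels
x = torch.randn(1, 3, 8, 64, 64)
print(SeparableSTConv(3, 16)(x).shape)   # torch.Size([1, 16, 8, 64, 64])
```

Keeping the temporal kernel as a separate stage is one way to place extra emphasis on the time dimension, which the abstract identifies as crucial for pulse extraction.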
Intelligent Fire Information System Based on 3D GIS
Virtual Reality Intelligent Hardware Pub Date: 2023-04-01 DOI: 10.1016/j.vrih.2022.07.002
Jinxing Hu, Zhihan Lv, Diping Yuan, Bing He, Dongmei Yan
This work aims to build a comprehensive and effective fire emergency management system based on the Internet of Things (IoT) and to achieve truly intelligent fire rescue. A smart fire protection information system was designed based on the IoT, and a detailed analysis was conducted of rescue vehicle scheduling and the evacuation of trapped persons during fire rescue. The intelligent fire visualization platform, based on three-dimensional (3D) Geographic Information Science (GIS), covers project overview, equipment status, equipment classification, equipment alarm information, alarm classification, alarm statistics, equipment account information, and other modules. Live video accessed through the visual interface can clearly identify the stage of a fire, facilitating the deployment of rescue equipment and personnel. The vehicle scheduling model primarily uses two objective functions, emergency rescue time and the number of vehicles, to solve for the Pareto non-dominated solution set. In addition, an evacuation path optimization method based on an Improved Ant Colony (IAC) algorithm was designed to dynamically optimize building fire evacuation paths. The experimental results indicate that all detection signal values in the smoldering fire scene at t = 17 s were significantly larger than their initial values. Moreover, the probabilities of smoldering fire and open fire were relatively large according to the probability functions of the corresponding fire situations, demonstrating that the model can detect fire. The IAC algorithm avoids passages near the fire and its spreading areas as much as possible and takes the safety of trapped persons as its premise when planning evacuation routes. The IoT-based fire information system is therefore valuable for ensuring fire safety and carrying out emergency rescue, and is worthy of popularization and application.
Volume 5, Issue 2, Pages 93-109.
Citations: 0
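One plausible reading of the IAC evacuation idea is a standard ant colony transition rule whose distance heuristic is damped by fire proximity, so corridors near the fire become unattractive. The sketch below is an assumption-laden illustration, not the paper's algorithm; alpha, beta, and gamma are hypothetical parameters:

```python
import numpy as np

def transition_probs(pheromone, dist, fire_risk, alpha=1.0, beta=2.0, gamma=4.0):
    """Ant transition probabilities from the current node to candidate
    neighbors. The usual inverse-distance heuristic is damped by a
    fire-risk penalty so ants (evacuees) prefer passages away from fire.

    pheromone, dist, fire_risk : 1D arrays over candidate next nodes
    """
    eta = 1.0 / (dist * (1.0 + gamma * fire_risk))   # risk-aware heuristic
    score = pheromone ** alpha * eta ** beta
    return score / score.sum()

# Example: three corridors, the second one close to the fire
p = transition_probs(pheromone=np.array([1.0, 1.0, 1.0]),
                     dist=np.array([5.0, 4.0, 6.0]),
                     fire_risk=np.array([0.1, 0.9, 0.0]))
print(p)   # the corridor near the fire gets a low probability
```

Because fire_risk can be refreshed from live sensor data each planning cycle, the same rule supports the dynamic re-optimization of evacuation paths described in the abstract.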
Hardware—A New Open Access Journal
Virtual Reality Intelligent Hardware Pub Date: 2023-03-30 DOI: 10.3390/hardware1010001
Peter C. Hauser
Hardware (ISSN 2813-6640) [...]
Citations: 0