Virtual Reality Intelligent Hardware: Latest Articles

Outliers rejection in similar image matching
Virtual Reality Intelligent Hardware Pub Date : 2023-04-01 DOI: 10.1016/j.vrih.2023.02.004
Qingqing Chen, Junfeng Yao
{"title":"Outliers rejection in similar image matching","authors":"Qingqing Chen ,&nbsp;Junfeng Yao","doi":"10.1016/j.vrih.2023.02.004","DOIUrl":"https://doi.org/10.1016/j.vrih.2023.02.004","url":null,"abstract":"<div><h3>Background</h3><p>Image matching is crucial in numerous computer vision tasks such as 3D reconstruction and simultaneous visual localization and mapping. The accuracy of the matching significantly impacted subsequent studies. Because of their local similarity, when image pairs contain comparable patterns but feature pairs are positioned differently, incorrect recognition can occur as global motion consistency is disregarded.</p></div><div><h3>Methods</h3><p>This study proposes an image-matching filtering algorithm based on global motion consistency. It can be used as a subsequent matching filter for the initial matching results generated by other matching algorithms based on the principle of motion smoothness. A particular matching algorithm can first be used to perform the initial matching; then, the rotation and movement information of the global feature vectors are combined to effectively identify outlier matches. The principle is that if the matching result is accurate, the feature vectors formed by any matched point should have similar rotation angles and moving distances. Thus, global motion direction and global motion distance consistencies were used to reject outliers caused by similar patterns in different locations.</p></div><div><h3>Results</h3><p>Four datasets were used to test the effectiveness of the proposed method. Three datasets with similar patterns in different locations were used to test the results for similar images that could easily be incorrectly matched by other algorithms, and one commonly used dataset was used to test the results for the general image-matching problem. The experimental results suggest that the proposed method is more accurate than other state-of-the-art algorithms in identifying mismatches in the initial matching set.</p></div><div><h3>Conclusions</h3><p>The proposed outlier rejection matching method can significantly improve the matching accuracy for similar images with locally similar feature pairs in different locations and can provide more accurate matching results for subsequent computer vision tasks.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 2","pages":"Pages 171-187"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49866057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
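
The global motion-consistency test described in the abstract above lends itself to a compact implementation. The sketch below is not the authors' code: the median-based global estimates and the two thresholds are assumptions, chosen only to illustrate rejecting matches whose displacement direction or distance deviates from the global trend.

```python
import numpy as np

def reject_outliers(src_pts, dst_pts, angle_tol=0.3, dist_tol=0.2):
    """Filter putative matches by global motion consistency.

    src_pts, dst_pts: (N, 2) arrays of matched keypoint coordinates.
    A correct match should move in roughly the same direction and by
    roughly the same distance as the global (median) motion.
    """
    motion = dst_pts - src_pts                       # per-match displacement
    angles = np.arctan2(motion[:, 1], motion[:, 0])  # motion direction
    dists = np.linalg.norm(motion, axis=1)           # motion distance

    # Robust global estimates: the median is insensitive to outliers.
    med_angle = np.median(angles)
    med_dist = np.median(dists)

    # Wrap angular differences into [-pi, pi] before thresholding.
    d_angle = np.abs(np.arctan2(np.sin(angles - med_angle),
                                np.cos(angles - med_angle)))
    d_dist = np.abs(dists - med_dist) / (med_dist + 1e-8)

    return (d_angle < angle_tol) & (d_dist < dist_tol)  # boolean inlier mask
```

A typical use would be to feed in the point pairs surviving an initial descriptor-based matching step (e.g., SIFT plus a ratio test) and keep only the rows where the returned mask is True.
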
Adaptive spatio-temporal attention neural network for cross-database micro-expression recognition
Virtual Reality Intelligent Hardware Pub Date : 2023-04-01 DOI: 10.1016/j.vrih.2022.03.006
Yuhan RAN
{"title":"Adaptive spatio-temporal attention neural network for crossdatabase micro-expression recognition","authors":"Yuhan RAN","doi":"10.1016/j.vrih.2022.03.006","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.03.006","url":null,"abstract":"<div><h3>Background</h3><p>The use of micro-expression recognition to recognize human emotions is one of the most critical challenges in human-computer interaction applications. In recent years, cross-database micro-expression recognition (CDMER) has emerged as a significant challenge in micro-expression recognition and analysis. Because the training and testing data in CDMER come from different micro-expression databases, CDMER is more challenging than conventional micro-expression recognition.</p></div><div><h3>Methods</h3><p>In this paper, an adaptive spatio-temporal attention neural network (ASTANN) using an attention mechanism is presented to address this challenge. To this end, the micro-expression databases SMIC and CASME II are first preprocessed using an optical flow approach, which extracts motion information among video frames that represent discriminative features of micro-expression. After preprocessing, a novel adaptive framework with a spatiotemporal attention module was designed to assign spatial and temporal weights to enhance the most discriminative features. The deep neural network then extracts the cross-domain feature, in which the second-order statistics of the sample features in the source domain are aligned with those in the target domain by minimizing the correlation alignment (CORAL) loss such that the source and target databases share similar distributions.</p></div><div><h3>Results</h3><p>To evaluate the performance of ASTANN, experiments were conducted based on the SMIC and CASME II databases under the standard experimental evaluation protocol of CDMER. The experimental results demonstrate that ASTANN outperformed other methods in relevant crossdatabase tasks.</p></div><div><h3>Conclusions</h3><p>Extensive experiments were conducted on benchmark tasks, and the results show that ASTANN has superior performance compared with other approaches. This demonstrates the superiority of our method in solving the CDMER problem.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 2","pages":"Pages 142-156"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49891619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
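
The CORAL loss named in the Methods has a standard published form: the squared Frobenius distance between the source and target feature covariances, scaled by 4d². The PyTorch sketch below shows that loss on its own, independent of the ASTANN architecture.

```python
import torch

def coral_loss(source, target):
    """CORAL loss: squared Frobenius distance between the feature
    covariances of a source batch and a target batch.

    source: (n, d) feature matrix; target: (m, d) feature matrix.
    """
    d = source.size(1)

    def covariance(x):
        x = x - x.mean(dim=0, keepdim=True)   # center the features
        return (x.t() @ x) / (x.size(0) - 1)  # unbiased covariance

    c_s = covariance(source)
    c_t = covariance(target)
    return ((c_s - c_t) ** 2).sum() / (4.0 * d * d)
```

In domain-adaptation training this term is typically added, with a weighting factor, to the ordinary classification loss so that the network learns features whose second-order statistics match across databases.
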
Web-based Mixed Reality Video Fusion with Remote Rendering
Virtual Reality Intelligent Hardware Pub Date : 2023-04-01 DOI: 10.1016/j.vrih.2022.03.005
Qiang Zhou, Zhong Zhou
{"title":"Web-based Mixed Reality Video Fusion with Remote Rendering","authors":"Qiang Zhou,&nbsp;Zhong Zhou","doi":"10.1016/j.vrih.2022.03.005","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.03.005","url":null,"abstract":"<div><p>Mixed Reality (MR) video fusion system fuses video imagery with 3D scenes. It makes the scene much more realistic and helps the users understand the video contents and temporalspatial correlation between them, thus reducing the user’s cognitive load. Nowadays, MR video fusion has been used in various applications. However, video fusion systems require powerful client machines because video streaming delivery, stitching, and rendering are computation-intensive. Moreover, huge bandwidth usage is also another critical factor that affects the scalability of video fusion systems. The framework proposed in this paper overcomes this client limitation by utilizing remote rendering. Furthermore, the framework we built is based on browsers. Therefore, the user could try the MR video fusion system with a laptop or even pad, no extra plug-ins or application programs need to be installed. Several experiments on diverse metrics demonstrate the effectiveness of the proposed framework.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 2","pages":"Pages 188-199"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49891615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
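
As a rough illustration of the remote-rendering idea only (this is not the paper's framework; the endpoint name and the JPEG-over-WebSocket transport are assumptions), a server can render each frame, encode it, and push it to the browser, which merely decodes and displays:

```python
import asyncio
import io

import numpy as np
from aiohttp import web
from PIL import Image

async def stream(request):
    """Push remotely rendered frames to the browser over a WebSocket."""
    ws = web.WebSocketResponse()
    await ws.prepare(request)
    while not ws.closed:
        # Placeholder for the server-side render of the fused 3D scene.
        frame = np.full((360, 640, 3), 128, dtype=np.uint8)
        buf = io.BytesIO()
        Image.fromarray(frame).save(buf, format="JPEG", quality=70)
        await ws.send_bytes(buf.getvalue())  # client draws it on a <canvas>
        await asyncio.sleep(1 / 30)          # pace at roughly 30 fps
    return ws

app = web.Application()
app.add_routes([web.get("/stream", stream)])
# web.run_app(app, port=8080)  # browser side: new WebSocket("ws://host:8080/stream")
```

The design point this mirrors is that all heavy work (rendering, encoding) stays server-side, so the client needs only a browser capable of decoding JPEG frames.
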
Compression of Surface Texture Acceleration Signal Based on Spectrum Characteristics
Virtual Reality Intelligent Hardware Pub Date : 2023-04-01 DOI: 10.1016/j.vrih.2022.01.006
Dongyan Nie, Xiaoying Sun
{"title":"Compression of Surface Texture Acceleration Signal Based on Spectrum Characteristics","authors":"Dongyan Nie ,&nbsp;Xiaoying Sun","doi":"10.1016/j.vrih.2022.01.006","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.01.006","url":null,"abstract":"<div><h3>Background</h3><p>Adequate-data collection could enhance the realism of surface texture haptic online-rendering or offline-playback. A parallel challenge is how to reduce communication delays and improve storage space utilization.</p></div><div><h3>Methods</h3><p>Based on the similarity of the short-term amplitude spectrumtrend, this paper proposes a frequency-domain compression method. A compression framework is designed, firstly to map the amplitude spectrum into a trend similarity grayscale image, compress it with the stillpicture-compression method, and then to adaptively encode the maximum amplitude and part of the initial phase of each time-window, achieving the final compression.</p></div><div><h3>Results</h3><p>The comparison between the original signal and the recovered signal shows that when the time-frequency similarity is 90%, the average compression ratio of our method is 9.85% in the case of a single interact point. The subjective score for the similarity reached an excellent level, with an average score of 87.85.</p></div><div><h3>Conclusions</h3><p>Our method can be used for offline compression of vibrotactile data. For the case of multi-interact points in space, the trend similarity grayscale image can be reused, and the compression ratio is further reduced.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 2","pages":"Pages 110-123"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49891620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
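
A minimal sketch of the windowed frequency-domain encoding the Methods describes is below. The window length, 8-bit quantization, and the choice of which phase value to keep are assumptions, and the still-picture codec stage (which would compress the grayscale image further) is omitted.

```python
import numpy as np

def compress_windows(signal, win=256):
    """Frequency-domain compression sketch for a vibrotactile signal.

    Per time window, keep: the max-normalized amplitude spectrum
    quantized to 8 bits (columns of the 'trend similarity grayscale
    image'), the window's maximum amplitude, and partial phase info.
    """
    n = len(signal) // win
    frames = signal[: n * win].reshape(n, win)
    spec = np.fft.rfft(frames, axis=1)
    amp = np.abs(spec)
    max_amp = amp.max(axis=1, keepdims=True)        # per-window scale factor
    gray = np.round(255 * amp / (max_amp + 1e-12))  # grayscale image rows
    phase0 = np.angle(spec[:, 1])                   # stand-in for "part of the initial phase"
    return gray.astype(np.uint8), max_amp.ravel(), phase0
```

In the paper's framework, the `gray` array would then pass through a still-picture compressor, while `max_amp` and the phase values are encoded adaptively alongside it.
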
MSSTNet: Multi-scale facial videos pulse extraction network based on separable spatiotemporal convolution and dimension separable attention
Virtual Reality Intelligent Hardware Pub Date : 2023-04-01 DOI: 10.1016/j.vrih.2022.07.001
Changchen Zhao, Hongsheng Wang, Yuanjing Feng
{"title":"MSSTNet: Multi-scale facial videos pulse extraction network based on separable spatiotemporal convolution and dimension separable attention","authors":"Changchen Zhao ,&nbsp;Hongsheng Wang ,&nbsp;Yuanjing Feng","doi":"10.1016/j.vrih.2022.07.001","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.07.001","url":null,"abstract":"<div><h3>Background</h3><p>Using remote photoplethysmography (rPPG) to estimate blood volume pulse in a non-contact way is an active research topic in recent years. Existing methods are mainly based on the single-scale region of interest (ROI). However, some noise signals that are not easily separated in single-scale space can be easily separated in multi-scale space. In addition, existing spatiotemporal networks mainly focus on local spatiotemporal information and lack emphasis on temporal information which is crucial in pulse extraction problems, resulting in insufficient spatiotemporal feature modeling.</p></div><div><h3>Methods</h3><p>This paper proposes a multi-scale facial video pulse extraction network based on separable spatiotemporal convolution and dimension separable attention. First, in order to solve the problem of single-scale ROI, we construct a multi-scale feature space for initial signal separation. Secondly, separable spatiotemporal convolution and dimension separable attention are designed for efficient spatiotemporal correlation modeling, which increases the information interaction between long-span time and space dimensions and puts more emphasis on temporal features.</p></div><div><h3>Results</h3><p>The signal-to-noise ratio (SNR) of the proposed network reaches 9.58 dB on the PURE dataset and 6.77 dB on the UBFC-rPPG dataset, which outperforms state-of-the-art algorithms.</p></div><div><h3>Conclusions</h3><p>Results show that fusing multi-scale signals generally obtains better results than methods based on the only single-scale signal. The proposed separable spatiotemporal convolution and dimension separable attention mechanism contributes to more accurate pulse signal extraction.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 2","pages":"Pages 124-141"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49891621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
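
The separable spatiotemporal convolution can be illustrated by factorizing a 3D convolution into a per-frame spatial step followed by a per-pixel temporal step. The PyTorch sketch below is in that spirit; the layer sizes are illustrative, not MSSTNet's actual configuration.

```python
import torch
import torch.nn as nn

class SeparableSpatioTemporalConv(nn.Module):
    """Factorized 3D convolution: spatial (1, k, k) then temporal (k, 1, 1)."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        p = k // 2
        # Spatial step: convolve within each frame, no mixing across time.
        self.spatial = nn.Conv3d(in_ch, out_ch, (1, k, k), padding=(0, p, p))
        # Temporal step: convolve across frames at each spatial location.
        self.temporal = nn.Conv3d(out_ch, out_ch, (k, 1, 1), padding=(p, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, time, height, width)
        return self.act(self.temporal(self.act(self.spatial(x))))

clip = torch.randn(2, 3, 16, 64, 64)              # two 16-frame RGB face clips
feats = SeparableSpatioTemporalConv(3, 32)(clip)
print(feats.shape)                                # torch.Size([2, 32, 16, 64, 64])
```

Compared with a full k×k×k kernel, the factorization cuts parameters and inserts an extra nonlinearity between the spatial and temporal stages, which is one reason such decompositions are popular for video feature extraction.
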
Intelligent Fire Information System Based on 3D GIS
Virtual Reality Intelligent Hardware Pub Date : 2023-04-01 DOI: 10.1016/j.vrih.2022.07.002
Jinxing Hu, Zhihan Lv, Diping Yuan, Bing He, Dongmei Yan
{"title":"Intelligent Fire Information System Based on 3D GIS","authors":"Jinxing Hu ,&nbsp;Zhihan Lv ,&nbsp;Diping Yuan ,&nbsp;Bing He ,&nbsp;Dongmei Yan","doi":"10.1016/j.vrih.2022.07.002","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.07.002","url":null,"abstract":"<div><p>This work aims to build a comprehensive and effective fire emergency management system based on the Internet of Things (IoT) and achieve an actual intelligent fire rescue. A smart fire protection information system was designed based on the IoT. A detailed analysis was conducted on the problem of rescue vehicle scheduling and the evacuation of trapped persons in the process of fire rescue. The intelligent fire visualization platform based on the three-dimensional (3D) Geographic Information Science (GIS) covers project overview, equipment status, equipment classification, equipment alarm information, alarm classification, alarm statistics, equipment account information, and other modules. The live video accessed through the visual interface can clearly identify the stage of the fire, which facilitates the arrangement of rescue equipment and personnel. The vehicle scheduling model in the system primarily used two objective functions to solve the Pareto Non-Dominated Solution Set Optimization: emergency rescue time and the number of vehicles. In addition, an evacuation path optimization method based on the Improved Ant Colony (IAC) algorithm was designed to realize the dynamic optimization of building fire evacuation paths. The experimental results indicate that all the values of detection signals were significantly larger in the smoldering fire scene at t = 17s than the initial value. In addition, the probability of smoldering fire and the probability of open fire were relatively large according to the probability function of the corresponding fire situation, demonstrating that this model could detect fire. The IAC algorithm reported here avoided the passages near the fire and spreading areas as much as possible and took the safety of the trapped persons as the premise when planning the evacuation route. Therefore, the IoT-based fire information system has important value for ensuring fire safety and carrying out emergency rescue and is worthy of popularization and application.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 2","pages":"Pages 93-109"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49866020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
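
For the two scheduling objectives the abstract names (emergency rescue time and number of vehicles, both minimized), the Pareto non-dominated set can be computed as in the sketch below; the candidate plans are made-up data, not results from the paper.

```python
def pareto_front(solutions):
    """Return the non-dominated (time, vehicles) pairs, both minimized.

    A plan is dominated if some other plan is at least as good on both
    objectives and differs from it.
    """
    front = []
    for a in solutions:
        dominated = any(
            b[0] <= a[0] and b[1] <= a[1] and b != a for b in solutions
        )
        if not dominated:
            front.append(a)
    return front

plans = [(12.0, 5), (15.0, 3), (11.0, 7), (15.5, 3), (20.0, 2)]
print(pareto_front(plans))  # [(12.0, 5), (15.0, 3), (11.0, 7), (20.0, 2)]
```

The plan (15.5, 3) drops out because (15.0, 3) is faster with the same fleet; the remaining plans trade rescue time against vehicle count, which is exactly the trade-off a dispatcher would then resolve.
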
Hardware—A New Open Access Journal
Virtual Reality Intelligent Hardware Pub Date : 2023-03-30 DOI: 10.3390/hardware1010001
Peter C. Hauser
{"title":"Hardware—A New Open Access Journal","authors":"Peter C. Hauser","doi":"10.3390/hardware1010001","DOIUrl":"https://doi.org/10.3390/hardware1010001","url":null,"abstract":"Hardware (ISSN 2813-6640) [...]","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"191 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85126224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Publisher’s Note: Hardware—A New Open Access Journal
Virtual Reality Intelligent Hardware Pub Date : 2023-03-30 DOI: 10.3390/hardware1010002
Liliane Auwerter
{"title":"Publisher’s Note: Hardware—A New Open Access Journal","authors":"Liliane Auwerter","doi":"10.3390/hardware1010002","DOIUrl":"https://doi.org/10.3390/hardware1010002","url":null,"abstract":"The development of new hardware has never been as accessible as it is today [...]","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74840471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Transformer Architecture based mutual attention for Image Anomaly Detection
Virtual Reality Intelligent Hardware Pub Date : 2023-02-01 DOI: 10.1016/j.vrih.2022.07.006
Mengting Zhang, Xiuxia Tian
{"title":"A Transformer Architecture based mutual attention for Image Anomaly Detection","authors":"Mengting Zhang,&nbsp;Xiuxia Tian","doi":"10.1016/j.vrih.2022.07.006","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.07.006","url":null,"abstract":"<div><h3>Background</h3><p>Image anomaly detection is a popular task in computer graphics, which is widely used in industrial fields. Previous works that address this problem often train CNN-based (e.g. Auto-Encoder, GANs) models to reconstruct covered parts of input images and calculate the difference between the input and the reconstructed image. However, convolutional operations are good at extracting local features making it difficult to identify larger image anomalies. To this end, we propose a transformer architecture based on mutual attention for image anomaly separation. This architecture can capture long-term dependencies and fuse local features with global features to facilitate better image anomaly detection. Our method was extensively evaluated on several benchmarks, and experimental results showed that it improved detection capability by 3.1% and localization capability by 1.0% compared with state-of-the-art reconstruction-based methods.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 1","pages":"Pages 57-67"},"PeriodicalIF":0.0,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49830409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
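
A minimal sketch of two-way ("mutual") cross-attention between two feature streams, which is the general mechanism the abstract names; the dimensions and the fusion step are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MutualAttention(nn.Module):
    """Each of two token streams attends to the other; results are fused."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.a_to_b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.b_to_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, a, b):  # a, b: (batch, tokens, dim)
        a2, _ = self.a_to_b(query=a, key=b, value=b)  # a attends to b
        b2, _ = self.b_to_a(query=b, key=a, value=a)  # b attends to a
        return self.fuse(torch.cat([a2, b2], dim=-1))

x = torch.randn(1, 196, 256)  # e.g., patch tokens of the input image
y = torch.randn(1, 196, 256)  # e.g., tokens of a reconstruction branch
out = MutualAttention()(x, y)  # (1, 196, 256) fused features
```

Because every query token can attend to every token of the other stream, the comparison is global rather than limited to a convolutional receptive field, which is the property the abstract credits for catching larger anomalies.
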
View Interpolation Networks for Reproducing Material Appearance of Specular Objects
Virtual Reality Intelligent Hardware Pub Date : 2023-02-01 DOI: 10.1016/j.vrih.2022.11.001
Chihiro Hoshizawa, Takashi Komuro
{"title":"View Interpolation Networks for Reproducing Material Appearance of Specular Objects","authors":"Chihiro Hoshizawa,&nbsp;Takashi Komuro","doi":"10.1016/j.vrih.2022.11.001","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.11.001","url":null,"abstract":"<div><p>In this study, we propose view interpolation networks to reproduce changes in the brightness of an object's surface depending on the viewing direction, which is important in reproducing the material appearance of a real object. We use an original and a modified version of U-Net for image transformation. The networks were trained to generate images from intermediate viewpoints of four cameras placed at the corners of a square. We conducted an experiment with three different combinations of methods and training data formats. We found that it is best to input the coordinates of the viewpoints together with the four camera images and to use images from random viewpoints as the training data.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 1","pages":"Pages 1-10"},"PeriodicalIF":0.0,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49830406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
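
One common way to feed viewpoint coordinates "together with the four camera images," as the abstract describes, is to broadcast the normalized coordinates into constant feature maps and concatenate them with the stacked images. The packing below is an assumed scheme for illustration, not necessarily the authors' exact input format.

```python
import torch

def pack_inputs(corner_images, viewpoint):
    """Build a view-interpolation network input.

    corner_images: (4, 3, H, W) tensor, the four corner-camera images.
    viewpoint: (u, v) target coordinates in [0, 1] within the square.
    Returns a (1, 14, H, W) tensor: 12 image channels + 2 coordinate maps.
    """
    _, _, h, w = corner_images.shape
    stacked = corner_images.reshape(1, 12, h, w)   # 4 RGB images -> 12 channels
    u = torch.full((1, 1, h, w), float(viewpoint[0]))
    v = torch.full((1, 1, h, w), float(viewpoint[1]))
    return torch.cat([stacked, u, v], dim=1)

x = pack_inputs(torch.rand(4, 3, 128, 128), (0.25, 0.7))
print(x.shape)  # torch.Size([1, 14, 128, 128])
```

A U-Net-style image-to-image network can then consume this tensor directly, with the constant coordinate planes telling every layer which intermediate viewpoint to synthesize.
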