Virtual Reality Intelligent Hardware: Latest Articles

Publisher’s Note: Hardware—A New Open Access Journal
Virtual Reality Intelligent Hardware. Pub Date: 2023-03-30. DOI: 10.3390/hardware1010002
Liliane Auwerter
Abstract: The development of new hardware has never been as accessible as it is today [...]
Citations: 0
A Transformer Architecture based mutual attention for Image Anomaly Detection
Virtual Reality Intelligent Hardware, Vol. 5, No. 1, pp. 57-67. Pub Date: 2023-02-01. DOI: 10.1016/j.vrih.2022.07.006
Mengting Zhang, Xiuxia Tian
Abstract: Image anomaly detection is a popular task in computer graphics that is widely used in industrial fields. Previous works addressing this problem often train CNN-based models (e.g., Auto-Encoders, GANs) to reconstruct covered parts of input images and calculate the difference between the input and the reconstructed image. However, convolutional operations are good at extracting local features, which makes it difficult to identify larger image anomalies. To this end, we propose a transformer architecture based on mutual attention for image anomaly separation. This architecture can capture long-term dependencies and fuse local features with global features to facilitate better image anomaly detection. Our method was extensively evaluated on several benchmarks; experimental results showed that it improved detection capability by 3.1% and localization capability by 1.0% compared with state-of-the-art reconstruction-based methods.
Citations: 0
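The paper's exact architecture is not given in the abstract; as a rough, hypothetical illustration of the mutual-attention idea it describes (local CNN-style patch features attending over global transformer-style tokens so each patch sees long-range context), a minimal numpy sketch:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(local_feats, global_feats):
    """Scaled dot-product attention where local features query global
    ones. Shapes: local (n_local, d), global (n_global, d)."""
    d = local_feats.shape[-1]
    scores = local_feats @ global_feats.T / np.sqrt(d)  # (n_local, n_global)
    weights = softmax(scores, axis=-1)                  # rows sum to 1
    return weights @ global_feats                       # fused (n_local, d)

# Toy usage: 4 local patch features attend over 6 global tokens.
rng = np.random.default_rng(0)
local_f = rng.normal(size=(4, 8))
global_f = rng.normal(size=(6, 8))
fused = cross_attention(local_f, global_f)
```

The learned query/key/value projections of a real transformer block are omitted here; the point is only the fusion of a local feature set with a global one.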
View Interpolation Networks for Reproducing Material Appearance of Specular Objects
Virtual Reality Intelligent Hardware, Vol. 5, No. 1, pp. 1-10. Pub Date: 2023-02-01. DOI: 10.1016/j.vrih.2022.11.001
Chihiro Hoshizawa, Takashi Komuro
Abstract: In this study, we propose view interpolation networks to reproduce changes in the brightness of an object's surface depending on the viewing direction, which is important for reproducing the material appearance of a real object. We use an original and a modified version of U-Net for image transformation. The networks were trained to generate images from intermediate viewpoints of four cameras placed at the corners of a square. We conducted an experiment with three different combinations of methods and training-data formats, and found that it is best to input the coordinates of the viewpoints together with the four camera images, and to use images from random viewpoints as the training data.
Citations: 0
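To make the four-corner-camera setup concrete, here is the naive non-learned baseline such networks improve upon: blending the corner images with bilinear weights derived from the intermediate viewpoint (u, v) in the unit square. This is a hypothetical sketch of the input/output format only (four corner views plus viewpoint coordinates in, one view out), not the paper's U-Net method:

```python
import numpy as np

def interpolate_views(corner_images, u, v):
    """Bilinearly blend four corner-camera images for a viewpoint
    (u, v) in [0, 1]^2; (0, 0) is the top-left camera."""
    tl, tr, bl, br = corner_images  # each of shape (H, W) or (H, W, C)
    w = [(1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v]
    return w[0] * tl + w[1] * tr + w[2] * bl + w[3] * br

# Usage: four 2x2 single-channel "images" with constant values 0..3.
views = [np.full((2, 2), float(i)) for i in range(4)]
mid = interpolate_views(views, 0.5, 0.5)  # center viewpoint: mean of views
```

Simple blending like this cannot reproduce view-dependent specular highlights, which is precisely why the paper trains networks for the task.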
Unrolling Rain-guided Detail Recovery Network for Single Image Deraining
Virtual Reality Intelligent Hardware, Vol. 5, No. 1, pp. 11-23. Pub Date: 2023-02-01. DOI: 10.1016/j.vrih.2022.06.002
Kailong Lin, Shaowei Zhang, Yu Luo, Jie Ling
Abstract: Owing to the rapid development of deep networks, single image deraining has achieved significant progress. Various architectures have been designed to remove rain recursively or directly, and existing methods can remove most rain streaks. However, many of them lose detail during deraining, resulting in visual artifacts. To resolve this detail-loss issue, we propose a novel unrolling rain-guided detail recovery network (URDRN) for single image deraining, based on the observation that the most degraded areas of the background image tend to be the most rain-corrupted regions. Furthermore, to address the problem that most existing deep-learning-based methods trivialize the observation model and simply learn an end-to-end mapping, the proposed URDRN unrolls the single image deraining task into two subproblems: rain extraction and detail recovery. Specifically, a context aggregation attention network is first introduced to extract rain streaks effectively, and a rain attention map is then generated as an indicator to guide the detail-recovery process. For the detail-recovery sub-network, with the guidance of the rain attention map, a simple encoder-decoder model is sufficient to recover the lost details. Experiments on several well-known benchmark datasets show that the proposed approach achieves performance competitive with other state-of-the-art methods.
Citations: 1
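The two-subproblem unrolling described in the abstract can be sketched as a toy pipeline. Everything below is a stand-in: the "rain extractor" is a crude brightness heuristic in place of the paper's context aggregation attention network, and the "detail recovery" step is a hand-written rule in place of its encoder-decoder; only the control flow (extract rain, build an attention map, recover details under its guidance) mirrors the paper:

```python
import numpy as np

def box_blur(img, k=3):
    # Mean filter with edge padding; stands in for any smoothing operator.
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def extract_rain(img):
    # Subproblem 1 (stand-in): treat bright high-frequency residue as rain.
    rain = np.clip(img - box_blur(img), 0.0, None)
    attention = rain / (rain.max() + 1e-8)  # rain attention map in [0, 1]
    return rain, attention

def derain(img):
    rain, att = extract_rain(img)
    coarse = img - rain                     # remove the extracted rain layer
    detail = img - box_blur(img)            # high-frequency content
    # Subproblem 2 (stand-in): re-add detail only where the map says "clean".
    return np.clip(coarse + (1.0 - att) * detail, 0.0, None)

rainy = np.zeros((8, 8))
rainy[:, 3] = 1.0                           # a synthetic vertical rain streak
clean = derain(rainy)
```

In the real URDRN both stages are learned networks; the rain attention map plays the same role as here, gating where detail is restored.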
IAACS: Image Aesthetic Assessment Through Color Composition And Space Formation
Virtual Reality Intelligent Hardware, Vol. 5, No. 1, pp. 42-56. Pub Date: 2023-02-01. DOI: 10.1016/j.vrih.2022.06.006
Bailin Yang, Changrui Zhu, Frederick W.B. Li, Tianxiang Wei, Xiaohui Liang, Qingxu Wang
Abstract: Judging whether an image is visually appealing is a complicated and subjective task, which strongly motivates machine learning models that automatically evaluate image aesthetics in agreement with the general public. Although deep learning methods have been successful at learning good visual features from images, correctly assessing image aesthetic quality remains challenging. To tackle this, we propose IAACS, a novel multi-view convolutional neural network that assesses image aesthetics by analyzing image color composition and space formation. Specifically, from different views of an image, including its key color components with their contributions, its space formation, and the image itself, our network extracts the corresponding features through our proposed feature extraction module (FET) and an ImageNet-weight-based classification model. By fusing the extracted features, our network produces an accurate prediction of the image's aesthetic score distribution. Experimental results show that our model achieves superior performance.
Citations: 0
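One of the "views" the abstract mentions is the image's key color components with their contributions. The paper's exact extraction procedure is not given here; as a simple hypothetical stand-in, dominant colors can be obtained by coarse color quantization:

```python
import numpy as np

def key_colors(img, n=3, levels=4):
    """Return the n most frequent quantized colors of an RGB uint8
    image and their pixel-share contributions (a crude stand-in for
    IAACS's color-composition analysis)."""
    q = (img.astype(int) // (256 // levels)).reshape(-1, 3)  # quantize channels
    codes = q[:, 0] * levels**2 + q[:, 1] * levels + q[:, 2]  # one id per color
    counts = np.bincount(codes, minlength=levels**3)
    top = np.argsort(counts)[::-1][:n]                        # most frequent first
    contrib = counts[top] / counts.sum()                      # fraction of pixels
    return top, contrib

# Usage: a synthetic image that is 75% pure red and 25% pure blue.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = 255
img[3, :, :] = (0, 0, 255)
codes, contrib = key_colors(img)
```

A model in the IAACS spirit would feed such per-view descriptors, alongside features of the full image, into a fusion stage that predicts the aesthetic score distribution.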
COVAD: Content-Oriented Video Anomaly Detection using a Self-Attention based Deep Learning Model
Virtual Reality Intelligent Hardware, Vol. 5, No. 1, pp. 24-41. Pub Date: 2023-02-01. DOI: 10.1016/j.vrih.2022.06.001
Wenhao Shao, Praboda Rajapaksha, Yanyan Wei, Dun Li, Noel Crespi, Zhigang Luo
Abstract: Video anomaly detection is a long-standing topic that continues to attract increasing attention. Many existing methods process the entire video rather than considering only the significant context. This paper proposes COVAD, a novel video anomaly detection method that focuses mainly on the regions of interest in the video instead of the entire video. COVAD is based on an auto-encoded convolutional neural network with a coordinate attention mechanism, which can effectively capture meaningful objects in the video and the dependencies between different objects. Building on an existing memory-guided video frame prediction network, our algorithm can more effectively predict the future motion and appearance of objects in the video. The proposed algorithm obtained better experimental results on multiple datasets and outperformed the baseline models considered in our analysis. We also introduce an improved visual test that can provide pixel-level anomaly explanations.
Citations: 0
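The coordinate attention mechanism the abstract refers to factorizes spatial pooling along the two axes so the attention weights retain positional information. A much-simplified numpy sketch of that factorized pooling (the learned 1x1 convolutions of the published coordinate-attention module, and COVAD's own layers, are omitted):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(feat):
    """Reweight a (C, H, W) feature map with two axis-wise attention
    factors: one pooled along width (keeps height positions) and one
    pooled along height (keeps width positions)."""
    pool_h = feat.mean(axis=2)           # (C, H): pooled along width
    pool_w = feat.mean(axis=1)           # (C, W): pooled along height
    a_h = sigmoid(pool_h)[:, :, None]    # (C, H, 1) attention over rows
    a_w = sigmoid(pool_w)[:, None, :]    # (C, 1, W) attention over columns
    return feat * a_h * a_w

feat = np.random.default_rng(1).normal(size=(2, 5, 7))
out = coordinate_attention(feat)
```

Because each factor keeps one spatial coordinate, the module can emphasize object regions, which is what lets a COVAD-style model concentrate on regions of interest rather than whole frames.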
A Point Cloud Upsampling Adversarial Network Based on Residual Multi-Scale Off-Set Attention
Virtual Reality Intelligent Hardware, Vol. 5, No. 1, pp. 81-91. Pub Date: 2023-02-01. DOI: 10.1016/j.vrih.2022.08.016
Bin Shen, Li Li, Xinrong Hu, Shengyi Guo, Jin Huang, Zhiyao Liang
Abstract: Owing to the working principles of 3D scanning equipment, scanned point clouds are usually sparse and unevenly distributed. In this paper, we propose a new Generative Adversarial Network (GAN) for point cloud upsampling, extended from PU-GAN. Its core design replaces the traditional Self-Attention (SA) module with an implicit Laplacian Off-Set Attention (OA) module, and aggregates adjacency features using a Multi-Scale Off-Set Attention (MSOA) module, which adaptively adjusts the receptive field to learn various structural features. Finally, residual links are added to form our Residual Multi-Scale Off-Set Attention (RMSOA) module, which exploits multi-scale structural relationships to generate finer details. Extensive experiments show that our method outperforms existing methods and that our model is highly robust.
Citations: 2
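The off-set attention idea the abstract builds on replaces the attention output with the *offset* between the input features and that output, which behaves like a graph-Laplacian term, before a residual link. A minimal sketch under simplifying assumptions (the learned linear projections are replaced by the identity; the multi-scale and GAN machinery of the paper is not shown):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def offset_attention(feats):
    """Offset attention over per-point features of shape (N, d):
    self-attention, then propagate input-minus-attention (the offset)
    through a residual connection."""
    d = feats.shape[-1]
    scores = feats @ feats.T / np.sqrt(d)   # (N, N) pairwise similarities
    att = softmax(scores, axis=-1) @ feats  # attention-aggregated features
    offset = feats - att                    # Laplacian-like offset term
    return feats + offset                   # residual link

pts = np.random.default_rng(2).normal(size=(16, 8))  # 16 points, 8-dim features
out = offset_attention(pts)
```

In the paper, such modules are stacked at multiple scales (MSOA) and wrapped in residual links (RMSOA) inside a PU-GAN-style generator.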
Metaverse Virtual Social Center for the Elderly Communication During the Social Distancing
Virtual Reality Intelligent Hardware, Vol. 5, No. 1, pp. 68-80. Pub Date: 2023-02-01. DOI: 10.1016/j.vrih.2022.07.007
Hui Liang, Jiupeng Li, Yi Wang, Junjun Pan, Yazhou Zhang, Xiaohang Dong
Abstract: Older adults who cannot take part in social activities for physical reasons can feel lonely and become prone to depression, and the spread of COVID-19 made even the few remaining social activities difficult to sustain, deepening that loneliness. The metaverse is a virtual world that mirrors reality; it allows the elderly to escape physical constraints and socialize stably and continuously, offering new ideas for alleviating their loneliness. Based on an analysis of the needs of the elderly, this study proposes a virtual social center framework for the elderly and a prototype system designed around that framework, in which older adults socialize in virtual reality using metaverse-related technologies and human-computer interaction tools. A test was conducted jointly with the chief physician of the geriatric rehabilitation department of a tertiary hospital. The results showed that the mental state of the elderly who used the virtual social center was significantly better than that of those who had not, demonstrating that virtual social centers can alleviate loneliness and depression in older adults. As the global epidemic is normalized and the population ages, such centers are therefore worth promoting.
Citations: 5
Building the metaverse using digital twins at all scales, states, and relations
Virtual Reality Intelligent Hardware, Vol. 4, No. 6, pp. 459-470. Pub Date: 2022-12-01. DOI: 10.1016/j.vrih.2022.06.005
Zhihan Lv, Shuxuan Xie, Yuxi Li, M. Shamim Hossain, Abdulmotaleb El Saddik
Abstract: Developments in new-generation information technology have enabled Digital Twins to reshape the physical world into a virtual digital space and provide technical support for constructing the Metaverse. Metaverse objects can be at the micro-, meso-, or macroscale, and the Metaverse is a complex collection of solid, liquid, gaseous, plasma, and other uncertain states. Additionally, the Metaverse integrates tangibles with social relations, both interpersonal (friends, partners, and family) and societal (ethics, morality, and law). This review introduces principles and laws, such as broken windows theory, the small-world phenomenon, survivor bias, and herd behavior, for constructing a Digital Twins model of social relations. From multiple perspectives, it then reviews mappings of tangible and intangible real-world objects to the Metaverse using the Digital Twins model.
Open access PDF: https://www.sciencedirect.com/science/article/pii/S2096579622000602/pdf?md5=a21672be799f10764afb283c622bf66e&pid=1-s2.0-S2096579622000602-main.pdf
Citations: 23