Empowering digital twins with eXtended reality collaborations
Lorenzo Stacchio, Alessia Angeli, Gustavo Marfia
Virtual Reality Intelligent Hardware, 4(6), pp. 487-505, December 2022. DOI: 10.1016/j.vrih.2022.06.004
Abstract
Background
Advances in Artificial Intelligence, Big Data Analytics, and the Internet of Things have paved the way for the emergence and use of Digital Twins (DTs) as technologies that “twin” the life of a physical entity in fields ranging from industry to healthcare. At the same time, the advent of eXtended Reality (XR) in industrial and consumer electronics has provided novel paradigms that may be put to good use to visualize and interact with DTs. XR technologies can support human-to-human interactions for training and remote assistance and could transform DTs into collaborative intelligence tools.
Methods
Here we present the Human Collaborative Intelligence empowered Digital Twin framework (HCLINT-DT), which integrates human annotations (e.g., textual and vocal) to create an all-in-one-place resource that preserves such knowledge. This framework could be adopted in many fields, helping users learn how to carry out an unfamiliar process or explore others’ past experiences.
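To make the annotation idea concrete, the sketch below shows one minimal way textual and vocal annotations could be attached to a digital-twin entity. It is a hypothetical illustration, not the paper's implementation: the class and field names (Annotation, TwinEntity, kind, content) are assumptions introduced here.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional


@dataclass
class Annotation:
    """A human annotation attached to a digital-twin entity.

    `kind` is either "text" or "voice"; a voice annotation stores a
    reference to its audio clip rather than the raw bytes.
    """
    author: str
    kind: str                      # "text" | "voice"
    content: str                   # text body, or a path/URL to an audio clip
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class TwinEntity:
    """A physical asset mirrored by the digital twin, with its annotations."""
    entity_id: str
    annotations: List[Annotation] = field(default_factory=list)

    def annotate(self, author: str, kind: str, content: str) -> Annotation:
        """Record a new annotation and keep it with the entity."""
        note = Annotation(author=author, kind=kind, content=content)
        self.annotations.append(note)
        return note

    def history(self, kind: Optional[str] = None) -> List[Annotation]:
        """Return past annotations, optionally filtered by kind."""
        return [a for a in self.annotations if kind is None or a.kind == kind]


# Example: a technician leaves a voice note on a machine's twin,
# and a trainee later browses the accumulated knowledge.
press = TwinEntity(entity_id="press-07")
press.annotate("technician-01", "voice", "audio/press-07/calibration-note.wav")
press.annotate("trainee-02", "text", "Calibration steps confirmed.")
for note in press.history():
    print(note.author, note.kind, note.content)
```

In this reading, the "all-in-one-place resource" is simply the list of annotations accumulated on each twinned entity, which later users can query by author, time, or modality.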
Results
The framework was assessed by implementing a DT that supports human annotations, reflected both in the physical world (Augmented Reality) and in the virtual one (Virtual Reality).
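One simple way to keep the AR and VR views consistent is to broadcast each new annotation to both; the sketch below illustrates that pattern under assumed names (AnnotationBus, ar_overlay, vr_panel), which are not taken from the paper.

```python
from typing import Callable, List

# Hypothetical view callbacks standing in for AR and VR clients.
Listener = Callable[[str, str], None]


class AnnotationBus:
    """Broadcasts each new annotation to every registered view (AR or VR)."""

    def __init__(self) -> None:
        self._listeners: List[Listener] = []

    def subscribe(self, listener: Listener) -> None:
        self._listeners.append(listener)

    def publish(self, entity_id: str, content: str) -> None:
        for listener in self._listeners:
            listener(entity_id, content)


def ar_overlay(entity_id: str, content: str) -> None:
    # A real AR client would anchor the note onto the physical asset.
    print(f"[AR] overlay on {entity_id}: {content}")


def vr_panel(entity_id: str, content: str) -> None:
    # A real VR client would render the note inside the virtual replica.
    print(f"[VR] panel for {entity_id}: {content}")


bus = AnnotationBus()
bus.subscribe(ar_overlay)
bus.subscribe(vr_panel)
bus.publish("press-07", "Check hydraulic pressure before start-up.")
```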
Conclusions
The outcomes of the interface-design assessment confirm interest in developing HCLINT-DT-based applications. Finally, we evaluated how the proposed framework could be translated into a manufacturing context.