Title: A Modular Deep Learning Framework for Scene Understanding in Augmented Reality Applications
Authors: Vladislav Li, B. Villarini, Jean-Christophe Nebel, Argyriou Vasileios
Venue: 2023 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT), 13 July 2023
DOI: https://doi.org/10.1109/IAICT59002.2023.10205667
Abstract
Taking natural images and videos as input, augmented reality (AR) applications aim to enhance the real world with superimposed digital content, enabling interaction between the user and the environment. One important step in this process is automatic scene analysis and understanding, which should be performed both in real time and with a high level of object recognition accuracy. In this work, an end-to-end framework based on the combination of a Super Resolution network with a detection and recognition deep network has been proposed to increase performance and reduce processing time. This novel approach has been evaluated on two different datasets: the popular COCO dataset, whose real images are used for benchmarking many different computer vision tasks, and a generated dataset with synthetic images recreating a variety of environmental, lighting, and acquisition conditions. The evaluation analysis is focused on small objects, which are more challenging to correctly detect and recognise. The results show that the Average Precision of the proposed end-to-end approach is higher for small and low-resolution objects in most of the selected conditions.
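To make the described architecture concrete, the sketch below composes a Super Resolution module with a detection network in a single forward pass. It is a minimal illustration only: the abstract does not name the specific networks, so the ESPCN-style upscaler and the torchvision Faster R-CNN detector used here are assumptions, not the models evaluated in the paper.

```python
# Hypothetical sketch of the end-to-end idea: upscale the input with a
# Super Resolution module, then run a detector on the enhanced image.
# The concrete choices (ESPCN-style SR, torchvision Faster R-CNN) are
# illustrative assumptions, not the networks used in the paper.
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn


class SimpleSR(nn.Module):
    """Minimal ESPCN-style super-resolution block (assumed stand-in)."""

    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3 * scale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a larger image
        )

    def forward(self, x):
        return self.body(x).clamp(0.0, 1.0)


class SRDetectionPipeline(nn.Module):
    """SR network feeding a detection/recognition network end to end."""

    def __init__(self):
        super().__init__()
        self.sr = SimpleSR(scale=2)
        # weights=None: randomly initialised detector (torchvision >= 0.13 API)
        self.detector = fasterrcnn_resnet50_fpn(weights=None, num_classes=91)

    def forward(self, images, targets=None):
        # Enhance each low-resolution frame before detection.
        enhanced = [self.sr(img.unsqueeze(0)).squeeze(0) for img in images]
        return self.detector(enhanced, targets)


if __name__ == "__main__":
    model = SRDetectionPipeline().eval()
    with torch.no_grad():
        # A single low-resolution input frame, e.g. a small AR camera crop.
        preds = model([torch.rand(3, 120, 160)])
    print(preds[0]["boxes"].shape)  # detected boxes on the upscaled image
```

Under this reading, the evaluation reported in the abstract would compare Average Precision (particularly for small objects) with and without the SR stage on COCO and on the synthetic dataset.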