Anna Sheremetieva, Ihor Romanovych, Sam Frish, M. Maksymenko, Orestis Georgiou
{"title":"我的未来是什么:一个多感官和多模式的数字人类代理互动体验","authors":"Anna Sheremetieva, Ihor Romanovych, Sam Frish, M. Maksymenko, Orestis Georgiou","doi":"10.1145/3573381.3596161","DOIUrl":null,"url":null,"abstract":"This paper describes an interactive multimodal and multisensory fortune-telling experience for digital signage applications that combines digital human agents along with touchless haptic technology and gesture recognition. For the first time, human-to-digital human interaction is mediated through hand gesture input and mid-air haptic feedback, motivating further research into multimodal and multisensory location-based experiences using these and related technologies. We take a phenomenological approach and present our design process, the system architecture, and discuss our gained insights, along with some of the challenges and opportunities we have encountered during this exercise. Finally, we use our singular implementation as a paradigm as a proxy for discussing complex aspects such as privacy, consent, gender neutrality, and the use of digital non-fungible tokens at the phygital border of the metaverse.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"82 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"What’s my future: a Multisensory and Multimodal Digital Human Agent Interactive Experience\",\"authors\":\"Anna Sheremetieva, Ihor Romanovych, Sam Frish, M. Maksymenko, Orestis Georgiou\",\"doi\":\"10.1145/3573381.3596161\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper describes an interactive multimodal and multisensory fortune-telling experience for digital signage applications that combines digital human agents along with touchless haptic technology and gesture recognition. For the first time, human-to-digital human interaction is mediated through hand gesture input and mid-air haptic feedback, motivating further research into multimodal and multisensory location-based experiences using these and related technologies. We take a phenomenological approach and present our design process, the system architecture, and discuss our gained insights, along with some of the challenges and opportunities we have encountered during this exercise. 
Finally, we use our singular implementation as a paradigm as a proxy for discussing complex aspects such as privacy, consent, gender neutrality, and the use of digital non-fungible tokens at the phygital border of the metaverse.\",\"PeriodicalId\":120872,\"journal\":{\"name\":\"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences\",\"volume\":\"82 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3573381.3596161\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3573381.3596161","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
What’s my future: a Multisensory and Multimodal Digital Human Agent Interactive Experience
This paper describes an interactive multimodal and multisensory fortune-telling experience for digital signage applications that combines digital human agents with touchless haptic technology and gesture recognition. For the first time, human-to-digital-human interaction is mediated through hand gesture input and mid-air haptic feedback, motivating further research into multimodal and multisensory location-based experiences using these and related technologies. We take a phenomenological approach, present our design process and system architecture, and discuss the insights we gained, along with some of the challenges and opportunities we encountered during this exercise. Finally, we use our single implementation as a proxy for discussing complex aspects such as privacy, consent, gender neutrality, and the use of digital non-fungible tokens at the phygital border of the metaverse.
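To make the described interaction pattern concrete, the sketch below shows a minimal, hypothetical loop in which recognised hand gestures drive a digital human agent's spoken response and an accompanying mid-air haptic cue. All class, gesture, and pattern names (Gesture, AgentResponse, palm_tingle, etc.) are illustrative stand-ins; this is not the authors' system architecture and does not use any real tracking or haptics SDK.

```python
# Hypothetical sketch of a gesture-in, speech-plus-haptics-out interaction loop.
# Stand-in names only; a real deployment would call a hand-tracking and
# mid-air haptics SDK instead of printing.

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Gesture(Enum):
    PALM_OPEN = auto()   # user presents an open palm to have it "read"
    SWIPE = auto()       # user browses between fortune categories
    PINCH = auto()       # user confirms a selection
    NONE = auto()


@dataclass
class AgentResponse:
    speech: str                     # line spoken by the digital human
    haptic_pattern: Optional[str]   # name of a mid-air haptic sensation to play


def interpret_gesture(gesture: Gesture) -> AgentResponse:
    """Map a recognised gesture to the agent's next utterance and haptic cue."""
    if gesture is Gesture.PALM_OPEN:
        return AgentResponse("Let me read your palm...", "palm_tingle")
    if gesture is Gesture.SWIPE:
        return AgentResponse("Choose a card.", "pulse_left_right")
    if gesture is Gesture.PINCH:
        return AgentResponse("Your future looks bright!", "burst")
    return AgentResponse("Raise your hand when you are ready.", None)


def run_session(gesture_stream) -> None:
    """Drive one kiosk session: consume gestures, emit agent speech and haptics."""
    for gesture in gesture_stream:
        response = interpret_gesture(gesture)
        print(f"[agent says] {response.speech}")
        if response.haptic_pattern:
            # Placeholder for a call into the haptic device's SDK.
            print(f"[haptics]   playing pattern: {response.haptic_pattern}")


if __name__ == "__main__":
    # Simulated gesture sequence standing in for a live hand tracker.
    run_session([Gesture.NONE, Gesture.PALM_OPEN, Gesture.SWIPE, Gesture.PINCH])
```

The point of the sketch is only the coupling the abstract highlights: each gesture event simultaneously selects what the agent says and which touchless haptic pattern the user feels, so the two output modalities stay synchronised per interaction step.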