E. Champion, Li Qiang, Demetrius Lacet, A. Dekker
Eurographics Workshop on Graphics and Cultural Heritage, 5 October 2016. DOI: 10.2312/gch.20161394
3D in-world Telepresence With Camera-Tracked Gestural Interaction
While many educational institutions use Skype, Google Chat or other commercial video-conferencing applications, these applications are not suitable for presenting architectural, urban-design or archaeological information, as they do not integrate the presenter with interactive 3D media. Nor do they allow spatial or component-based interaction controlled by the presenter in a natural and intuitive manner, without needing to sit or stoop over a mouse or keyboard. A third feature that would be very useful is to mirror the presenter's gestures and actions, so that the presenter does not have to face both the audience and the screen at once.
To meet these demands we developed a prototype camera-tracking application for teleconferencing, using a Kinect camera sensor and multi-camera Unity windows, that presents interactive 3D content alongside the speaker (or an avatar that mirrors the speaker's gestures). Cheaply available commercial software and hardware, coupled with a large display screen (in this case an 8-meter-wide curved screen), allow participants' gestures, movements and group behavior to be fed into the virtual environment either directly or indirectly. Allowing speakers to present 3D virtual worlds to remotely located audiences, while appearing to be inside those worlds, has immediate practical uses for teaching and long-distance collaboration.
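The core of the gesture-mirroring idea can be sketched in a few lines. The following is a minimal, hypothetical illustration in Python, not the authors' Unity/Kinect code: it assumes tracked skeleton joints arrive as named 3D positions (joint names here follow the Kinect left/right naming convention) and shows the mirror transform itself, an x-axis flip combined with a left/right label swap, so the avatar facing the presenter copies their gestures like a mirror image.

```python
# Hypothetical sketch of the mirroring step only; the real system would
# stream joints from the Kinect sensor and drive a Unity avatar.

def mirror_joints(joints):
    """Mirror a dict of joint name -> (x, y, z) across the vertical plane.

    Left/right labels are swapped so that when the presenter raises a
    right hand, the avatar facing them raises its left, as in a mirror.
    """
    mirrored = {}
    for name, (x, y, z) in joints.items():
        if name.startswith("Left"):
            name = "Right" + name[len("Left"):]
        elif name.startswith("Right"):
            name = "Left" + name[len("Right"):]
        mirrored[name] = (-x, y, z)  # flip across the x axis
    return mirrored

# Example frame: presenter's right hand raised out to the side.
frame = {"RightHand": (0.4, 1.2, 2.0), "LeftHand": (-0.3, 0.8, 2.0)}
print(mirror_joints(frame))
```

Applied per tracked frame, this kind of transform lets the avatar face the audience while the presenter keeps facing the screen, which is the third feature described above.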