{"title":"增强现实辅助视频会议的视觉调节","authors":"O. Guleryuz, T. Kalker","doi":"10.1109/MMSP.2012.6343418","DOIUrl":null,"url":null,"abstract":"Typical video conferencing scenarios bring together individuals from disparate environments. Unless one commits to expensive tele-presence rooms, conferences involving many individuals result in a cacophony of visuals and backgrounds. Ideally one would like to separate participant visuals from their respective environments and render them over visually pleasing backgrounds that enhance immersion for all. Yet available image/video segmentation techniques are limited and result in significant artifacts even with recently popular commodity depth sensors. In this paper we present a technique that accomplishes robust and visually pleasing rendering of segmented participants over adaptively-designed virtual backgrounds. Our method works by determining virtual backgrounds that match and highlight participant visuals and uses directional textures to hide segmentation artifacts due to noisy segmentation boundaries, missing regions, etc. Taking advantage of simple computations and look-up-tables, our work leads to fast, real-time implementations that can run on mobile and other computationally-limited platforms.","PeriodicalId":325274,"journal":{"name":"2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Visual conditioning for augmented-reality-assisted video conferencing\",\"authors\":\"O. Guleryuz, T. Kalker\",\"doi\":\"10.1109/MMSP.2012.6343418\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Typical video conferencing scenarios bring together individuals from disparate environments. Unless one commits to expensive tele-presence rooms, conferences involving many individuals result in a cacophony of visuals and backgrounds. Ideally one would like to separate participant visuals from their respective environments and render them over visually pleasing backgrounds that enhance immersion for all. Yet available image/video segmentation techniques are limited and result in significant artifacts even with recently popular commodity depth sensors. In this paper we present a technique that accomplishes robust and visually pleasing rendering of segmented participants over adaptively-designed virtual backgrounds. Our method works by determining virtual backgrounds that match and highlight participant visuals and uses directional textures to hide segmentation artifacts due to noisy segmentation boundaries, missing regions, etc. 
Taking advantage of simple computations and look-up-tables, our work leads to fast, real-time implementations that can run on mobile and other computationally-limited platforms.\",\"PeriodicalId\":325274,\"journal\":{\"name\":\"2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP)\",\"volume\":\"33 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-12-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MMSP.2012.6343418\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MMSP.2012.6343418","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Visual conditioning for augmented-reality-assisted video conferencing
Typical video conferencing scenarios bring together individuals from disparate environments. Unless one commits to expensive telepresence rooms, conferences involving many participants result in a cacophony of visuals and backgrounds. Ideally, one would like to separate participants from their respective environments and render them over visually pleasing backgrounds that enhance immersion for all. Yet available image/video segmentation techniques are limited and produce significant artifacts, even with recently popular commodity depth sensors. In this paper, we present a technique that achieves robust and visually pleasing rendering of segmented participants over adaptively designed virtual backgrounds. Our method determines virtual backgrounds that match and highlight participant visuals, and it uses directional textures to hide segmentation artifacts caused by noisy segmentation boundaries, missing regions, and the like. By taking advantage of simple computations and look-up tables, our work leads to fast, real-time implementations that run on mobile and other computationally limited platforms.
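The boundary-hiding idea described in the abstract can be illustrated with a short sketch. The Python/NumPy code below is a minimal illustration under stated assumptions, not the authors' implementation: it feathers a noisy binary segmentation mask across a narrow band around the boundary and overlays an oriented stripe texture in that band, so that jagged edges or missing mask regions read as deliberate styling rather than errors. The function names (directional_texture, composite), the band width, and the sinusoidal texture model are all hypothetical choices for illustration; the paper's adaptive background design and look-up-table acceleration are not reproduced here.

```python
# Minimal sketch (not the authors' method) of compositing a segmented
# participant over a virtual background, hiding the noisy segmentation
# boundary with a directional texture.
import numpy as np
from scipy import ndimage


def directional_texture(shape, angle_deg=45.0, period=8):
    """Hypothetical oriented stripe texture used to camouflage boundary noise."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    theta = np.deg2rad(angle_deg)
    # Phase increases along the chosen direction, producing parallel stripes.
    phase = xx * np.cos(theta) + yy * np.sin(theta)
    stripes = 0.5 + 0.5 * np.sin(2.0 * np.pi * phase / period)
    return stripes[..., None]  # shape (H, W, 1), broadcasts over RGB


def composite(frame, mask, background, band=6):
    """Blend foreground over background; texture over the noisy boundary.

    frame, background: float images in [0, 1], shape (H, W, 3)
    mask: binary foreground mask, shape (H, W)
    band: half-width in pixels of the boundary region to texture over
    """
    # Signed distance to the segmentation boundary, from both sides.
    dist_out = ndimage.distance_transform_edt(mask == 0)  # distance to foreground
    dist_in = ndimage.distance_transform_edt(mask == 1)   # distance to background
    boundary = np.minimum(dist_in, dist_out) < band

    # Soft alpha: 1 deep inside, 0 deep outside, feathered across the band.
    alpha = np.clip((dist_in - dist_out) / (2.0 * band) + 0.5, 0.0, 1.0)[..., None]
    out = alpha * frame + (1.0 - alpha) * background

    # Overlay the directional texture only within the boundary band.
    tex = directional_texture(mask.shape)
    w = 0.6 * boundary[..., None]  # texture strength inside the band
    return (1.0 - w) * out + w * (tex * background)
```

The design intuition, as the abstract suggests, is that a structured, directional pattern along the boundary draws the eye away from segmentation errors: the viewer attributes the irregular edge to the texture rather than to a failed matte. Per-pixel operations like these (distance transforms, fixed blends) are also the kind of computation that maps naturally onto the look-up-table implementations the paper targets for mobile platforms.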