A Compensation Method of Two-Stage Image Generation for Human-AI Collaborated In-Situ Fashion Design in Augmented Reality Environment

Zhenjie Zhao, Xiaojuan Ma
{"title":"增强现实环境下人工智能协同现场服装设计的两阶段图像生成补偿方法","authors":"Zhenjie Zhao, Xiaojuan Ma","doi":"10.1109/AIVR.2018.00018","DOIUrl":null,"url":null,"abstract":"In this paper, we consider a human-AI collaboration task, fashion design, in augmented reality environment. In particular, we propose a compensation method of two-stage image generation neural network for generating fashion design with progressive users' inputs. Our work is based on a recent proposed deep learning model, pix2pix, that can successfully transform an image from one domain into another domain, such as from line drawings to color images. However, the pix2pix model relies on the condition that input images should come from the same distribution, which is usually hard for applying it to real humancomputer interaction tasks, where the input from users differs from individual to individual. To address the problem, we propose a compensation method of two-stage image generation. In the first stage, we ask users to indicate their design preference with an easy task, such as tuning clothing landmarks, and use the input to generate a compensation input. With the compensation input, in the second stage, we then concatenate it with the real sketch from users to generate a perceptual better result. In addition, to deploy the two-stage image generation neural network in augmented reality environment, we designed and implemented a mobile application where users can create fashion design referring to real world human models. With the augmented 2D screen and instant feedback from our system, users can design clothing by seamlessly mixing the real and virtual environment. Through an online experiment with 46 participants and an offline use case study, we showcase the capability and usability of our system. Finally, we discuss the limitations of our system and further works on human-AI collaborated design.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"A Compensation Method of Two-Stage Image Generation for Human-AI Collaborated In-Situ Fashion Design in Augmented Reality Environment\",\"authors\":\"Zhenjie Zhao, Xiaojuan Ma\",\"doi\":\"10.1109/AIVR.2018.00018\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we consider a human-AI collaboration task, fashion design, in augmented reality environment. In particular, we propose a compensation method of two-stage image generation neural network for generating fashion design with progressive users' inputs. Our work is based on a recent proposed deep learning model, pix2pix, that can successfully transform an image from one domain into another domain, such as from line drawings to color images. However, the pix2pix model relies on the condition that input images should come from the same distribution, which is usually hard for applying it to real humancomputer interaction tasks, where the input from users differs from individual to individual. To address the problem, we propose a compensation method of two-stage image generation. In the first stage, we ask users to indicate their design preference with an easy task, such as tuning clothing landmarks, and use the input to generate a compensation input. 
With the compensation input, in the second stage, we then concatenate it with the real sketch from users to generate a perceptual better result. In addition, to deploy the two-stage image generation neural network in augmented reality environment, we designed and implemented a mobile application where users can create fashion design referring to real world human models. With the augmented 2D screen and instant feedback from our system, users can design clothing by seamlessly mixing the real and virtual environment. Through an online experiment with 46 participants and an offline use case study, we showcase the capability and usability of our system. Finally, we discuss the limitations of our system and further works on human-AI collaborated design.\",\"PeriodicalId\":371868,\"journal\":{\"name\":\"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)\",\"volume\":\"34 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIVR.2018.00018\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIVR.2018.00018","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

In this paper, we consider a human-AI collaboration task, fashion design, in an augmented reality environment. In particular, we propose a compensation method based on a two-stage image generation neural network for producing fashion designs from progressive user inputs. Our work builds on a recently proposed deep learning model, pix2pix, which can transform an image from one domain into another, for example from line drawings to color images. However, pix2pix relies on the assumption that input images come from the same distribution, which makes it hard to apply to real human-computer interaction tasks, where user input varies from individual to individual. To address this problem, we propose a two-stage image generation compensation method. In the first stage, we ask users to indicate their design preference through an easy task, such as tuning clothing landmarks, and use this input to generate a compensation input. In the second stage, we concatenate the compensation input with the user's real sketch to generate a perceptually better result. In addition, to deploy the two-stage image generation network in an augmented reality environment, we designed and implemented a mobile application in which users can create fashion designs with reference to real-world human models. With the augmented 2D screen and instant feedback from our system, users can design clothing by seamlessly mixing the real and virtual environments. Through an online experiment with 46 participants and an offline use case study, we demonstrate the capability and usability of our system. Finally, we discuss the limitations of our system and future work on human-AI collaborative design.
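To make the two-stage idea concrete, below is a minimal sketch (not the authors' code) of how such a pipeline could be wired up in PyTorch: a first generator maps a clothing-landmark map to a compensation image, and a second generator takes the compensation image concatenated channel-wise with the user's sketch and produces the final design. The tiny encoder-decoder stands in for the pix2pix U-Net generator; all module names, channel counts, and image sizes are illustrative assumptions.

```python
# Illustrative sketch of a two-stage compensation pipeline (assumed architecture,
# not the paper's released implementation).
import torch
import torch.nn as nn


class TinyGenerator(nn.Module):
    """Toy encoder-decoder standing in for a pix2pix-style U-Net generator."""

    def __init__(self, in_channels: int, out_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class TwoStageCompensation(nn.Module):
    """Stage 1: landmark map -> compensation image.
    Stage 2: [compensation image, user sketch] -> final design image."""

    def __init__(self):
        super().__init__()
        self.stage1 = TinyGenerator(in_channels=1)       # user-tuned landmark map
        self.stage2 = TinyGenerator(in_channels=3 + 1)   # compensation + sketch

    def forward(self, landmarks: torch.Tensor, sketch: torch.Tensor) -> torch.Tensor:
        compensation = self.stage1(landmarks)             # easy, structured input
        fused = torch.cat([compensation, sketch], dim=1)  # channel-wise concatenation
        return self.stage2(fused)


if __name__ == "__main__":
    model = TwoStageCompensation()
    landmarks = torch.randn(1, 1, 256, 256)  # e.g. rendered clothing landmarks
    sketch = torch.randn(1, 1, 256, 256)     # free-hand user sketch
    design = model(landmarks, sketch)
    print(design.shape)                      # torch.Size([1, 3, 256, 256])
```

The key design point this sketch illustrates is that the second stage never sees the raw user sketch alone: it always receives the landmark-derived compensation image alongside it, which pulls out-of-distribution user input back toward the distribution the generator was trained on.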