{"title":"Camouflage via Coevolution of Predator and Prey","authors":"C. Reynolds","doi":"10.1145/3588028.3603663","DOIUrl":"https://doi.org/10.1145/3588028.3603663","url":null,"abstract":"","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114984740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Metro Re-illustrated: Incremental Generation of Stylized Paintings Using Neural Networks","authors":"Bo Shui, Chufan Shi, Xiaomei Nie","doi":"10.1145/3588028.3603658","DOIUrl":"https://doi.org/10.1145/3588028.3603658","url":null,"abstract":"Metro Re-illustrated is a project that explores incremental generation of stylized paintings of city metro maps using neural networks. It begins with an interactive system for labeling time-series data on city metro maps and generating reference images. These images are fed into a neural painter that incrementally generates oil painting-like strokes on virtual canvases. The generated paintings demonstrate blending and layering features of oil paintings while capturing the progressive nature of urban development.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123512125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Human-like Locomotion Based on Biological Actuation and Rewards","authors":"Minkwan Kim, Yoonsang Lee","doi":"10.1145/3588028.3603646","DOIUrl":"https://doi.org/10.1145/3588028.3603646","url":null,"abstract":"We propose a method of learning a policy for human-like locomotion via deep reinforcement learning based on a human anatomical model, muscle actuation, and biologically inspired rewards, without any inherent control rules or reference motions. Our main ideas involve providing a dense reward using metabolic energy consumption at every step during the initial stages of learning and then transitioning to a sparse reward as learning progresses, and adjusting the initial posture of the human model to facilitate the exploration of locomotion. Additionally, we compared and analyzed differences in learning outcomes across various settings other than the proposed method.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128937758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Multiple-Display Interaction for Live XR Performance","authors":"Ke-Fan Lin, Yu-Chih Chou, Yuanyuan Weng, Yvone Tsai Chen, Tse-Yu Pan, Ping-Hsuan Han","doi":"10.1145/3588028.3603661","DOIUrl":"https://doi.org/10.1145/3588028.3603661","url":null,"abstract":"Although VR concerts offer audiences a unique way to watch a performance, people still tend to attend live performances in person, because the co-presence and shared experience are difficult to perceive in VR. To address this issue, we propose Actualities, a live XR performance system that integrates onsite and online concerts to create a seamless experience across multiple displays. Our system uses various sensors to detect signals from musical instruments and onsite audiences, digitalizing onsite performance elements into a virtual world. We project the visuals onto screens and live-stream the content for audiences to watch on various devices, and we also designed several interactive elements that let the audience interact with the public display. To evaluate and refine our system, we conducted exploratory research aimed at improving the cross-reality experience.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124210482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Updating Human Pose Estimation using Event-based Camera to Improve Its Accuracy","authors":"Ippei Otake, Kazuya Kitano, Takahiro Kushida, Hiroyuki Kubo, Akinobu Maejima, Yuki Fujimura, Takuya Funatomi, Y. Mukaigawa","doi":"10.1145/3588028.3603683","DOIUrl":"https://doi.org/10.1145/3588028.3603683","url":null,"abstract":"","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128319453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mixed Reality Visualization of Room Impulse Response Map using Room Geometry and Physical Model of Sound Propagation","authors":"Ayame Uchida, Izumi Tsunokuni, Y. Ikeda, Yasuhiro Oikawa","doi":"10.1145/3588028.3603693","DOIUrl":"https://doi.org/10.1145/3588028.3603693","url":null,"abstract":"In this paper, an MR visualization method based on sound field modeling is proposed. Using a small quantity of measurement data, the sound field was modeled using equivalent sources and room shapes acquired via SLAM. From the modeled sound field, the estimated room impulse responses at the target grid points were then animated to visualize the sound field using MR technology. Consequently, the animation of the sound field in MR clearly represented how sound propagates, including reflections.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129549833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An investigation of changes in taste perception by varying polygon resolution of foods in virtual environments","authors":"Taiyo Taguchi, Yurie Watanabe, Tomokazu Ishikawa","doi":"10.1145/3588028.3603689","DOIUrl":"https://doi.org/10.1145/3588028.3603689","url":null,"abstract":"In recent years, the metaverse has received considerable attention. We believe that as this technology develops, humanity will be able to dine in a virtual space while maintaining a sense of immersion. We therefore investigated whether the taste of food is influenced by the polygon count of CG models using VR/AR technology. We created CG models and overlaid the images onto the actual food via an HMD. The subjects then consumed the food with the CG image overlaid and answered a questionnaire. Results showed that the higher the polygon count, the less hardness was perceived, and that the toon-like model was more likely to affect the taste.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133611573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fabrication of Edible lenticular lens","authors":"Takegi Yoshimoto, Nobuhito Kasahara, Homei Miyashita","doi":"10.1145/3588028.3603675","DOIUrl":"https://doi.org/10.1145/3588028.3603675","url":null,"abstract":"Lenticular lenses exhibit a color-changing effect that depends on the viewing angle, and a vanishing effect in certain directions. In this study, we propose two fabrication methods for edible lenticular lenses: a mold-forming method, and a knife-cutting method using a knife with the inverse structure of a lenticular lens, created with an SLA 3D printer. We also evaluate the properties of the end products. The IOR of the material is optimized using ray-tracing simulation.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117293114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to Simulate Crowds with Crowds","authors":"Bilas Talukdar, Yunhao Zhang, Tomer Weiss","doi":"10.1145/3588028.3603670","DOIUrl":"https://doi.org/10.1145/3588028.3603670","url":null,"abstract":"Controlling agent behaviors with Reinforcement Learning is of continuing interest in multiple areas. One major focus is to simulate multi-agent crowds that avoid collisions while locomoting to their goals. Although avoiding collisions is important, it is also necessary to capture realistic anticipatory navigation behaviors. We introduce a novel methodology that includes: 1) an RL method for learning an optimal navigational policy, 2) position-based constraints for correcting policy navigational decisions, and 3) a crowd-sourcing framework for selecting policy control parameters. Based on optimally selected parameters, we train a multi-agent navigation policy, which we demonstrate on crowd benchmarks. We compare our method to existing works, and demonstrate that our approach achieves superior multi-agent behaviors.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114947534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}