{"title":"Large-scale fabrication with interior zometool structure","authors":"Ming-Shiuan Chen, I-Chao Shen, Chun-Kai Huang, Bing-Yu Chen","doi":"10.1145/3230744.3230780","DOIUrl":"https://doi.org/10.1145/3230744.3230780","url":null,"abstract":"In recent years, personalized fabrication has attracted many attentions due to the widespread of consumer-level 3D printers. However, consumer 3D printers still suffer from shortcomings such as long production time and limited output size, which are undesirable factors to large-scale rapid-prototyping. We propose a hybrid 3D fabrication method that combines 3D printing and Zometool structure for both time/cost-effective fabrication of large objects. The key of our approach is to utilize compact, sturdy and re-usable internal structure (Zometool) to infill fabrications and replace both time and material-consuming 3D-printed materials. Unlike the laser-cutted shape used in [Song et al. 2016], we are able to reuse the inner structure. As a result, we can significantly reduce the cost and time by printing thin 3D external shells only.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128100650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual microscope for massive genomics datasets, expanded perception and interaction","authors":"D. Branchaud, Walter Muskovic, M. Kavallaris, Daniel Filonik, T. Bednarz","doi":"10.1145/3230744.3230745","DOIUrl":"https://doi.org/10.1145/3230744.3230745","url":null,"abstract":"An innovative fully interactive and ultra-high resolution navigation tool has been developed to browse and analyze gene expression levels from human cancer cells, acting as a visual microscope on data. The tool uses high-performance visualization and computer graphics technology to enable genome scientists to observe the evolution of regulatory elements across time and gain valuable insights from their dataset as never before.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126043552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Withering fruits: vegetable matter decay and fungus growth","authors":"B. Cirdei, E. Anderson","doi":"10.1145/3230744.3230783","DOIUrl":"https://doi.org/10.1145/3230744.3230783","url":null,"abstract":"We propose a parametrised method for recreating drying and decaying vegetable matter from the fruits category, taking into account the biological characteristics of the decaying fruit. The simulation addresses three main phenomena: mould propagation, volume shrinking and fungus growth on the fruit's surface. The spread of decay is achieved using a Reaction-Diffusion method, a Finite Element Method is used for shrinking and wrinkling of the fruit shell, while the spread of the fruit's fungal infection is described by a Diffusion Limited Aggregation algorithm. We extend existing fruit decay approaches, improving the shrinking behaviour of decaying fruits and adding independent fungal growth. Our approach integrates a user interface for artist directability and fine control of the simulation parameters.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121660765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Make-a-face: a hands-free, non-intrusive device for tongue/mouth/cheek input using EMG","authors":"Takuro Nakao, Yun Suen Pai, M. Isogai, H. Kimata, K. Kunze","doi":"10.1145/3230744.3230784","DOIUrl":"https://doi.org/10.1145/3230744.3230784","url":null,"abstract":"Current devices aim to be more hands-free by providing users with the means to interact with them using other forms of input, such as voice which can be intrusive. We propose Make-a-Face; a wearable device that allows the user to use tongue, mouth, or cheek gestures via a mask-shaped device that senses muscle movement on the lower half of the face. The significance of this approach is threefold: 1) It allows a more non-intrusive approach to interaction, 2) we designed both the hardware and software from the ground-up to accommodate the sensor electrodes and 3) we proposed several use-case scenarios ranging from smartphones to interactions with virtual reality (VR) content.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121109381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a stochastic depth maps estimation for textureless and quite specular surfaces","authors":"Abdelhak Saouli, M. C. Babahenini","doi":"10.1145/3230744.3230762","DOIUrl":"https://doi.org/10.1145/3230744.3230762","url":null,"abstract":"The human brain is constantly solving enormous and challenging optimization problems in vision. Due to the formidable meta-heuristics engine our brain equipped with, in addition to the widespread associative inputs from all other senses that act as the perfect initial guesses for a heuristic algorithm, the produced solutions are guaranteed to be optimal. By the same token, we address the problem of computing the depth and normal maps of a given scene under a natural but unknown illumination utilizing particle swarm optimization (PSO) to maximize a sophisticated photo-consistency function. For each output pixel, the swarm is initialized with good guesses starting with SIFT features as well as the optimal solution (depth, normal) found previously during the optimization. This leads to significantly better accuracy and robustness to textureless or quite specular surfaces.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132463538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Make your own retinal projector: retinal near-eye displays via metamaterials","authors":"Yoichi Ochiai, Kazuki Otao, Yuta Itoh, Shouki Imai, Kazuki Takazawa, Hiroyuki Osone, Atsushi Mori, Ippei Suzuki","doi":"10.1145/3230744.3230810","DOIUrl":"https://doi.org/10.1145/3230744.3230810","url":null,"abstract":"Retinal projection is required for xR applications that can deliver immersive visual experience throughout the day. If general-purpose retinal projection methods can be realized at a low cost, not only could the image be displayed on the retina using less energy, but there is also a possibility of cutting off the weight of projection unit itself from the AR goggles. Several retinal projection methods have been previously proposed. Maxwellian optics based retinal projection was proposed in 1990s [Kollin 1993]. Laser scanning [Liao and Tsai 2009], laser projection using spatial light modulator (SLM) or holographic optical elements were also explored [Jang et al. 2017]. In the commercial field, QD Laser1 with a viewing angle of 26 degrees is available. However, as the lenses and iris of an eyeball are in front of the retina, which is a limitation of a human eyeball, the proposal of retinal projection is generally fraught with narrow viewing angles and small eyebox problems. 
Due to these problems, retinal projection displays are still a rare commodity because of their difficulty in optical schematics design.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134267985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D facial geometry analysis and estimation using embedded optical sensors on smart eyewear","authors":"Nao Asano, Katsutoshi Masai, Yuta Sugiura, M. Sugimoto","doi":"10.1145/3230744.3230812","DOIUrl":"https://doi.org/10.1145/3230744.3230812","url":null,"abstract":"Facial performance capture is used for animation production that projects a performer's facial expression to a computer graphics model. Retro-reflective markers and cameras are widely used for the performance capture. To capture expressions, we need to place markers on the performer's face and calibrate the intrinsic and extrinsic parameters of cameras in advance. However, the measurable space is limited to the calibrated area. In this study, we propose a system to capture facial performance using a smart eyewear with photo-reflective sensors and machine learning technique. Also, we show a result of principal components analysis of facial geometry to determine a good estimation parameter set.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132003376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Solar projector","authors":"Kenta Yamamoto, Kotaro Omomo, Kazuki Takazawa, Yoichi Ochiai","doi":"10.1145/3230744.3230767","DOIUrl":"https://doi.org/10.1145/3230744.3230767","url":null,"abstract":"The sun is the most universal, powerful and familiar energy available on the planet. Every organism and plant has evolved over the years, corresponding to the energy brought by the sun. Humanity is no exception. We have invented many artificial lights since Edison invented light bulbs. In recent years, LEDs are one of the most representative examples. Displays and projectors using LEDs are still being actively developed. However, it is difficult to reproduce ideal light with high brightness and wide wavelength like sunlight. Furthermore, considering low energy sustainability and environmental contamination in the manufacturing process, artificial light can not surpass the sunlight. Against this backdrop, projects that utilize sunlight have been actively carried out in the world. Concentrating Solar Power (CSP) generate electricity using the heat of sunlight to turn turbines [Müller-Steinhagen and Trieb 2004]. [Koizumi 2017] is an aerial image presentation system using the sun as a light source. Digital sundials use the shadow of sunlight to inform digital time [Scharstein et al. 1996]. These projects attempt to use the direct sunlight without any conversion and minimize the energy loss.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132300796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gesture recognition using leap motion: a comparison between machine learning algorithms","authors":"Ivo Aluízio Stinghen Filho, Estevam Nicolas Chen, J. Junior, Ricardo da Silva Barboza","doi":"10.1145/3230744.3230750","DOIUrl":"https://doi.org/10.1145/3230744.3230750","url":null,"abstract":"In this paper we compare the effectiveness of various methods of machine learning algorithms for real-time hand gesture recognition, in order to find the most optimal way to identify static hand gestures, as well as the most optimal sample size for use during the training step of the algorithms. In our framework, Leap Motion and Unity were used to extract the data. The data was then used to be trained using Python and scikit-learn. Utilizing normalized information regarding the hands and fingers, we managed to get a hit rate of 97% using the decision tree classifier.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134234421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D content creation exploiting 2D character animation","authors":"Simone Barbieri, Tao Jiang, B. Cawthorne, Zhidong Xiao, Xiaosong Yang","doi":"10.1145/3230744.3230769","DOIUrl":"https://doi.org/10.1145/3230744.3230769","url":null,"abstract":"While 3D animation is constantly increasing its popularity, 2D is still largely in use in animation production. In fact, 2D has two main advantages. The first one is economic, as it is more rapid to produce, having a dimension less to consider. The second one is important for the artists, as 2D characters usually have highly distinctive traits, which are lost in a 3D transposition. An iconic example is Mickey Mouse, whom ears appear circular no matter which way he is facing.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"26 1-4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127461728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}