{"title":"G-spacing: a gyro sensor based relative 3D space positioning scheme","authors":"Kaiqiang Liu, I-Peng Lin, Shih-Wei Sun, Wen-Huang Cheng, Xiaoniu Su-Chu Hsu","doi":"10.1145/2787626.2792631","DOIUrl":"https://doi.org/10.1145/2787626.2792631","url":null,"abstract":"Interaction with virtual objects among different devices attracts lots of attention recently. LuminAR [Linder and Maes] was developed for a portable and compact projector-camera system for interactive displaying. THAW [Leigh et al.] was proposed to use a back-facing camera of a smartphone to assist the interactive displaying. RealSense [Lin et al.] was adopted the built-in compass sensor on a mobile device to calibrate the relative position among different mobile devices. However, the complex calibration process of LuminAR [Linder and Maes] and THAW [Leigh et al.] limited the applications. On the other hand, as addressed by the authors, once the users with mobile devices using RealSense [Lin et al.] move larger than 15°, the positioning relationship cannot be kept stable. Therefore, in this paper, a 3D positioning scheme is proposed based on the built-in gyro sensor on a mobile device for effective and intuitive calibration and allow users to freely move the mobile devices with a natural user experience.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132934829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving depth perception during surgical augmented reality","authors":"Bruno Marques, Nazim Haouchine, Rosalie Plantefève, S. Cotin","doi":"10.1145/2787626.2792654","DOIUrl":"https://doi.org/10.1145/2787626.2792654","url":null,"abstract":"Minimally invasive surgery (MIS) is a recent surgical technique where the surgeon does not interact directly with the patient's organs. In contrast to open surgery, the surgeon manipulates the organs through instruments inserted in the patient's abdominal cavity while observing the organ from a display showing the video stream captured by an endoscopic camera. While the benefits of MIS for patients are clearly claimed, performing these operations remains very challenging for the surgeons, due to the loss of depth perception caused by this indirect manipulation. To tackle this limitation, the research community suggests to use augmented reality (AR) during the procedure [Haouchine et al. 2013]. The objective towards the use of AR during surgery is to be able to overlay the 3D model of the organ (that can be obtained from a pre-operative scan of the patient) onto the video stream. Surgical AR made considerable advances and reached a certain maturity in the estimation of tumors and vessels localisation. Howerver, very few studies have investigated depth perception and visualization of internal structures [Lerotic et al. 2007], which is considered by surgeons as a very sensitive issue. This study suggests a method to compensate the loss of depth perception while enhancing organ vessels and tumors to surgeons. This method relies on a combination of contour rendering technique and adaptive alpha blending to effectively perceive the vessels and tumors depth. In addition, this technique is designed to achieve real-time to satisfy the requirements of clinical routines, and has been tested on real human surgery.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134520439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CHILDHOOD: wearable suit for augmented child experience","authors":"Jun Nishida, Hikaru Takatori, Kosuke Sato, Kenji Suzuki","doi":"10.1145/2787626.2792656","DOIUrl":"https://doi.org/10.1145/2787626.2792656","url":null,"abstract":"Understanding and perceiving the world from a child's perspective is a very important key not only to design products and architecture, but also to remind staff who work closely with children, such as hospitals and kindergartens. Ida et al. investigated the universality of devices and architecture in public spaces by recording videos through a hand-held camera positioned at a child's eye level [Ida et al. 2010]. In this study, we propose a novel wearable suit called CHILDHOOD that virtually realizes a child's eye and hand movements by attaching a viewpoint translator and hand exoskeletons (Figure 1a). We hypothesized that virtualizing a child's body size by transforming our own body while preserving embodied interactions with actual surroundings would provide an augmented experience of a child's perspective. This could assist designers in evaluating product accessibility through their own body interactions in real time. In addition, augmented child experience can help staff and parents remember how children feel and touch the world.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120817208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Form-finding with polyhedral meshes made simple","authors":"Chengcheng Tang, Xiang Sun, A. Gomes, J. Wallner, H. Pottmann","doi":"10.1145/2787626.2787631","DOIUrl":"https://doi.org/10.1145/2787626.2787631","url":null,"abstract":"We solve the form-finding problem for polyhedral meshes in a way which combines form, function and fabrication; taking care of user-specified constraints like boundary interpolation, planarity of faces, statics, panel size and shape, enclosed volume, and cost. Our main application is the interactive modeling of meshes for architectural and industrial design. Our approach can be described as guided exploration of the constraint space whose algebraic structure is simplified by introducing auxiliary variables and ensuring that constraints are at most quadratic.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122921880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RiSE: reflectance transformation imaging in spatial augmented reality for exhibition of cultural heritage","authors":"Yong Yi Lee, Junho Choi, Yong Hwi Kim, Jong Hun Lee, M. Son, Bilal Ahmed, Kwan H. Lee","doi":"10.1145/2787626.2792626","DOIUrl":"https://doi.org/10.1145/2787626.2792626","url":null,"abstract":"Traditional museums have shown interest in exhibiting a meaningful representation of cultural heritage. However, existing stereotypical exhibition fails to attract the visitors' interest continuously as it provides only static and non-interactive contents and transmits information unilaterally. Recently, high performance measurement techniques have rapidly developed to a degree that allows for the realistic digitization of cultural heritage. Based on this digitized cultural heritage, dynamic and interactive content, such as 3D video and augmented reality, have been made to improve the immersion of visitors. In spite of these attempts, the sense of artificiality is still a challenge because most existing methods demonstrate their content via screen displays.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128401950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimization of natural frequencies for fabrication-aware shape modeling","authors":"Christian Hafner, Przemyslaw Musialski, T. Auzinger, M. Wimmer, L. Kobbelt","doi":"10.1145/2787626.2787644","DOIUrl":"https://doi.org/10.1145/2787626.2787644","url":null,"abstract":"Keyboard percussion instruments such as xylophones and glockenspiels are composed of an arrangement of bars. These are varied in some of their geometrical properties---typically the length---in order to influence their acoustic behavior. Most instruments in this family do not deviate from simple geometrical shapes, since designing the natural frequency spectrum of complex shapes usually involves a pain-staking trial-and-error process and has been reserved to gifted artisans or professional manufacturers.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123488648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualizing valley wind flow","authors":"K. Schmal, Christoph K. Thomas, J. Cushing, G. Orr","doi":"10.1145/2787626.2792647","DOIUrl":"https://doi.org/10.1145/2787626.2792647","url":null,"abstract":"The field of micrometeorology is primarily concerned with smaller-scale meteorological phenomena, specifically those which occur within the lowest atmospheric layer called the Atmospheric Boundary Layer (ABL). The primary defining characteristic of the ABL is that wind dynamics within this layer are influenced by the Earth's topography, as well as time-dependent temperature changes in the Earth's surface. In forests and connected valleys, weak-wind flows transport moisture, heat, gases and potential contaminants, directly impacting adjacent ecosystems [Thomas et al. 2012].","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126316702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UnAMT: unsupervised adaptive matting tool for large-scale object collections","authors":"Jaehwan Kim, Jongyoul Park, Kyoung Park","doi":"10.1145/2787626.2792644","DOIUrl":"https://doi.org/10.1145/2787626.2792644","url":null,"abstract":"Unsupervised matting, whose goal is to extract interesting foreground components from arbitrary and natural background regions without any additional information of the contents of the corresponding scenes, plays an important role in many computer vision and graphics applications. Especially, the precisely extracted object images from the matting process can be useful for automatic generation of large-scale annotated training sets with more accuracy, as well as for improving the performance of a variety of applications including content-based image retrieval. However, unsupervised matting problem is intrinsically ill-posed so that it is hard to generate a perfect segmented object matte from a given image without any prior knowledge. This additional information is usually fed by means of a trimap which is a rough pre-segmented image consisting of three subregions of foreground, background and unknown. When such matting process is applied to object collections in a large-scale image set, the requirement for manually specifying every trimap for each of independent input images can be a serious drawback definitely. Recently, automatic detection of salient object regions in images has been widely researched in computer vision tasks including image segmentation, object recognition and so on. Although there are many different types of proposal measures in methodology under the common perceptual assumption of a salient region standing out its surrounding neighbors and capturing the attention of a human observer, most final saliency maps having lots of noises are not sufficient to take advantage of the consequent computational processes of highly accurate low-level representation of images.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126509357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Musasabi: 2D/3D intuitive and detailed visualization system for the forest","authors":"Takuya Kato, A. Kato, Naomi Okamura, Taro Kanai, Ryo Suzuki, Yuko Shirai","doi":"10.1145/2787626.2792621","DOIUrl":"https://doi.org/10.1145/2787626.2792621","url":null,"abstract":"Trees have been a pillar of our lives not just for human but for all the species living in the earth. Despite of its blessings for our lives, the heaps of problems around forestry have not been solved. One of the major problems in this field is that most of the forest are not been sorted into an organized database. Detailed natural data have never been provided even in famous map applications, Google earth for instance, induced from its difficulty. The forest database has been demanded in many regions as it provides beneficial information for both industrial and environmental aspects. It even helps many divisions such as CG animations to simulate not only a tree itself but also the mountain or the forest as a whole depending on given natural conditions.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129424374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Freeform honeycomb structures and lobel frames","authors":"Caigui Jiang, Chengcheng Tang, Jun Wang, J. Wallner, H. Pottmann","doi":"10.1145/2787626.2787661","DOIUrl":"https://doi.org/10.1145/2787626.2787661","url":null,"abstract":"In freeform architecture and fabrication aware design, repetitive geometry is a very important contribution to the reduction of production costs. This poster addresses two closely related geometric rationalizations of freeform surfaces with repetitive elements: freeform honeycomb structures defined as torsion-free structures where the walls of cells meet at 120 degrees, and Lobel frames formed by equilateral triangles. There turns out to be an interesting duality between these two structures, and this poster discusses the geometric relation, computation, modeling as well as applications of them.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115457611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}