{"title":"The neuroscience social network project","authors":"Jordi Puig, A. Perkis, P. Pinel, Á. Cassinelli, M. Ishikawa","doi":"10.1145/2542302.2542327","DOIUrl":"https://doi.org/10.1145/2542302.2542327","url":null,"abstract":"Recent advances in neuroimaging over the last 15 years leaded to an explosion of knowledge in neuroscience and to the emergence of international projects and consortiums. Integration of existing knowledge as well as efficient communication between scientists are now challenging issues into the understanding of such a complex subject [Yarkoni et al., 2010]. Several Internet based tools are now available to provide databases and meta-analysis of published results (Neurosynth, Braimap, NIF, SumsDB, OpenfMRI...). These projects are aimed to provide access to activation maps and/or peak coordinates associated to semantic descriptors (cerebral mechanism, cognitive tasks, experimental stimuli...). However, these interfaces suffer from a lack of interactivity and do not allow real-time exchange of data and knowledge between authors. Moreover, classical modes of scientific communication (articles, meetings, lectures...) do not allow to create an active and updated view of the field for members of a specific community (large scientific structure, international work group...). 
In this view, we propose here to develop an interface designed to provide a direct mapping between neuroscientific knowledge and 3D brain anatomical space.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122310451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AMD \"be invincible\" commercial","authors":"Eszter Bohus","doi":"10.1145/2542398.2542494","DOIUrl":"https://doi.org/10.1145/2542398.2542494","url":null,"abstract":"Unruly hoards of applications battle the forces of AMD for computational dominance.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121191559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lifetime of goodtimes: all new Toyota Corolla","authors":"S. Bradley","doi":"10.1145/2542398.2542483","DOIUrl":"https://doi.org/10.1145/2542398.2542483","url":null,"abstract":"TVC for Toyota's All New 2013 Corolla.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116697421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Free-hand interaction for handheld augmented reality using an RGB-depth camera","authors":"Huidong Bai, Lei Gao, Jihad El-Sana, M. Billinghurst","doi":"10.1145/2543651.2543667","DOIUrl":"https://doi.org/10.1145/2543651.2543667","url":null,"abstract":"In this paper, we present a novel gesture-based interaction method for handheld Augmented Reality (AR) implemented on a tablet with an RGB-Depth camera attached. Compared with conventional device-centric interaction methods like keypad, stylus, or touchscreen input, natural gesture-based interfaces offer a more intuitive experience for AR applications. Combining with depth information, gesture interfaces can extend handheld AR interaction into full 3D space. In our system we retrieve the 3D hand skeleton from color and depth frames, mapping the results to corresponding manipulations of virtual objects in the AR scene. Our method allows users to control virtual objects in 3D space using their bare hands and perform operations such as translation, rotation, and zooming.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121701057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bet she'an","authors":"Annabel Sebag","doi":"10.1145/2542398.2542431","DOIUrl":"https://doi.org/10.1145/2542398.2542431","url":null,"abstract":"A sculptor decides to leave a trace of this dwindling humanity.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123773220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pond of illusion: interacting through mixed reality","authors":"Morten Nobel-Jørgensen, J. B. Nielsen, Anders Boesen Lindbo Larsen, Mikkel Damgaard Olsen, J. Frisvad, J. A. Bærentzen","doi":"10.1145/2542302.2542334","DOIUrl":"https://doi.org/10.1145/2542302.2542334","url":null,"abstract":"Pond of Illusion is a mixed reality installation where a virtual space (the pond) is injected between two real spaces. The users are in either of the real spaces, and they can see each other through windows in the virtual space as illustrated in Figure 1(left). The installation attracts people to a large display in either of the real spaces by allowing them to feed virtual fish swimming in the pond. Figure 1(middle) shows how a Microsoft Kinect mounted on top of the display is used for detecting throw motions, which triggers virtual breadcrumbs to be thrown into the pond for feeding the nearby fish. Of course, the fish may not be available because they are busy eating what people have thrown into the pond from the other side.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127684657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Coarse-grained multiresolution structures for mobile exploration of gigantic surface models","authors":"Marcos Balsa, E. Gobbetti, F. Marton, A. Tinti","doi":"10.1145/2543651.2543669","DOIUrl":"https://doi.org/10.1145/2543651.2543669","url":null,"abstract":"We discuss our experience in creating scalable systems for distributing and rendering gigantic 3D surfaces on web environments and common handheld devices. Our methods are based on compressed streamable coarse-grained multiresolution structures. By combining CPU and GPU compression technology with our multiresolution data representation, we are able to incrementally transfer, locally store and render with unprecedented performance extremely detailed 3D mesh models on WebGL-enabled browsers, as well as on hardware-constrained mobile devices.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133547958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Topics on bible visualization: content, structure, citation","authors":"Hyoyoung Kim, Jin Wan Park","doi":"10.1145/2542256.2542261","DOIUrl":"https://doi.org/10.1145/2542256.2542261","url":null,"abstract":"Text visualization begins with understanding text itself which is material of visual expression. To visualize any text data, sufficient understanding about characteristics of the text first and the expressive approaches can be decided depending on the derived unique characteristics of the text. In this research we aimed to establish theoretical foundation about the approaches for text visualization by diverse examples of text visualization which are derived through the various characteristics of the text. To do this, we chose the 'Bible' text which is well known globally and digital data of it can be accessed easily and thus diverse text visualization examples exist and analyzed the examples of the bible text visualization. We derived the unique characteristics of text-content, structure, quotation- as criteria for analyzing and supported validity of analysis by adopting at least 2--3 examples for each criterion. In the result, we can comprehend that the goals and expressive approaches are decided depending on the unique characteristics of the Bible text. 
We expect to build theoretical method for choosing the materials and approaches by analyzing more diverse examples with various point of views on the basis of this research.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131766380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GPU compute for graphics","authors":"K. Hillesland","doi":"10.1145/2542266.2542275","DOIUrl":"https://doi.org/10.1145/2542266.2542275","url":null,"abstract":"Modern GPUs support more flexible programming models through systems such as DirectCompute, OpenGL compute, OpenCL, and CUDA. Although much has been made of GPGPU programming, this course focuses on the application of compute on GPUs for graphics in particular.\u0000 We will start with a brief overview of the underlying GPU architectures for compute. We will then discuss how the languages are constructed to help take advantage of these architectures and what the differences are. Since the focus is on application to graphics, we will discuss interoperability with graphics APIs and performance implications.\u0000 We will also address issues related to choosing between compute and other programmable graphics stages such as pixel or fragment shaders, as well as how to interact with these other graphics pipeline stages.\u0000 Finally, we will discuss instances where compute has been used specifically for graphics. The attendee will leave the course with a basic understanding of where they can make use of compute to accelerate or extend graphics applications.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"267 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134510561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-scale feature matching between 2D image and 3D model","authors":"Young Yi Lee, M. Park, J. Yoo, Kwan H. Lee","doi":"10.1145/2542302.2542320","DOIUrl":"https://doi.org/10.1145/2542302.2542320","url":null,"abstract":"Although high performance scanners and design tools are rapidly developed, users still suffer from generating desired 3D contents. Furthermore, it requires advanced skills of design tools and considerable times to produce a satisfactory result. Recently, being motivated by these shortcomings of 3D content generation, various approaches have been studied to generate and manipulate a 3D content conveniently. One of them is using 2D input data such as sketches and real photos to handle the 3D templates. Since this approach is intuitive to the users, it can be efficiently used to manipulate existing 3D contents. Moreover, a large number of new 3D contents can be created by using billions of existing 2D images on online.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132401896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}