{"title":"Multi-user VRML environment for teaching VRML: immersive collaborative learning","authors":"Vladimir Geroimenko, Mike Phillips","doi":"10.1109/IV.1999.781534","DOIUrl":"https://doi.org/10.1109/IV.1999.781534","url":null,"abstract":"VRML-based environments can be used very effectively for reaching a variety of online courses. This paper describes the development of an Internet-based collaborative learning environment in which VRML is not only the means but also the subject of teaching. Such a VRML environment is designed to assist and support employees of the 'new media' industries enrolled on short courses run by the Interactive Media Group in the School of Computing, University of Plymouth. This paper focuses on some key issues in the design of the VRML teaching environment and using it for real-time and on-demand course delivery. One of the most interesting issues is the experience of learning and teaching VRML while being within a VRML world. Such an immersive method of learning provides students with unique experiences and significantly increases the efficiency of the learning process.","PeriodicalId":340240,"journal":{"name":"1999 IEEE International Conference on Information Visualization (Cat. No. PR00210)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132528658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards fast volume visualisation on the WWW","authors":"M. Krokos, F. Dong, G. Clapworthy, Jiaoying Shi","doi":"10.1109/IV.1999.781572","DOIUrl":"https://doi.org/10.1109/IV.1999.781572","url":null,"abstract":"The steady growth of the Internet has dramatically changed the way information is shared and modern users expect near real time delivery, high quality images together with in-depth navigation and exploration of 3D models. Multiresolution is a promising approach for fast distributed volume visualisation employing levels-of-detail. We review multiresolution algorithms and visualisation systems on the WWW. Our discussion is based on experience gained in the development of the IAEVA-II project funded by the European Commission. A new method for rapid data classification/rendering of multiresolution volumes based on shear-warp factorisation, is described. We can change classification functions and data resolution during rendering without significant reduction in interactivity. A method for constructing multiresolution transfer functions for determining opacity is also investigated. Finally, future trends in developing WWW visualisation systems are discussed.","PeriodicalId":340240,"journal":{"name":"1999 IEEE International Conference on Information Visualization (Cat. No. PR00210)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133551586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of collaborative supported frame accurate animation for bridge construction project","authors":"Y. Fukuchi, Y. Hirai, I. Kobayashi, Y. Hoshino, Yamamoto Kazuhiro","doi":"10.1109/IV.1999.781554","DOIUrl":"https://doi.org/10.1109/IV.1999.781554","url":null,"abstract":"During a civil engineering construction process many types of professionals are involved, such as engineers, constructors, designers, clients and so forth. To provide mutual understanding among construction workers and to unify their ideas, the creation of an effective presentation is necessary. In addition, the construction process has to work relatively in harmony with the situation of the construction site. Moreover, the use of computer graphics (CG) can help to clearly visualize sequences of the construction process, simulate changes in the execution of the project and carry them out smoothly before the construction begins. The paper introduces the application of Frame Accurate Animation (FAA) as an important implement for construction management. It also explains the use of FAA in an illustrative example of the Sashiki Bridge (provisional name) construction project in Kumamoto. Our main expectation is for the work environment to become more cooperative, efficient and safe.","PeriodicalId":340240,"journal":{"name":"1999 IEEE International Conference on Information Visualization (Cat. No. PR00210)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124763134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating synthetic image integrated with real images in Open Inventor","authors":"M. Abásolo, Francisco J. Perales López","doi":"10.1109/IV.1999.781592","DOIUrl":"https://doi.org/10.1109/IV.1999.781592","url":null,"abstract":"We describe a simple system for producing synthetic 3D scenes integrated with real images captured with a camera, by using the graphic library, Open Inventor. All parts of the system are described: real image capture, parameters measuring from the real scene, synthetic scene formation and coherent integration between the synthetic scene and the captured sequence. Finally we present some results of the proposed system.","PeriodicalId":340240,"journal":{"name":"1999 IEEE International Conference on Information Visualization (Cat. No. PR00210)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127472337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integration of high-end and low-end animation tools: a case study of the production pathway for the Chicago Bulls broadcast animation","authors":"Carlos R. Morales","doi":"10.1109/IV.1999.781582","DOIUrl":"https://doi.org/10.1109/IV.1999.781582","url":null,"abstract":"The broadcast animation division of High Voltage Software was contracted by the Chicago Bulls Organization to complete an animated opening to be shown at the United Center and on television before each home game. A team of animators and compositors developed a production pathway based on the use of motion capture data to coordinate the use of high-end NURBS based animation and lower-end polygonal based animation tools, and compositing software.","PeriodicalId":340240,"journal":{"name":"1999 IEEE International Conference on Information Visualization (Cat. No. PR00210)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123329687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual intelligence-turning data into knowledge","authors":"M. Jern","doi":"10.1109/IV.1999.781525","DOIUrl":"https://doi.org/10.1109/IV.1999.781525","url":null,"abstract":"Visual intelligence is a process that provides information visualization technology to address the challenge of discovering and exploiting information. For innovative companies, visual intelligence applications are improving their decision-making capabilities by performing spatial and multivariate visual data analysis and providing rapid access to comprehensible information. This paper examines the issue faced by most business-how to turn data into understandable business knowledge, and make this knowledge accessible to persons who rely on it. The role of information visualization techniques and the visual user interface (VUI) in the overall visual intelligence process is assessed. The integration of data warehousing, information visualization, Web and new visual interaction techniques will change and expand the paradigms of current work of humans using computers. Visual intelligence will improve visual communication that takes place in all elements of the user interface and provide decreased \"time-to-enlightenment\".","PeriodicalId":340240,"journal":{"name":"1999 IEEE International Conference on Information Visualization (Cat. No. PR00210)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126863234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A robust image mosaicing technique capable of creating integrated panoramas","authors":"Yihong Gong, Guido Proietti, D. LaRose","doi":"10.1109/IV.1999.781530","DOIUrl":"https://doi.org/10.1109/IV.1999.781530","url":null,"abstract":"Existing featureless image mosaicing techniques do not pay enough attention to the robustness of the image registration process, and are not able to combine multiple video sequences into an integrated panoramic view. These problems have certainly restricted applications of the existing methods for large-scale panorama composition, video content overview and information visualization. In this paper we propose a method that is able to create an integrated panoramic view for a virtual camera from multiple video sequences which each records a part of a vast scene. The method further enables the user to visualize the integrated panoramic view from an arbitrary viewpoint and orientation by altering the parameters of the virtual camera. To ensure a robust and accurate panoramic view synthesis from long video sequences, we attach a global positioning system (GPS) to the video camera, and utilize its output data to provide initial estimates for the camera's translational parameters, and to prevent the camera parameter recovery process from falling into spurious local minima. Our proposed method is not only suitable for video content overview but also applicable to the areas of information visualization, team collaborations, disastrous rescues, etc. The experimental results demonstrate the effectiveness of the proposed method.","PeriodicalId":340240,"journal":{"name":"1999 IEEE International Conference on Information Visualization (Cat. No. PR00210)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122279730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optical occlusion and shadows in a 'see-through' augmented reality display","authors":"E. Tatham","doi":"10.1109/IV.1999.781548","DOIUrl":"https://doi.org/10.1109/IV.1999.781548","url":null,"abstract":"As distinct from virtual reality, which seeks to immerse the user in a fully synthetic world, computer-augmented reality systems supplement sensory input with computer-generated information. The principle has, for a number of years, been employed in the head-up display systems used by military pilots and usually comprises an optical display arrangement based on part-silvered mirrors that reflect computer graphics into the eye in such a way that they appear superimposed on the real-world view. Compositing real and virtual worlds offers many new and exciting possibilities but also presents some significant challenges, particularly with respect to applications for which the real and virtual elements need to be integrated convincingly. Unfortunately, the inherent difficulties are compounded further in situations where a direct, unpixellated view of the real world is desired, since current optical systems do not allow real-virtual occlusion, nor a number of other essential visual interactions. The paper presents a generic model of augmented reality as a context for discussion, and then describes a simple but effective technique for providing a significant degree of control over the visual compositing of real and virtual worlds.","PeriodicalId":340240,"journal":{"name":"1999 IEEE International Conference on Information Visualization (Cat. No. PR00210)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128884454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A prototype Hotel Browsing system using Java3D","authors":"D. Ball, M. Mirmehdi","doi":"10.1109/IV.1999.781597","DOIUrl":"https://doi.org/10.1109/IV.1999.781597","url":null,"abstract":"Java3D is an application-centred approach to building 3D worlds. We use Java3D and VRML to design a prototype WWW-based 3D Hotel Browsing system. A Java3D scene graph viewer was implemented to interactively explore objects in a virtual universe using models generated by a commercial computer graphics suite and imported using a VRML file loader. A special collision prevention mechanism is also devised. This case study is reported on by reviewing the current aspects of the prototype system.","PeriodicalId":340240,"journal":{"name":"1999 IEEE International Conference on Information Visualization (Cat. No. PR00210)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134440766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information highlighting","authors":"Timothy Ostler","doi":"10.1109/IV.1999.781608","DOIUrl":"https://doi.org/10.1109/IV.1999.781608","url":null,"abstract":"The paper reports on an empirical study in which, for the purposes of developing an automatic highlighting tool, 11 subjects were asked to highlight important passages in an 1111-word text. These results were cross-referenced with a range of word attributes in order to test hypotheses about the principles underlying highlighting decisions. With this data, a combination of selection criteria was proposed that was able to predict the probability of highlighting with a correlation of approximately 0.56, compared with an average correlation of 0.47 amongst the test subjects, and a figure of 0.30 for Word97's highlighting feature. The paper argues that the common factor behind the most successful hypotheses was that they are all signals denoting \"new\" as opposed to \"given\" information at the discourse level. Although based on a very limited sample, this observation seems clear enough to make detecting such signals a promising candidate for further research.","PeriodicalId":340240,"journal":{"name":"1999 IEEE International Conference on Information Visualization (Cat. No. PR00210)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124175680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}