{"title":"Interactive exploration of acquired 3D data","authors":"L. Nyland, David K. McAllister, V. Popescu, Chris McCue, A. Lastra","doi":"10.1117/12.384883","DOIUrl":"https://doi.org/10.1117/12.384883","url":null,"abstract":"The goal of our image-based rendering group is to accurately render scenes acquired from the real world. To achieve this goal, we capture scene data by taking 3D panoramic photographs from multiple locations and merge the acquired data into a single model from which real-time 3D rendering can be performed. In this paper, we describe the acquisition hardware and rendering system we use to pursue this goal, with particular emphasis on the techniques that support interactive exploration.","PeriodicalId":354140,"journal":{"name":"Applied Imaging Pattern Recognition","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123995622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualizing motion in video","authors":"L. Brown, S. Crayne","doi":"10.1117/12.384866","DOIUrl":"https://doi.org/10.1117/12.384866","url":null,"abstract":"In this paper, we present a visualization system and method for measuring, inspecting, and analyzing motion in video. Starting from a simple motion video, the system creates a still-image representation that we call a digital strobe photograph. Similar to visualization techniques used in conventional film photography to capture high-speed motion using strobe lamps or very fast shutters, and to capture time-lapse motion where the shutter is left open, this methodology creates a single image showing the motion of one or a small number of objects over time. The method is based on digital background subtraction; we assume that the background is stationary or at most slowly changing and that the camera position is fixed. The method can display the motion based on a parameter indicating the time step between successive movements. It can also overcome problems of visualizing movement that is obscured by previous movements. The method is used in an educational software tool for children to measure and analyze various motions. 
Examples are given using simple physical objects such as balls and pendulums, astronomical events such as the path of the stars around the north pole at night, or the different types of locomotion used by snakes.","PeriodicalId":354140,"journal":{"name":"Applied Imaging Pattern Recognition","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115615891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
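The "digital strobe photograph" idea in the abstract above (fixed camera, static background, motion composited from time-stepped frames via background subtraction) can be sketched as follows. This is our own minimal illustration of the general technique, not the authors' implementation; the function name, the median background estimate, and the threshold value are all assumptions.

```python
# Hypothetical sketch of a digital strobe photograph: with a fixed camera and
# a (near-)static background, the moving object is composited from every k-th
# frame into a single still image via background subtraction.
import numpy as np

def digital_strobe(frames, time_step=5, threshold=30):
    """frames: list of HxW grayscale uint8 arrays from a fixed camera."""
    stack = np.stack(frames).astype(np.int16)
    background = np.median(stack, axis=0)      # static-background estimate
    strobe = background.copy()
    for frame in stack[::time_step]:           # 'time step between movements'
        moving = np.abs(frame - background) > threshold
        strobe[moving] = frame[moving]         # later motion overwrites earlier
    return strobe.astype(np.uint8)
```

With `time_step` large, successive object positions are well separated, as in a strobe-lamp photograph; with `time_step=1`, the result approaches an open-shutter time-lapse.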
{"title":"Volumetric image display for complex 3D data visualization","authors":"C. Tsao, Jyhcheng Chen","doi":"10.1117/12.384869","DOIUrl":"https://doi.org/10.1117/12.384869","url":null,"abstract":"A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and provides all major physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space; these form a volumetric image because of the after-image effect of the human eye. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interaction with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.","PeriodicalId":354140,"journal":{"name":"Applied Imaging Pattern Recognition","volume":"3905 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129936441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NIHmagic: 3D visualization, registration, and segmentation tool","authors":"R. Freidlin, C. J. Ohazama, A. Arai, Delia P. McGarry, J. Panza, B. Trus","doi":"10.1117/12.384874","DOIUrl":"https://doi.org/10.1117/12.384874","url":null,"abstract":"Interactive visualization of multi-dimensional biological images has revolutionized diagnostic and therapy planning. Extracting complementary anatomical and functional information from different imaging modalities provides a synergistic analysis capability for quantitative and qualitative evaluation of the objects under examination. We have been developing NIHmagic, a visualization tool for research and clinical use, on the SGI Onyx2 InfiniteReality platform. Images are reconstructed into a 3D volume by volume rendering, a display technique that employs 3D texture mapping to provide a translucent appearance to the object. A stack of slices is rendered into a volume by an opacity mapping function, where the opacity is determined by the intensity of the voxel and its distance from the viewer. NIHmagic incorporates 3D visualization of time-sequenced images, manual registration of 2D slices, segmentation of anatomical structures, and color-coded re-mapping of intensities. Visualization of MRI, PET, CT, ultrasound, and 3D-reconstructed electron microscopy images has been accomplished using NIHmagic.","PeriodicalId":354140,"journal":{"name":"Applied Imaging Pattern Recognition","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128079861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
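The opacity mapping described in the NIHmagic abstract above (opacity driven by voxel intensity and distance from the viewer) can be illustrated with a minimal back-to-front compositing loop. The exponential distance falloff and all parameter values here are our assumptions; the paper does not specify its mapping function.

```python
# Minimal back-to-front alpha-compositing sketch: each slice's opacity comes
# from voxel intensity, attenuated by distance from the viewer.
import numpy as np

def composite_volume(volume, intensity_scale=1.0, distance_falloff=0.02):
    """volume: DxHxW uint8 array, slice 0 nearest the viewer. Returns HxW image."""
    depth = volume.shape[0]
    image = np.zeros(volume.shape[1:], dtype=np.float64)
    # walk back-to-front so nearer slices are composited over farther ones
    for z in range(depth - 1, -1, -1):
        slab = volume[z].astype(np.float64) / 255.0
        alpha = np.clip(slab * intensity_scale, 0.0, 1.0)
        alpha *= np.exp(-distance_falloff * z)    # dimmer with distance
        image = alpha * slab + (1.0 - alpha) * image
    return image
```

Hardware 3D texture mapping, as used by NIHmagic, performs essentially this compositing per fragment on the GPU rather than in a Python loop.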
{"title":"Simulation of body exposure to explosion","authors":"W. Oliver","doi":"10.1117/12.384861","DOIUrl":"https://doi.org/10.1117/12.384861","url":null,"abstract":"An ordnance-disposal expert was called to a remote site to dispose of a buried cache of explosives which had been hidden by a felon. The cache was buried in a forest and consisted of a large number of old sticks of dynamite and blasting caps. The explosives and caps had been collected from abandoned mines and were old, corroded, and fragile. The ordnance expert declined to explode the cache in place (a common and safe way of disposing of old explosives) because he feared starting a forest fire. Instead, the explosives were removed from the burial site and moved to a dry streambed. The dynamite was burned. A small hole was dug in the streambed and the blasting caps as well as a few other small explosive devices were placed inside. Witnesses state that they were sent away from the site in preparation for the disposal. Instead of the usual shouting of \"fire in the hole\" followed by an explosion, there was simply an explosion. The witnesses returned to the site and found the explosives expert lying by the small pit in extremis. He died shortly thereafter of massive blast and shrapnel wounds. An important question in accidents such as this is whether it is the result of singular circumstances, lack of adherence to standard procedures, or failure of standard procedures. 
While the particulars of the full investigation are beyond the scope of this paper, a number of procedural questions were raised; these included the possibility of generation of static electricity from clothing, the possible presence of transmitters in the immediate area, and other factors.","PeriodicalId":354140,"journal":{"name":"Applied Imaging Pattern Recognition","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132901639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time pose determination and reality registration system","authors":"C. Cohen, G. Beach, D. Haanpaa, C. Jacobus","doi":"10.1117/12.384867","DOIUrl":"https://doi.org/10.1117/12.384867","url":null,"abstract":"We have developed and demonstrated a vision-based pose determination and reality registration system for identifying objects in an unstructured visual environment. A wire-frame template of the object to be identified is compared to the input images from one or more cameras. If the object is found, an output of the object's position and orientation is computed. The placement of the template can be performed by a human in the loop, or through an automated real-time front-end system. The three steps for classification and pose determination consist of two estimation modules and a module that refines the estimates to determine an answer. The first module in the sequence uses input images and models to generate a coarse pose estimate for the object. The second module in the sequence uses the estimates from the coarse pose estimation module, input images, and the model to further refine the pose. The last module in the sequence uses the fine pose estimation, the images, and the model to determine an exact match between the model and the image.","PeriodicalId":354140,"journal":{"name":"Applied Imaging Pattern Recognition","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114490835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
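The coarse-to-fine pipeline in the abstract above has a simple structural shape: each of the three modules consumes the images, the wire-frame model, and the previous estimate, and emits a refined estimate. A hedged sketch of that control flow (function and type names are ours, not the paper's):

```python
# Structural sketch of the three-stage pose pipeline: coarse estimate ->
# refined pose -> exact model-image match. Each stage is pluggable.
from typing import Any, Callable, Optional, Sequence

Pose = Any  # position + orientation, in whatever frame the model uses
Stage = Callable[[Sequence, Any, Optional[Pose]], Pose]

def run_pose_pipeline(images: Sequence, model,
                      coarse: Stage, fine: Stage, match: Stage) -> Pose:
    pose = coarse(images, model, None)   # module 1: coarse pose from images + model
    pose = fine(images, model, pose)     # module 2: refine the coarse estimate
    pose = match(images, model, pose)    # module 3: exact match of model to image
    return pose
```

The value of this shape is that each stage narrows the search space for the next, so the expensive exact-matching stage starts from a nearly correct pose.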
{"title":"Improved vision in forensic documentation: forensic 3D/CAD-supported photogrammetry of bodily injury external surfaces combined with volumetric radiologic scanning of bodily injury internal structures provides more investigative leads and stronger forensic evidence","authors":"M. Thali, M. Braun, B. Kneubuehl, W. Brueschweiler, P. Vock, R. Dirnhofer","doi":"10.1117/12.384876","DOIUrl":"https://doi.org/10.1117/12.384876","url":null,"abstract":"In the documentation of forensically relevant injuries, forensic 3D/CAD-supported photogrammetry plays an important role from the reconstructive point of view, particularly when a detailed 3D reconstruction is vital. This was demonstrated with an experimentally produced 'injury' to a head model, the 'skin-skull-brain model'. The injury-causing instrument, drawn from a real forensic case, was a specifically formed weapon.","PeriodicalId":354140,"journal":{"name":"Applied Imaging Pattern Recognition","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128933049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visible Embryo Project: a platform for spatial genomics","authors":"M. Doyle, A. Noe, G. Michaels","doi":"10.1117/12.384880","DOIUrl":"https://doi.org/10.1117/12.384880","url":null,"abstract":"Congenital malformations and diseases have challenged the humanistic and intellectual resources of mankind for ages. Generations of scientists have labored to understand their causes and effect their treatments. Efforts such as the Human Genome Project will produce the raw information base from which many cures and treatments may emerge. In order to begin to put meaning to the mountain of data being produced by the Genome Project, one must analyze this information within the context of the developing organism. The discipline of study of the nature of organism development has been referred to as embryology for over a century.","PeriodicalId":354140,"journal":{"name":"Applied Imaging Pattern Recognition","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123151576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"New generation of content addressable memories for associative processing","authors":"H. G. Lewis, Paul Giambalov","doi":"10.1117/12.384870","DOIUrl":"https://doi.org/10.1117/12.384870","url":null,"abstract":"Content addressable memories (CAMs) store both key and association data. A key is presented to the CAM when it is searched, and all of the addresses are scanned in parallel to find the address referenced by the key. When a match occurs, the corresponding association is returned. With the explosion of telecommunications packet-switching protocols, large database servers, routers, and search engines, a new generation of dense sub-micron high-throughput CAMs has been developed. The introduction of this paper presents a brief history and tutorial on CAMs, their many uses and advantages, and describes the architecture and functionality of several of MUSIC Semiconductors' CAM devices. In subsequent sections of the paper we address using associative processing to accommodate the continued increase in sensor resolution, number of spectral bands, and required coverage; the desire to implement real-time target cueing; and the data flow and image processing required for optimum performance of reconnaissance and surveillance Unmanned Aerial Vehicles (UAVs). To be competitive, the system designer must provide the most computational power per watt, per dollar, and per cubic inch, within the boundaries of cost-effective UAV environmental control systems. 
To address these problems we demonstrate leveraging DARPA- and DoD-funded Commercial Off-the-Shelf technology to integrate CAM-based associative processing into a real-time heterogeneous multiprocessing system for UAVs and other platforms with limited weight, volume, and power budgets.","PeriodicalId":354140,"journal":{"name":"Applied Imaging Pattern Recognition","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129243355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
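The CAM lookup described in the abstract above (present a key, compare against all stored entries at once, return the matching entry's association) can be modeled in software. The loop below stands in for what hardware does in a single parallel cycle; the per-entry don't-care mask is our illustrative addition, modeled on the ternary CAMs common in packet routing, and is not a detail from the paper.

```python
# Toy software model of a content addressable memory: keys map to
# associations, with an optional per-entry "don't care" bit mask.
class ContentAddressableMemory:
    def __init__(self):
        self.entries = []  # list of (key, mask, association)

    def write(self, key, association, mask=0):
        """Bits set in mask are ignored ('don't care') during matching."""
        self.entries.append((key, mask, association))

    def search(self, key):
        # Hardware compares every entry in parallel; we scan sequentially.
        for stored, mask, assoc in self.entries:
            if (stored & ~mask) == (key & ~mask):
                return assoc  # first match wins, as in priority-encoded CAMs
        return None
```

A router, for example, would store address prefixes with the low bits masked, so one entry matches an entire subnet in a single lookup.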
{"title":"Revitalizing the scatter plot","authors":"David A. Rabenhorst","doi":"10.1117/12.384881","DOIUrl":"https://doi.org/10.1117/12.384881","url":null,"abstract":"Computer-assisted interactive visualization has become a valuable tool for discovering the underlying meaning of tabular data, including categorical tabular data. The capabilities of traditionally mundane pictures such as scatter plots can be expanded to usefully depict categorical tabular data by incorporating annotations and transforms, and by integrating the extensions into an interactive system.","PeriodicalId":354140,"journal":{"name":"Applied Imaging Pattern Recognition","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121094480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
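One of the "transforms" the abstract above alludes to can be made concrete: to place categorical values on a scatter-plot axis, map each category level to an integer position, optionally with jitter so coincident points remain distinguishable. The function below is our illustrative sketch, not code from the paper.

```python
# Map categorical values to numeric scatter-plot axis positions, with
# optional jitter to separate points that share a category level.
import random

def categorical_to_axis(values, jitter=0.0, seed=0):
    """Return (coords, levels): numeric positions plus the category->position map."""
    rng = random.Random(seed)                              # reproducible jitter
    levels = {v: i for i, v in enumerate(sorted(set(values)))}
    coords = [levels[v] + rng.uniform(-jitter, jitter) for v in values]
    return coords, levels
```

The `levels` map doubles as the axis annotation: its keys become the tick labels at the integer positions its values name.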