Improved 3-D Scene Sampling By Camera Model Design

This dissertation proposes a new problem-solving paradigm, dubbed Camera Model Design, which overcomes the limitations of the planar pinhole camera model to address problems that persist in computer graphics, visualization, and computer vision. The Camera Model Design paradigm stresses four ideas. First, relax the constraints of the planar pinhole camera model, allowing generalized camera rays that are no longer straight and no longer converge. Second, the camera chosen for a particular application need not be limited to the planar pinhole camera. Third, camera models should no longer be static. Finally, a high level of computational efficiency should be maintained in order to support interactive exploration.
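
To make the idea of generalized rays concrete, here is a minimal sketch (not the dissertation's implementation) of a camera that stores one ray per pixel instead of deriving every ray from a single center of projection; all names are illustrative, and curved rays could be approximated by chaining several such piecewise-linear segments.

#include <cstddef>
#include <iostream>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, direction; };   // one straight segment; a curved ray would use several

// A camera model that owns one ray per pixel. The planar pinhole camera is the
// special case where all origins coincide; nothing here requires that.
class GeneralizedCamera {
public:
    GeneralizedCamera(std::size_t w, std::size_t h) : width(w), rays(w * h) {}

    void setRay(std::size_t u, std::size_t v, const Ray& r) { rays[v * width + u] = r; }
    const Ray& ray(std::size_t u, std::size_t v) const      { return rays[v * width + u]; }

private:
    std::size_t width;
    std::vector<Ray> rays;
};

int main() {
    GeneralizedCamera cam(640, 480);
    // The ray map can be rewritten every frame, which is what a dynamic
    // (non-static) camera model amounts to in this representation.
    cam.setRay(10, 20, Ray{{0.1f, 0.0f, 0.0f}, {0.0f, 0.0f, -1.0f}});
    const Ray& r = cam.ray(10, 20);
    std::cout << "ray origin x = " << r.origin.x << "\n";
}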

A High-Quality High-Fidelity Visualization Of The September 11 Attack On The World Trade Center

In this application paper, we describe the efforts of a multi-disciplinary team toward producing a visualization of the September 11 attack on the North Tower of New York’s World Trade Center. This was achieved by first designing and computing a finite element analysis (FEA) simulation of the impact between the aircraft and the top 20 stories of the building, and then by visualizing the FEA results with a state-of-the-art commercial animation system.

Image Warping For Compressing And Spatially Organizing A Dense Collection Of Images

We describe a spatial image hierarchy combined with an image compression scheme that meets the requirements of interactive image-based rendering (IBR) walkthroughs. By using image warping and exploiting image coherence over the image capture plane, we achieve compression performance comparable to traditional motion-compensated schemes such as MPEG, while still allowing image access along arbitrary paths.
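
As a rough illustration of warping-based prediction, the sketch below warps a reference image toward a nearby viewpoint and keeps only the residual, the same idea as motion compensation but with a geometric predictor; the grayscale images, the 1-D disparity model, and all names are simplifying assumptions, not the paper's actual hierarchy or codec.

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

struct Image {
    int width, height;
    std::vector<std::uint8_t> pixels;               // grayscale for brevity
    std::uint8_t at(int x, int y) const { return pixels[y * width + x]; }
};

// Predict the target view by resampling the reference along a per-pixel
// horizontal disparity (a stand-in for the real warp derived from camera
// spacing on the capture plane and scene depth).
Image warpPredict(const Image& ref, const std::vector<float>& disparity) {
    Image out = ref;                                // copy, then overwrite in place
    for (int y = 0; y < ref.height; ++y)
        for (int x = 0; x < ref.width; ++x) {
            int xs = x + static_cast<int>(disparity[y * ref.width + x]);
            if (xs >= 0 && xs < ref.width)
                out.pixels[y * out.width + x] = ref.at(xs, y);
        }
    return out;
}

// Only the prediction error needs to be compressed and stored; decoding an
// image then requires just its reference plus this residual, which is what
// permits access along arbitrary camera paths.
std::vector<int> residual(const Image& target, const Image& predicted) {
    std::vector<int> r(target.pixels.size());
    for (std::size_t i = 0; i < r.size(); ++i)
        r[i] = int(target.pixels[i]) - int(predicted.pixels[i]);
    return r;
}

int main() {
    Image ref{2, 1, {100, 200}};
    Image target{2, 1, {200, 200}};
    std::vector<float> disparity{1.0f, 0.0f};       // shift the first pixel by one
    std::vector<int> res = residual(target, warpPredict(ref, disparity));
    std::cout << "residual[0] = " << res[0] << "\n"; // small when the warp predicts well
}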

Three-Dimensional Display Rendering Acceleration Using Occlusion Camera Reference Images

Volumetric three-dimensional (3-D) displays allow the user to explore a 3-D scene free of joysticks, keyboards, goggles, or trackers. For non-trivial scenes, computing and transferring a 3-D image to the display takes hundreds of seconds, which is a serious bottleneck for many applications. We propose to represent the 3-D scene with an occlusion camera reference image (OCRI) for more efficient rendering.
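
As a very rough sketch of the intuition behind an occlusion camera (not the OCRI construction itself), the code below displaces projected samples radially away from the image projection of a pole placed through an occluder, with deeper samples displaced more, so that surfaces barely hidden from a pinhole reference view still receive their own pixels; the constants and falloff here are illustrative assumptions.

#include <cmath>
#include <iostream>

struct Point2 { float u, v; };

// Push a projected sample away from the pole; deeper samples (depth in [0,1],
// larger = farther) move more, and the effect fades to zero at `radius`.
Point2 distortAroundPole(Point2 p, Point2 pole, float depth,
                         float strength = 8.0f, float radius = 64.0f) {
    float du = p.u - pole.u, dv = p.v - pole.v;
    float d  = std::sqrt(du * du + dv * dv);
    if (d < 1e-6f || d > radius) return p;          // only distort near the pole
    float push = strength * depth * (1.0f - d / radius);
    return { p.u + du / d * push, p.v + dv / d * push };
}

int main() {
    Point2 pole{128.0f, 128.0f};
    // Two samples that project to the same pixel in a pinhole reference image
    // land on different pixels once the distortion is applied.
    Point2 nearSample = distortAroundPole({130.0f, 128.0f}, pole, 0.2f);
    Point2 farSample  = distortAroundPole({130.0f, 128.0f}, pole, 0.9f);
    std::cout << "near -> (" << nearSample.u << ", " << nearSample.v << ")\n"
              << "far  -> (" << farSample.u  << ", " << farSample.v  << ")\n";
}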

Study Of The Perception Of Three-Dimensional Spatial Relations For A Volumetric Display

We test the perception of 3-D spatial relations in 3-D images rendered by a 3-D display and compare it with perception on a high-resolution flat-panel display. We test subjects’ ability to determine whether an object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far).
