Introduction to Image-Based Rendering
Lining Yang (yangl1@ornl.gov)
Parts of this slide set reference slides used at Stanford by Prof. Pat Hanrahan and Philipp Slusallek.
11/18/2003
References:
- S. E. Chen, "QuickTime VR – An Image-Based Approach to Virtual Environment Navigation," Proc. SIGGRAPH '95, pp. 29-38, 1995
- S. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen, "The Lumigraph," Proc. SIGGRAPH '96, pp. 43-54, 1996
- M. Levoy and P. Hanrahan, "Light Field Rendering," Proc. SIGGRAPH '96, 1996
- L. McMillan and G. Bishop, "Plenoptic Modeling: An Image-Based Rendering System," Proc. SIGGRAPH '95, pp. 39-46, 1995
- J. Shade, S. Gortler, L.-W. He, and R. Szeliski, "Layered Depth Images," Proc. SIGGRAPH '98, pp. 231-242, 1998
- H.-Y. Shum and L.-W. He, "Rendering With Concentric Mosaics," Proc. SIGGRAPH '99, pp. 299-306, 1999
Problem Description
- Complex rendering of synthetic scenes takes too long to finish
- Interactivity is impossible
- Interactive visualization of extremely large scientific data is also not possible
- Image-Based Rendering (IBR) is used to accelerate the rendering
Examples of Complex Rendering
POV-Ray quarterly competition site, March – June 2001
Examples of Large Datasets
LLNL ASCI quantum molecular simulation site
Image-Based Rendering (IBR)
- The models for conventional polygon-based graphics have become too complex
- IBR represents complex 3D environments using a set of images from different (predefined) viewpoints
- It produces images for new views from these finite initial images plus additional information, such as depth
- The computational complexity is bounded by the image resolution, not the scene complexity
Image-Based Rendering (IBR)
From Marc Levoy's 1997 SIGGRAPH talk
Overview of IBR Systems
- Plenoptic function
- QuickTime VR
- Light fields / Lumigraph
- Concentric mosaics
- Plenoptic modeling and layered depth images
Plenoptic Function
- The plenoptic function (7D) describes the light rays passing through:
  - the center of a camera at any location (x, y, z)
  - at any viewing angle (θ, φ)
  - for every wavelength (λ)
  - at any time (t)
Limiting Dimensions of Plenoptic Functions
- Plenoptic modeling (5D): ignore time and wavelength
- Lumigraph / light field (4D): constrain the scene (or the camera view) to a bounding box
- 2D panorama: fix the viewpoint; allow only the viewing direction and camera zoom to change
Limiting Dimensions of Plenoptic Functions
- Concentric mosaics (3D): index all input image rays by 3 parameters: radius, rotation angle, and vertical elevation
QuickTime VR
- Uses environment maps:
  - Cylindrical
  - Cubic
  - Spherical
- At a fixed point, sample all ray directions
- Users can look in both horizontal and vertical directions
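To make the environment-map lookup concrete, here is a minimal sketch of mapping a viewing direction to a texel in a cylindrical panorama. The plane/axis conventions (+z as azimuth 0, +y up) and the `v_fov` parameter are assumptions of this sketch, not part of QuickTime VR's actual file format.

```python
import math

def cylindrical_lookup(direction, width, height, v_fov=math.pi / 2):
    """Map a viewing direction (dx, dy, dz) to (col, row) in a
    cylindrical panorama of size width x height.

    Assumed conventions: +z is azimuth 0, +y is up; v_fov is the
    vertical field of view covered by the panorama strip.
    """
    dx, dy, dz = direction
    r = math.hypot(dx, dz)                      # distance to the cylinder axis
    if r == 0:
        raise ValueError("direction along the axis has no cylindrical image")
    azimuth = math.atan2(dx, dz)                # -pi .. pi around the cylinder
    h = dy / r                                  # height where the ray meets a unit cylinder
    h_max = math.tan(v_fov / 2)                 # vertical extent of the strip
    col = (azimuth / (2 * math.pi) + 0.5) * width
    row = (0.5 - h / (2 * h_max)) * height      # row 0 at the top
    return col, row
```

Looking straight ahead along +z lands in the center of the panorama; rotating the view direction only changes which texels are fetched, which is why a fixed-viewpoint pan/tilt/zoom is so cheap.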
Mars Pathfinder Panorama
Creating a Cylindrical Panorama
From www.quicktimevr.apple.com
Commercial Products
- QuickTime VR, LivePicture, IBM (Panoramix)
- VideoBrush
- IPIX (PhotoBubbles), Be Here, etc.
Panoramic Cameras
- Rotating cameras
  - Kodak Cirkut
  - Globuscope
- Stationary cameras
  - Be Here
QuickTime VR
- Advantages:
  - Uses an environment map
  - Easy and efficient
- Disadvantages:
  - Cannot move away from the current viewpoint
  - No motion parallax
Light Field and Lumigraph
- Take advantage of empty space to reduce the plenoptic function to 4D:
  - Object (or viewpoint) inside a convex hull
  - Radiance does not change along a line unless blocked
Lightfield Parameterization
- Parameterize the radiance lines by their intersections with two planes: a "light slab"
- L(u, v, s, t)
[Diagram: a ray crossing the (u, v) plane and the (s, t) plane]
Two-Plane Parameterization
[Diagram: camera plane (u, v), focal plane (s, t), object beyond the focal plane]
Reconstruction
- (u, v) and (s, t) can be calculated by intersecting the image ray with the two planes
- This can also be done via texture mapping
  - (x, y) to (u, v) or (s, t) is a projective mapping
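The intersection step above can be sketched directly. This is a minimal version that assumes the camera plane sits at z = 0 and the focal plane at z = 1; the actual plane placement and orientation depend on how the light slab was captured.

```python
def ray_to_light_slab(origin, direction, uv_z=0.0, st_z=1.0):
    """Intersect a ray with the camera plane (z = uv_z) and the
    focal plane (z = st_z) to get its slab coordinates (u, v, s, t).

    origin, direction: 3-tuples (x, y, z) for the image ray.
    Plane placement at z = 0 and z = 1 is an assumption of this sketch.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        raise ValueError("ray is parallel to the slab planes")
    t_uv = (uv_z - oz) / dz                 # parameter at the camera plane
    t_st = (st_z - oz) / dz                 # parameter at the focal plane
    u, v = ox + t_uv * dx, oy + t_uv * dy
    s, t = ox + t_st * dx, oy + t_st * dy
    return u, v, s, t
```

Rendering a novel view then reduces to computing (u, v, s, t) per output pixel and interpolating the stored radiance at those coordinates, which is exactly the projective mapping the slide says texture hardware can do.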
Capturing Lightfields
- Need a 2D set of (2D) images
- Choices:
  - Camera motion: human vs. computer
  - Constraints on camera motion: planar vs. spherical
    - Easier to construct
    - Coverage and sampling uniformity
Light Field Gantry
- Applications:
  - Digitizing light fields
  - Measuring BRDFs
  - Range scanning
- Designed by Marc Levoy et al.
Light Field
- Key ideas:
  - 4D function, valid outside the convex hull
  - 2D slice = image
    - Insert to create
    - Extract to display
Lightfields
- Advantages:
  - Simpler computation vs. traditional CG
  - Cost independent of scene complexity
  - Cost independent of material properties and other optical effects
- Disadvantages:
  - Static geometry
  - Fixed lighting
  - High storage cost
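The storage-cost disadvantage is easy to quantify. A quick back-of-the-envelope for a raw two-plane light field (the grid sizes below are illustrative, not from any particular dataset):

```python
def lightfield_bytes(n_u, n_v, n_s, n_t, bytes_per_sample=3):
    """Raw (uncompressed) size of a two-plane light field:
    one RGB sample per (u, v, s, t) ray."""
    return n_u * n_v * n_s * n_t * bytes_per_sample

# e.g. a 32x32 grid of camera positions, each holding a 256x256 RGB image:
size = lightfield_bytes(32, 32, 256, 256)
print(size / 2**20, "MiB")   # 192 MiB uncompressed
```

Even this modest sampling is ~192 MiB before compression, which is why light-field systems lean on vector quantization and entropy coding.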
Concentric Mosaics
- Concentric mosaics: easy to capture, small in storage size
Concentric Mosaics
- A set of manifold mosaics constructed from slit images taken by cameras rotating on concentric circles
Sample Images
Rendering a Novel View
Construction of Concentric Mosaics
- Synthetic scenes:
  - Uniform sampling in the angular direction
  - Square-root sampling in the radial direction
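A sketch of the camera placement described above, under one plausible reading of "square-root sampling": radii proportional to the square root of the circle index, so sample density is uniform per unit area of the disc. The exact parameterization used in the paper may differ.

```python
import math

def concentric_mosaic_cameras(n_circles, n_angles, r_max=1.0):
    """Camera placements for rendering a synthetic concentric mosaic:
    uniform steps in rotation angle, square-root spacing in radius.

    Returns a list of (x, y, phi): position on the circle and the
    rotation angle at which the slit image is taken.
    """
    cameras = []
    for i in range(1, n_circles + 1):
        r = r_max * math.sqrt(i / n_circles)    # square-root radial sampling
        for j in range(n_angles):
            phi = 2 * math.pi * j / n_angles    # uniform angular sampling
            cameras.append((r * math.cos(phi), r * math.sin(phi), phi))
    return cameras
```

Each (x, y, phi) entry corresponds to one slit image; stacking the slits taken around one circle yields one mosaic, and the set of circles yields the 3D (radius, angle, elevation) ray index.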
Construction of Concentric Mosaics (2)
- Real scenes
[Photos: capture rigs ranging from bulky and costly to cheaper and easier]
Construction of Concentric Mosaics (3)
- Problems with a single camera:
  - Limited horizontal field of view
  - Non-uniform horizontal spatial resolution
- The video sequence can be compressed with VQ and entropy encoding (25x)
- The compressed stream renders at 20 fps on a Pentium II 300 MHz
Results
Results (2)
Image Warping
- McMillan's 5D plenoptic modeling system
- Render or capture reference views
- Creating novel views:
  - Use the reference views' color and depth information with the warping equation
  - For opaque scenes, the location (depth) of the point reflecting the color is usually determined
  - For real imagery, depth is calculated using vision techniques
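The per-pixel warp can be sketched with a standard pinhole model: back-project a reference pixel through its depth, transform it into the novel view, and re-project. This is a simplified stand-in for McMillan's warping equation (which is usually evaluated incrementally); sharing one intrinsics matrix K between the views is an assumption of this sketch.

```python
import numpy as np

def warp_pixel(px, py, depth, K, R, t):
    """Forward-warp one reference-view pixel into a novel view.

    K: 3x3 intrinsics shared by both views (an assumption here).
    R, t: rotation and translation from the reference to the novel view.
    Returns the warped pixel coordinates and its depth in the novel view.
    """
    ray = np.linalg.inv(K) @ np.array([px, py, 1.0])
    point = depth * ray                        # 3D point in the reference camera frame
    p2 = K @ (R @ point + t)                   # transform and project into the novel view
    return p2[0] / p2[2], p2[1] / p2[2], p2[2]
```

With R = I and t = 0 the pixel maps to itself; as t grows, nearer points (smaller depth) move farther across the image, which is exactly the motion parallax that pure panoramas cannot reproduce.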
Image Warping (Filling Holes)
- Disocclusion problem: objects occluded in the reference view can become visible in the new view
- Fill in the holes from other viewpoints or images (William Mark et al.)
Layered Depth Images
- Different primitives according to depth values:
  - Image with depth
  - LDI
  - Polygons
Layered Depth Images
- Idea:
  - Handle disocclusion
  - Store invisible geometry in depth images
Layered Depth Image
- Data structure:
  - Per pixel: a list of depth samples
  - Per depth sample:
    - RGBA
    - Z
    - Encoded normal direction and distance
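The data structure above can be sketched as follows. Field names are illustrative, and storing the encoded normal/distance as a single integer is an assumption of this sketch rather than the paper's exact bit layout.

```python
from dataclasses import dataclass, field

@dataclass
class DepthSample:
    """One sample in a layered depth pixel."""
    rgba: tuple          # color and alpha
    z: float             # depth along the pixel's ray
    normal_dist: int     # encoded normal direction and distance

@dataclass
class LayeredDepthPixel:
    """Per-pixel list of depth samples, kept sorted front to back."""
    samples: list = field(default_factory=list)

    def insert(self, sample):
        # keep samples ordered by depth so the layers stay front-to-back
        self.samples.append(sample)
        self.samples.sort(key=lambda s: s.z)
```

Because each pixel carries every surface its ray intersects, a novel view can warp the deeper samples into holes that the front surface leaves behind.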
Layered Depth Images
- Computation:
  - Implicit ordering information
    - The LDI is broken into four regions according to the epipolar point
  - Incremental warping computation
    - start + xincr (back-to-front order)
  - Splat size computation
    - Table lookup
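The key property of the ordering is that pixels are visited in an occlusion-compatible order relative to the epipolar point, so later (closer) pixels can simply overwrite earlier ones without a Z-buffer. The sketch below captures only that property with a global distance sort; the paper's actual method scans the four regions incrementally and also depends on whether the epipole lies in front of or behind the LDI camera.

```python
def occlusion_compatible_order(width, height, epipole):
    """Enumerate pixel coordinates so pixels farther from the epipolar
    point come first (a simplified stand-in for the four-region scan).

    Warping pixels in this order lets closer surfaces overwrite farther
    ones without depth comparisons.
    """
    ex, ey = epipole
    coords = [(x, y) for y in range(height) for x in range(width)]
    # farther from the epipole first -> nearer pixels are drawn last
    coords.sort(key=lambda p: -((p[0] - ex) ** 2 + (p[1] - ey) ** 2))
    return coords
```

A real implementation avoids the sort entirely: within each of the four regions the scan direction toward (or away from) the epipole already yields this order with pure incremental addition.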
Layered Depth Images