eScholarship
Open Access Publications from the University of California

UC San Diego

UC San Diego Electronic Theses and Dissertations

Accurate, Efficient, and Robust 3D Reconstruction of Static and Dynamic Objects

Abstract

3D reconstruction is the process of recovering the shape and appearance of a real scene or object from a set of images of that scene. Realistic scene and object reconstruction is essential in many applications, such as robotics, computer graphics, Tele-Immersion (TI), and Augmented Reality (AR). This thesis explores accurate, efficient, and robust methods for the 3D reconstruction of static and dynamic objects from RGB-D images.

Accurate 3D reconstruction requires depth maps of high geometric quality and resolution. However, depth maps are often captured at low quality or low resolution, due either to sensor hardware limitations or to estimation errors. A new sampling-based robust multi-lateral filtering method is proposed herein to improve the resolution and quality of depth data. The enhancement is achieved by selecting reliable depth samples from a neighborhood of pixels and applying multi-lateral filtering guided by high-quality, high-resolution color images.

Camera pose estimation is one of the most important operations in 3D reconstruction, since any minor error in this process may distort the resulting reconstruction. We present a robust method for camera tracking and surface mapping using a handheld RGB-D camera that remains effective in challenging situations, such as fast camera motion or geometrically featureless scenes. The method combines a quaternion-based orientation estimation step for initial sparse alignment with a weighted Iterative Closest Point (ICP) method for dense alignment, improving both the convergence rate of the optimization and the accuracy of the resulting trajectory. We also present a novel approach for reconstructing static objects and scenes with realistic surface geometry using a handheld RGB-D camera.
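The sample-rejecting, color-guided filtering idea described above can be sketched as follows. This is a minimal illustration, not the thesis's exact filter: the function name, kernel parameters, and the simple "zero depth means unreliable" rule are assumptions made for the example, which combines a spatial kernel, a color-similarity kernel, and a depth-range kernel while skipping invalid samples.

```python
import numpy as np

def multilateral_filter(depth, color, radius=2, sigma_s=2.0, sigma_c=10.0, sigma_d=0.05):
    """Refine a noisy depth map guided by a registered color image.

    Each output pixel is a weighted average of neighboring depth samples,
    weighted by spatial distance, color similarity, and depth similarity.
    Samples with depth <= 0 are treated as unreliable and skipped."""
    h, w = depth.shape
    out = depth.astype(np.float64).copy()
    for y in range(h):
        for x in range(w):
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    d = depth[ny, nx]
                    if d <= 0:  # reject unreliable (invalid) depth samples
                        continue
                    ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    wc = np.exp(-np.sum((color[ny, nx].astype(float)
                                         - color[y, x]) ** 2) / (2 * sigma_c ** 2))
                    # if the center depth is itself invalid, drop the range term
                    wd = 1.0 if depth[y, x] <= 0 else \
                        np.exp(-(d - depth[y, x]) ** 2 / (2 * sigma_d ** 2))
                    wgt = ws * wc * wd
                    num += wgt * d
                    den += wgt
            if den > 0:
                out[y, x] = num / den
    return out
```

Because invalid samples are excluded from the weighted average rather than smoothed over, holes in the depth map are filled from reliable neighbors instead of being blended with zeros.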
To obtain high-resolution RGB images, an additional HD camera is attached to the top of a Kinect and calibrated, enabling reconstruction of a 3D model with realistic surface geometry and high-quality color textures. We extend our depth map refinement method by exploiting high-frequency information in the color images to recover finer-scale surface geometry. In addition, we use our robust camera pose estimation to accurately estimate the orientation of the camera in the global coordinate system.

For the reconstruction of moving objects, a novel dynamic scene reconstruction system using multiple commodity depth cameras is proposed. Instead of an expensive multi-view capture setup, our system requires only four Kinects, carefully positioned to generate full 3D surface models of objects. We introduce a novel depth synthesis method for point cloud densification and noise removal in the depth data. In addition, a new weighting function is presented to overcome the drawbacks of the existing volumetric representation method.
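For context on the volumetric representation being improved: the standard baseline is the running weighted-average update of a truncated signed distance function (TSDF) in the style of Curless and Levoy. The sketch below shows that baseline update only, not the thesis's proposed weighting function; the function name and parameters are illustrative.

```python
import numpy as np

def fuse_tsdf(tsdf, weight, new_sdf, new_w, trunc=0.05):
    """Fuse a new signed-distance observation into a TSDF voxel grid.

    tsdf, weight : current per-voxel TSDF values and accumulated weights
    new_sdf      : per-voxel signed distances from the new depth frame (meters)
    new_w        : weight of the new observation (scalar or per-voxel array)
    trunc        : truncation distance; distances are clipped to [-1, 1]
                   after normalization by trunc
    """
    d = np.clip(new_sdf / trunc, -1.0, 1.0)          # truncate and normalize
    fused = (tsdf * weight + d * new_w) / (weight + new_w)
    return fused, weight + new_w                      # updated TSDF and weights
```

A fixed `new_w` treats all observations equally, which is exactly the drawback a view- or noise-dependent weighting function (as proposed in the thesis) aims to address, for example by down-weighting oblique or distant depth samples.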
