Friday, January 9, 2015

System Overview


Week 1 Update:

This week, I have been working on improved mouse/keyboard camera controls for the code that I have been developing. I have also added the brick pool data structure described in GigaVoxels, which is used to perform trilinear interpolation in texture memory when rendering an octree through cone tracing.
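As a rough sketch of what the brick pool lookup looks like on the GPU (the names, the 8x8x8 brick size, and the node layout below are my own placeholders, not final code), each octree node records where its brick lives inside one large 3D texture, and the texture hardware performs the trilinear interpolation:

    #include <cuda_runtime.h>

    // Placeholder node layout: each node stores the texel-space origin of
    // its brick inside the shared 3D "brick pool" texture.
    struct OctreeNode {
        float3 brickOrigin;
    };

    // Sample a node's brick at a local coordinate in [0,1]^3.  The 0.5 texel
    // offset centers the lookup on texel centers; the hardware then performs
    // the trilinear interpolation for us.
    __device__ float4 sampleBrick(cudaTextureObject_t brickPool,
                                  const OctreeNode& node, float3 local)
    {
        const float brickSize = 8.0f;  // assumed 8x8x8 bricks (placeholder)
        float x = node.brickOrigin.x + 0.5f + local.x * (brickSize - 1.0f);
        float y = node.brickOrigin.y + 0.5f + local.y * (brickSize - 1.0f);
        float z = node.brickOrigin.z + 0.5f + local.z * (brickSize - 1.0f);
        return tex3D<float4>(brickPool, x, y, z);  // hardware trilinear filter
    }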

Here is a general outline for the system that I expect to be developing based on a merger of concepts from KinectFusion, OctoMap, and GigaVoxels. The steps of the pipeline are:
  1. Receive Image from RGB-D Camera
  2. Compute Vertices/Normals for Each Pixel
  3. Predict Real Camera Pose through ICP
  4. Update Virtual Camera Pose with Keyboard Input
  5. Transfer CPU/GPU Octree Memory based on Camera Poses
  6. Cast Rays from Predicted Camera to Camera Points to Update Octree Map
  7. Render Virtual Camera on Screen
  8. Render Predicted Camera Image to Texture for ICP in the Next Iteration
1.) Receive Image from RGB-D Camera

This first stage provides raw sensor input that will be used to build a virtual map. The RGB-D camera will provide both color and depth information. I have purchased a Structure Sensor that will be used as the primary device for this project. The Structure Sensor is a small active depth camera backed by Kickstarter that is intended for use with mobile devices. The camera provides 640x480 resolution at more than 30 fps. 
The Structure Sensor is a product of Occipital, Inc.
A very appealing feature is that Occipital, the company behind the device, continues to support the OpenNI project on GitHub for this device. OpenNI is the Open Natural Interface standard, an abstract interface to RGB-D devices whose official support ended a few years ago. It is nice to see Occipital pick up support of the project, and it is desirable to develop this project in a way that is abstracted from any particular device driver.

Interfacing with the device through OpenNI should only take a few days' work. I will start that task next week, as the device has just arrived.
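For reference, the OpenNI2 interface is small enough that grabbing a depth frame should look roughly like the sketch below. I have not run this against the Structure Sensor yet, so treat it as a sketch rather than working code.

    #include <OpenNI.h>
    #include <cstdio>

    // Open the first available device (the Structure Sensor should appear
    // through Occipital's OpenNI2 fork) and read a single depth frame.
    int main()
    {
        if (openni::OpenNI::initialize() != openni::STATUS_OK) {
            printf("OpenNI init failed: %s\n", openni::OpenNI::getExtendedError());
            return 1;
        }

        openni::Device device;
        device.open(openni::ANY_DEVICE);

        openni::VideoStream depth;
        depth.create(device, openni::SENSOR_DEPTH);
        depth.start();

        openni::VideoFrameRef frame;
        depth.readFrame(&frame);  // blocks until a frame arrives
        printf("depth frame: %dx%d\n", frame.getWidth(), frame.getHeight());

        depth.stop();
        depth.destroy();
        device.close();
        openni::OpenNI::shutdown();
        return 0;
    }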

2.) Compute Vertices/Normals for Each Pixel

This is the easiest step in the pipeline. The camera stream natively provides only depth and color information, but normals are necessary for both localization and rendering. An early step of the pipeline will use CUDA to parallelize over each pixel in the camera image, compute the vertex for each point using the camera calibration matrix, and then compute normals from cross products of adjacent vertices in each direction. An additional bilateral filtering step may be necessary here as well, in which case we will follow a method similar to the one outlined in KinectFusion.
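A rough sketch of that kernel is below, assuming a standard pinhole model with focal lengths fx, fy and principal point cx, cy (all of the names here are placeholders). Each thread back-projects its own pixel, then recomputes its right and lower neighbors from the depth map so it can form a normal from a cross product without any synchronization.

    #include <cuda_runtime.h>

    // Back-project pixel (u, v) through the inverse calibration matrix.
    __device__ float3 backproject(const float* depth, int u, int v, int width,
                                  float fx, float fy, float cx, float cy)
    {
        float z = depth[v * width + u];
        return make_float3((u - cx) * z / fx, (v - cy) * z / fy, z);
    }

    // One thread per pixel: compute the vertex, then a normal from the
    // cross product of vectors to the right and lower neighbors.
    __global__ void computeVertexNormal(const float* depth, float3* vertices,
                                        float3* normals, int width, int height,
                                        float fx, float fy, float cx, float cy)
    {
        int u = blockIdx.x * blockDim.x + threadIdx.x;
        int v = blockIdx.y * blockDim.y + threadIdx.y;
        if (u >= width - 1 || v >= height - 1) return;

        float3 p  = backproject(depth, u,     v,     width, fx, fy, cx, cy);
        float3 px = backproject(depth, u + 1, v,     width, fx, fy, cx, cy);
        float3 py = backproject(depth, u,     v + 1, width, fx, fy, cx, cy);

        float3 dx = make_float3(px.x - p.x, px.y - p.y, px.z - p.z);
        float3 dy = make_float3(py.x - p.x, py.y - p.y, py.z - p.z);
        float3 n  = make_float3(dx.y * dy.z - dx.z * dy.y,
                                dx.z * dy.x - dx.x * dy.z,
                                dx.x * dy.y - dx.y * dy.x);
        float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);

        vertices[v * width + u] = p;
        normals[v * width + u]  = (len > 0.0f)
            ? make_float3(n.x / len, n.y / len, n.z / len)
            : make_float3(0.0f, 0.0f, 0.0f);
    }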

3.) Predict Real Camera Pose from ICP

In order to use the new camera image to update a world map, we must first predict the position and orientation of the camera relative to the world. This can be done iteratively, where a pose estimate is made in each frame relative to the previous frame only. For slow camera motions and fast frame rates, these motions will be very small. The net pose of the camera is the composition of the frame-to-frame transforms accumulated since the start of the process.

KinectFusion makes these iterative predictions using Iterative Closest Point (ICP). This process selects pairs of corresponding points between data sets, then finds a rigid transform that, applied to one set, minimizes the sum of squared distances between all correspondence pairs. Part of what makes this a challenging problem is how to accurately choose corresponding pairs. KinectFusion does this by projecting both frames into camera space and treating points that occupy the same 2D pixel location as matches.


KinectFusion finds a camera pose T that minimizes this sum of squared distances over all correspondence pairs.

KinectFusion also attempts to minimize the distance between corresponding points only along the direction of the corresponding point's normal. This essentially performs a point-to-plane match, which tends to be less noisy: it is very unlikely that two scans will sample identical points on a surface, but it is likely that the points will belong to the same surface.
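In standard notation (this is my reconstruction of the general form from the KinectFusion paper, with simplified symbols), the point-to-plane error being minimized over the set of valid correspondences is

    E(T) = \sum_k \left( (T\,v_k - \hat{v}_k) \cdot \hat{n}_k \right)^2

where v_k is a vertex from the incoming frame and \hat{v}_k, \hat{n}_k are the corresponding vertex and normal from the previous view of the model.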

KinectFusion throws out point correspondences where either the angle between normals or the distance between vertices exceeds a threshold. The original KinectFusion did not incorporate color, but we will include a threshold on the difference between color values as well.
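A per-pair validity test might look like the sketch below; the specific threshold values are placeholders that will need to be tuned.

    // Reject pairs with large vertex distance, disagreeing normals, or (our
    // addition) very different colors.  All thresholds are placeholders.
    __device__ bool validCorrespondence(float3 v, float3 n, uchar3 c,
                                        float3 vPrev, float3 nPrev, uchar3 cPrev)
    {
        float dx = v.x - vPrev.x, dy = v.y - vPrev.y, dz = v.z - vPrev.z;
        float dist2    = dx * dx + dy * dy + dz * dz;
        float cosAngle = n.x * nPrev.x + n.y * nPrev.y + n.z * nPrev.z;
        int   dColor   = abs((int)c.x - (int)cPrev.x)
                       + abs((int)c.y - (int)cPrev.y)
                       + abs((int)c.z - (int)cPrev.z);
        return dist2 < 0.01f       // within 10 cm
            && cosAngle > 0.8f     // normals within ~37 degrees
            && dColor < 90;        // summed RGB difference
    }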

The ICP process in KinectFusion computes terms for each correspondence pair in parallel on the GPU, then compacts and sums them before solving the resulting system with a Cholesky decomposition on the CPU.
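Linearizing the energy for a small incremental motion reduces each ICP iteration to a 6x6 linear system, so the CPU side is small. A sketch of that final solve using Eigen (my choice here, not necessarily what KinectFusion used) is:

    #include <Eigen/Dense>

    // The GPU has already summed the 6x6 normal-equations matrix A and the
    // 6-vector b over all valid correspondence pairs; solve A x = b for the
    // incremental motion (3 rotation + 3 translation parameters).
    Eigen::Matrix<float, 6, 1> solveIncrement(
        const Eigen::Matrix<float, 6, 6>& A,
        const Eigen::Matrix<float, 6, 1>& b)
    {
        // LDLT is a robust Cholesky variant; LLT also works if A is
        // strictly positive definite.
        return A.ldlt().solve(b);
    }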

4.) Update Virtual Camera Pose with Keyboard Input

This is another very simple task in the pipeline. This week, I have developed sufficient keyboard and mouse camera controls for this project. These controls query GLFW each frame to determine the new camera pose, rather than relying on the callback-based approach we were previously using. The WASD keys translate the camera origin in a direction based on its orientation, and the mouse rotates the camera by clicking and dragging. While dragging, the cursor is hidden and cannot leave the window, which allows the user to rotate the camera continuously without screen-space limitations. Finally, the scroll wheel adjusts the projection zoom. This control scheme will be familiar to anyone who has played a modern PC first-person shooter.
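A trimmed-down sketch of the polling approach is below. The camera representation and the speed constants are placeholders, and scroll-wheel zoom is omitted since GLFW only exposes scrolling through a callback.

    #include <GLFW/glfw3.h>
    #include <cmath>

    // Called once per frame.  yaw/pitch are in radians; position is in world
    // space.  All constants are placeholders to be tuned.
    void updateCamera(GLFWwindow* window, float dt,
                      float& yaw, float& pitch, float position[3])
    {
        const float moveSpeed = 2.0f;     // meters per second
        const float lookSpeed = 0.002f;   // radians per pixel of mouse motion

        // WASD translation in the camera's local frame.
        float forward = 0.0f, strafe = 0.0f;
        if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) forward += 1.0f;
        if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS) forward -= 1.0f;
        if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS) strafe  += 1.0f;
        if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS) strafe  -= 1.0f;
        position[0] += dt * moveSpeed * (forward * sinf(yaw) + strafe * cosf(yaw));
        position[2] += dt * moveSpeed * (forward * cosf(yaw) - strafe * sinf(yaw));

        // Click-and-drag rotation: hiding the cursor while the button is held
        // keeps it from ever leaving the window.
        static double lastX = 0.0, lastY = 0.0;
        double x, y;
        glfwGetCursorPos(window, &x, &y);
        if (glfwGetMouseButton(window, GLFW_MOUSE_BUTTON_LEFT) == GLFW_PRESS) {
            glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
            yaw   += lookSpeed * (float)(x - lastX);
            pitch += lookSpeed * (float)(y - lastY);
        } else {
            glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_NORMAL);
        }
        lastX = x;
        lastY = y;
    }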

5.) Transfer CPU/GPU Octree Memory based on Camera Poses

This will be one of the more challenging aspects of the project. It will likely require a stack-based octree data structure on the CPU, with an associated API that can send and receive portions of the tree to and from the GPU, converting to and from the GPU's stackless structure. It will draw upon work from GigaVoxels, which uses a Least Recently Used (LRU) policy to determine when data can safely be removed from active GPU memory. This aspect of the project will require additional algorithmic development or research.
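As a first pass at the bookkeeping (this is just a sketch of the general LRU idea, not a design commitment), the CPU could track GPU-resident nodes in an ordered list and evict from the tail when GPU memory fills up:

    #include <cstdint>
    #include <list>
    #include <unordered_map>

    // Nodes move to the front whenever they are touched by an update or
    // render pass; evict() returns the least recently used node so it can be
    // copied back into the CPU-side tree and freed on the GPU.
    class NodeLru {
    public:
        void touch(uint32_t nodeId) {
            auto it = lookup_.find(nodeId);
            if (it != lookup_.end()) order_.erase(it->second);
            order_.push_front(nodeId);
            lookup_[nodeId] = order_.begin();
        }
        int64_t evict() {              // returns -1 if nothing is resident
            if (order_.empty()) return -1;
            uint32_t victim = order_.back();
            order_.pop_back();
            lookup_.erase(victim);
            return victim;
        }
    private:
        std::list<uint32_t> order_;
        std::unordered_map<uint32_t, std::list<uint32_t>::iterator> lookup_;
    };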

6.) Cast Rays from Predicted Camera to Camera Points to Update Octree Map

This step actually builds the octree map from the camera's color and depth data. The update method will be heavily based upon OctoMap, though it may include additional GPU optimization. A ray is cast from the origin of the real camera to each point in a point cloud generated from the depth image. The voxels along the ray are updated with "miss" observations, and the voxel containing the end point is updated with a "hit." Each observation decreases or increases the probability that a voxel is occupied by an amount determined by a model of the camera. This model will contain a probability-of-hit and a probability-of-miss value, which will be determined experimentally. We will use this probability as the alpha channel of the voxel.
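The per-voxel update might look like the sketch below; the hit/miss probabilities shown are placeholder values, and the log-odds form is borrowed from OctoMap to keep repeated updates numerically simple.

    // Update one voxel's occupancy (stored as alpha in [0,1]) after a hit or
    // miss observation.  Sensor-model probabilities are placeholders to be
    // determined experimentally.
    __device__ float logOdds(float p)     { return logf(p / (1.0f - p)); }
    __device__ float fromLogOdds(float l) { return 1.0f / (1.0f + expf(-l)); }

    __device__ float updateOccupancy(float alpha, bool hit)
    {
        const float lHit  = logOdds(0.7f);   // placeholder p(hit)
        const float lMiss = logOdds(0.4f);   // placeholder p(miss), < 0.5 so it decreases occupancy
        float l = logOdds(fminf(fmaxf(alpha, 0.01f), 0.99f));
        l += hit ? lHit : lMiss;
        return fromLogOdds(l);
    }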

7.) Render Virtual Camera on Screen

At this point, we need to render the virtual camera to the screen. Remember, the virtual camera does not need to be at the same position and orientation in the world as the real camera. This step is essentially the reverse of the previous one: initially, it will cast a ray through the octree and accumulate color weighted by the alpha channel until the accumulated alpha reaches 1.
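The accumulation along a single ray is the easy part; a sketch is below, with the octree/brick-pool lookup hidden behind a placeholder sampleOctree function and a constant sample stubbed in so the snippet is self-contained.

    struct Sample { float3 color; float alpha; };

    // Placeholder for the real octree/brick-pool lookup.
    __device__ Sample sampleOctree(float3 p)
    {
        Sample s;
        s.color = make_float3(0.5f, 0.5f, 0.5f);
        s.alpha = 0.05f;
        return s;
    }

    // Front-to-back accumulation: march along the ray, weighting each sample
    // by the remaining transparency, and stop once alpha saturates.
    __device__ float3 shadeRay(float3 origin, float3 direction,
                               float stepSize, int maxSteps)
    {
        float3 color = make_float3(0.0f, 0.0f, 0.0f);
        float  alpha = 0.0f;
        for (int i = 0; i < maxSteps && alpha < 1.0f; ++i) {
            float t  = stepSize * i;
            float3 p = make_float3(origin.x + t * direction.x,
                                   origin.y + t * direction.y,
                                   origin.z + t * direction.z);
            Sample s = sampleOctree(p);
            float  w = (1.0f - alpha) * s.alpha;  // contribution of this sample
            color.x += w * s.color.x;
            color.y += w * s.color.y;
            color.z += w * s.color.z;
            alpha   += w;
        }
        return color;
    }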

An improved version of this step will use cone tracing. This approach samples higher (coarser) levels of the octree as the ray steps farther from the camera, so each ray behaves like a cone. This will provide global illumination effects and is based upon the work of GigaVoxels.
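Choosing the octree level as the cone widens amounts to comparing the cone's footprint to the leaf voxel size; a sketch (with the cone ratio as a placeholder parameter) is:

    // Pick a coarser octree level as the cone footprint grows with distance.
    // coneRatio is the cone diameter per unit distance along the ray.
    __device__ int octreeLevel(float distance, float coneRatio,
                               float leafVoxelSize, int maxLevel)
    {
        float footprint = fmaxf(distance * coneRatio, leafVoxelSize);
        int level = (int)floorf(log2f(footprint / leafVoxelSize));
        return min(level, maxLevel);
    }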

8.) Render Predicted Camera Image to Texture for ICP in the Next Iteration

The last step is identical to the previous one in terms of functionality. However, instead of rendering from the virtual camera that a user controls on the computer, this step renders an image from the predicted view of the actual camera. This image represents the model that the camera expects to see, and the localization method will use ICP to match this image to the camera image in the next frame. This step is necessary to avoid the localization drift that occurs when matching incoming frames only to the previous frame; instead, each new frame is matched against a globally consistent model.
