tag:blogger.com,1999:blog-21249462320030480952024-03-05T16:55:26.834-08:00Voxel Map Construction and Renderingdkotfishttp://www.blogger.com/profile/17725195113005950680noreply@blogger.comBlogger9125tag:blogger.com,1999:blog-2124946232003048095.post-69357867052174792172015-05-02T19:10:00.003-07:002015-05-02T19:10:43.451-07:00Summary<br />
<div class="separator" style="clear: both; text-align: center;">
<br /><iframe width="320" height="266" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/ktRXBwJ4yKw/0.jpg" src="https://www.youtube.com/embed/ktRXBwJ4yKw?feature=player_embedded" frameborder="0" allowfullscreen></iframe></div>
dkotfishttp://www.blogger.com/profile/17725195113005950680noreply@blogger.com0tag:blogger.com,1999:blog-2124946232003048095.post-12051753085559833112015-04-19T10:40:00.002-07:002015-04-19T10:40:15.624-07:00SVO Cone Tracing<br />
I have implemented voxel cone tracing based on Cyril Crassin's GigaVoxels work for physically based rendering of a reconstructed scene. To my knowledge, this is the first use of cone tracing to render real camera data.<br />
<br />
Here is a raw camera image from a Kinect:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZS5BuusWssjJ7oqVL1GituyxsyTQkBZdC7t7eQVFCdT31SbPxSIrpudgdhYd2WcHU8U5fmJzkG5gBSof-JbeRsUo1lHqY4M7u91KW_ZkHGPAkGRPeW6-oeSF-C1lqecnUarmbv_XQuzTG/s1600/original.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZS5BuusWssjJ7oqVL1GituyxsyTQkBZdC7t7eQVFCdT31SbPxSIrpudgdhYd2WcHU8U5fmJzkG5gBSof-JbeRsUo1lHqY4M7u91KW_ZkHGPAkGRPeW6-oeSF-C1lqecnUarmbv_XQuzTG/s1600/original.png" height="301" width="400" /></a></div>
<br />
This is the reconstructed SVO rendered from the same camera view, but extracting voxels at 1 cm resolution and drawing them with instanced rendering through OpenGL. This is the typical approach in rendering reconstructed 3D maps.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGWBIBEpPAQSupk_qTwWQ6AYkkyRGR31oN7YBZwjiPsQ2R37UND018U3N5lxgZJQOTq2-pa8cqsy4NyC-5-RxWEuR5boq5DgKY3TXJWWvrUM8Oga3J-R5mQAV-4lvrgbHbfKiyOakAF9H0/s1600/voxels05.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGWBIBEpPAQSupk_qTwWQ6AYkkyRGR31oN7YBZwjiPsQ2R37UND018U3N5lxgZJQOTq2-pa8cqsy4NyC-5-RxWEuR5boq5DgKY3TXJWWvrUM8Oga3J-R5mQAV-4lvrgbHbfKiyOakAF9H0/s1600/voxels05.png" height="297" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Now this is the same SVO, rendered instead with cone tracing. The detailed texture of the hardwood floor is now captured in the rendered image. However, the ray-marching process produces artifacts where rays step through the corner of the wall on the right side.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-dcF89Y0OYDB9Je30vkN47ku4mGpGh7a-UGhoZAiMsUA6vI5dm_1IrbUi7UO89bdL44N6FV3P8igIT-_jtaH2rV_GOGfyPgCrisPBREJVzWMJ_oycp-wCYp2qBVsNI2XDGqwCf0OqJEf_/s1600/cone_trace05.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-dcF89Y0OYDB9Je30vkN47ku4mGpGh7a-UGhoZAiMsUA6vI5dm_1IrbUi7UO89bdL44N6FV3P8igIT-_jtaH2rV_GOGfyPgCrisPBREJVzWMJ_oycp-wCYp2qBVsNI2XDGqwCf0OqJEf_/s1600/cone_trace05.png" height="291" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<br />
And here is another cone tracing image rendered from an alternate camera view:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihcwE-SGaeJ10lkzOaRx6zxJ-x3vgIBfHQP-z3qVt8u6_MFBd6eWTAkzXjfbwROx1tTMgAZGXhk2FkzSeE0RB4IS8FBhbfdOGkV1TO_Ec9kKC7ScgiCObepkt4nEShTwKhIr5gpKh49kHk/s1600/cone_trace05_different_view.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihcwE-SGaeJ10lkzOaRx6zxJ-x3vgIBfHQP-z3qVt8u6_MFBd6eWTAkzXjfbwROx1tTMgAZGXhk2FkzSeE0RB4IS8FBhbfdOGkV1TO_Ec9kKC7ScgiCObepkt4nEShTwKhIr5gpKh49kHk/s1600/cone_trace05_different_view.png" height="296" width="400" /></a></div>
<br />
Future work will use cone tracing to render artificial objects in these real 3D scenes for augmented reality with consistent lighting.<br />
<br />dkotfishttp://www.blogger.com/profile/17725195113005950680noreply@blogger.com0tag:blogger.com,1999:blog-2124946232003048095.post-45091335292205082652015-03-29T12:12:00.000-07:002015-03-29T12:12:46.554-07:00Performance Enhancement of Octree Mapping<br />
<u><strong>Key Resolution</strong></u><br />
Although the sparse octree does not allocate every node of the tree in memory, I still need a way to uniquely identify voxels; for this I have been using Morton Codes. Here is an example code:<br />
1 001 010 111<br />
The code starts with a leading 1 to identify the length of the code, and thus the depth in the tree. After that, the code is made up of a series of 3-bit tuples that indicate a high or low value on the binary split of the x, y, and z dimensions respectively. <br />
<br />
Using a 32-bit integer, this scheme can represent 10 levels of depth in the tree. However, this is insufficient for mapping with a Kinect camera. The Kinect has a depth range of roughly 1-10 meters, and a back-of-the-envelope calculation for the pixel resolution (480 pixels across a 43 degree field of view, at a 5 m range) shows that the camera will typically provide sub-centimeter resolution. A volume with a 10 meter edge can only achieve 1 cm resolution with 10 levels of depth. Therefore, I have transitioned to representing these keys with 64-bit long integers, which could represent kilometers of volume at millimeter precision, if needed.<br />
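As a hedged sketch of this key format (the function name and integer-coordinate convention are my own), a 64-bit key can be built from voxel coordinates at a given depth:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: pack integer voxel coordinates (x, y, z) at a given
// tree depth into a 64-bit Morton key with a leading 1 marking the depth.
uint64_t encodeMortonKey(uint32_t x, uint32_t y, uint32_t z, int depth) {
  uint64_t key = 1;  // the leading 1 encodes the code length, and thus the depth
  for (int level = depth - 1; level >= 0; --level) {
    uint64_t xi = (x >> level) & 1;
    uint64_t yi = (y >> level) & 1;
    uint64_t zi = (z >> level) & 1;
    key = (key << 3) | (xi << 2) | (yi << 1) | zi;  // one 3-bit tuple per level
  }
  return key;
}
```

With x = 1, y = 3, z = 5 and depth 3, this reproduces the example code 1 001 010 111; a 64-bit key leaves room for 21 tuples after the leading bit.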
<br />
Here is a look at the enhanced resolution that can now be achieved. The first is an older image using ~2.5 cm resolution, and the second is enhanced to 4 mm resolution. This level of detail adequately captures the data provided by the Kinect camera.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3dA4Qx4S-ggnFineOqAjjMl7gAJLw-2UNXFwlxAitvcB3YIsLDEiOf3Zhn-DpZIVPnQQygFEIF01CQDvVGwMfrrwRBu9Z12XT63cN6SAilbNdoeKRFknaT91MmKBCUcksGSMPMvDZqxxy/s1600/Office.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3dA4Qx4S-ggnFineOqAjjMl7gAJLw-2UNXFwlxAitvcB3YIsLDEiOf3Zhn-DpZIVPnQQygFEIF01CQDvVGwMfrrwRBu9Z12XT63cN6SAilbNdoeKRFknaT91MmKBCUcksGSMPMvDZqxxy/s1600/Office.png" height="286" width="400" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgqpjbzJzkBfLlwIfwagsXqVe-nnkU_HWAVncSDE4F8yK2CvFOK5Nj3yudjjncbiJjN5DOf8RGthv-qkeLPfZ5rqTBVgWP2QOnw6Hle8y9q13XbBV-pLAZBjlNlmYwlleHIM-b25OP-3s3/s1600/4mmRes.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgqpjbzJzkBfLlwIfwagsXqVe-nnkU_HWAVncSDE4F8yK2CvFOK5Nj3yudjjncbiJjN5DOf8RGthv-qkeLPfZ5rqTBVgWP2QOnw6Hle8y9q13XbBV-pLAZBjlNlmYwlleHIM-b25OP-3s3/s1600/4mmRes.png" height="280" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<strong><u>Key Sorting</u></strong><br />
<br />
Another modification I made to these keys concerns the ordering of the 3-bit tuples. Originally, I found it very useful to represent the octree keys with the most significant tuple farthest to the right. The advantage was that the octree is almost always traversed starting from the root node: the root's child could be obtained by a bitwise AND of the key with 0x7, and for the next depth the key could be shifted right by 3 bits and the process repeated. The maximum depth of the key is reached when the remaining code is 1.<br />
<br />
However, several parts of my algorithms involve getting unique keys from a list, truncated at different levels of depth. Starting with the least significant tuple makes truncation change the ordering of the keys. This means that every call to "unique" must be preceded by a call to "sort." Together, these make up the slowest part of the octree update process. The most straightforward way to improve this was to remove the need to sort repeatedly by switching the order of the tuples. While this is less convenient for octree traversal, I've found it to result in a worthwhile performance gain. Now, getting the most significant tuple from the key involves finding the position of the leading 1, then extracting the following 3 bits. Updating the key for the next depth requires subtracting the leading 1, along with these 3 bits, then adding the leading 1 at 3 bits to the right of its previous location. Although this is messier, the complexity can easily be encapsulated by a function.<br />
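The messier MSB-first traversal step can indeed be encapsulated in a small function; here is a minimal host-side sketch (function name my own):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of one traversal step on an MSB-first Morton key: find the leading 1,
// extract the 3 bits after it, then move the leading 1 right by 3 bits.
// Assumes the key still contains at least one tuple (key > 1).
int nextChild(uint64_t& key) {
  int lead = 63;
  while (((key >> lead) & 1) == 0) --lead;     // position of the leading 1
  int child = (key >> (lead - 3)) & 0x7;       // 3 bits after the leading 1
  key -= (uint64_t)child << (lead - 3);        // clear those 3 bits
  key -= (uint64_t)1 << lead;                  // remove the old leading 1...
  key += (uint64_t)1 << (lead - 3);            // ...and re-add it 3 bits right
  return child;
}
```

Repeated calls peel off the root-level tuple first; when the key has been reduced to 1, max depth has been reached.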
<br />
The process of traversing the octree to update inner nodes following changes to leaves uses this unique-key paradigm. It starts with a set of keys identifying the updated leaves. Each pass reduces the depth of the keys by a level and updates the nodes at this higher level from the updated child values. At higher levels of the tree, however, the originally unique keys begin to collapse onto the same nodes, so we benefit from periodically reducing the list to unique keys, and prefer not to re-sort each time. For an identical scene and resolution, I've found that this process could be reduced from 14-17 ms to 10-12 ms. <br />
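A small host-side illustration of why this ordering helps (names hypothetical): with the most significant tuple first, truncating a key to its parent level is a plain right shift, which preserves sort order, so "unique" can run without a fresh "sort":

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// With the most significant tuple first, the parent key is just key >> 3:
// the deepest tuple sits in the low bits, and the leading 1 shifts with
// everything else, so a sorted list of keys stays sorted after truncation.
uint64_t parentKey(uint64_t key) { return key >> 3; }
```

For example, truncating the sorted depth-2 keys {0b1000001, 0b1000111, 0b1001010, 0b1001110} yields {0b1000, 0b1000, 0b1001, 0b1001}, still sorted, and a single unique pass leaves the two parent keys.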
dkotfishttp://www.blogger.com/profile/17725195113005950680noreply@blogger.com0tag:blogger.com,1999:blog-2124946232003048095.post-41499785526284974462015-03-08T17:18:00.002-07:002015-03-08T17:18:44.613-07:00Octree Map Construction<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
I've now implemented a set of CUDA kernels that can update an octree from an input point cloud. The steps involve:</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<u>Update Octree with Point Cloud</u></div>
<div class="separator" style="clear: both; text-align: left;">
1.) Transform the points to map coordinates (based on a camera pose estimate)</div>
<div class="separator" style="clear: both; text-align: left;">
2.) Compute the axis-aligned bounding box of the points, using thrust reduction.</div>
<div class="separator" style="clear: both; text-align: left;">
3.) Resize the octree if necessary to ensure that it contains all points.</div>
<div class="separator" style="clear: both; text-align: left;">
4.) Push the necessary sub-octree to the GPU to ensure that it can be updated and extracted in parallel.</div>
<div class="separator" style="clear: both; text-align: left;">
5.) Compute the octree key for each point.</div>
<div class="separator" style="clear: both; text-align: left;">
6.) Determine which nodes will need to be subdivided, and how many new nodes will be created.</div>
<div class="separator" style="clear: both; text-align: left;">
7.) Create a new node pool that has enough memory to include the new nodes, and copy the old nodes into it.</div>
<div class="separator" style="clear: both; text-align: left;">
8.) Now that there is memory available, split the nodes determined in step 6.</div>
<div class="separator" style="clear: both; text-align: left;">
9.) Update the nodes with keys from step 5 and the color values from the input cloud.</div>
<div class="separator" style="clear: both; text-align: left;">
10.) Continually shift the keys upwards to determine which nodes have modified children, and re-evaluate those nodes by averaging their children.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
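The child-averaging pass in step 10 can be sketched on the host (key format and data types are simplifications of my actual node pool; here keys carry a leading 1, so the root's key is 1 and the children of parent p are (p &lt;&lt; 3) | c):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

// Host-side sketch of step 10 (simplified types): shift the updated leaf keys
// up one level at a time, recomputing each touched parent as the average of
// whichever of its children currently exist in the node pool.
void propagateUp(std::map<uint64_t, float>& nodes,
                 std::vector<uint64_t> updated) {
  while (!updated.empty()) {
    std::vector<uint64_t> parents;
    for (uint64_t key : updated) {
      if (key == 1) continue;                 // the root has no parent
      uint64_t parent = key >> 3;
      float sum = 0.0f;
      int count = 0;
      for (uint64_t c = 0; c < 8; ++c) {      // average allocated children
        auto it = nodes.find((parent << 3) | c);
        if (it != nodes.end()) { sum += it->second; ++count; }
      }
      if (count > 0) nodes[parent] = sum / count;
      parents.push_back(parent);
    }
    // Reduce to unique keys so overlapping parents are not re-evaluated.
    std::sort(parents.begin(), parents.end());
    parents.erase(std::unique(parents.begin(), parents.end()), parents.end());
    updated = std::move(parents);
  }
}
```

In the CUDA implementation each pass is a kernel over the current key list, but the shape of the loop is the same.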
<div class="separator" style="clear: both; text-align: left;">
<u>Extract Voxels from Octree</u></div>
<div class="separator" style="clear: both; text-align: left;">
1.) Compute keys for tree leaves. This involves a parallel pass for each tree depth and a thrust removal step.</div>
<div class="separator" style="clear: both; text-align: left;">
2.) Compute the positions and size for each key.</div>
<div class="separator" style="clear: both; text-align: left;">
3.) Extract the color of each key from the octree.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
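Step 2 of the extraction can be sketched as follows (helper name and key convention are assumptions: leading-1 Morton keys with the most significant tuple first, walked down from the root's bounding cube):

```cpp
#include <cassert>
#include <cstdint>

struct Voxel { float cx, cy, cz, edge; };

// Hypothetical host version of extraction step 2: walk a leading-1 Morton key
// from the root, halving the cube at each level, to recover the voxel's
// center and edge length from the octree's bounding cube.
Voxel voxelFromKey(uint64_t key, const float root_center[3], float root_edge) {
  Voxel v = {root_center[0], root_center[1], root_center[2], root_edge};
  int lead = 63;
  while (((key >> lead) & 1) == 0) --lead;  // locate the leading 1
  for (int shift = lead - 3; shift >= 0; shift -= 3) {
    int tuple = (key >> shift) & 0x7;       // (x, y, z) split bits
    v.edge *= 0.5f;
    v.cx += (tuple & 4) ? 0.5f * v.edge : -0.5f * v.edge;
    v.cy += (tuple & 2) ? 0.5f * v.edge : -0.5f * v.edge;
    v.cz += (tuple & 1) ? 0.5f * v.edge : -0.5f * v.edge;
  }
  return v;
}
```

The resulting centers and edge lengths are exactly what the Texture Buffer Object needs for instanced cube rendering.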
<div class="separator" style="clear: both; text-align: left;">
At this point, I am only adding points to the map by counting the points as "hit" observations. The complete solution will involve raycasting from the camera origin to each point, and marking the voxels along the ray as "misses."</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Here are a few screenshots of the map rendered with OpenGL, using instanced rendering of cubes with a Texture Buffer Object specifying the cube locations and colors. The first image is using voxels with an edge length of 2.5 cm. The second is the same viewpoint, but instead the octree is only updated at 10 cm resolution.</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKjFQRfk-7aUUUGHUyqg-ID1nuLASIaxV6zJDPFRXtk2ebV6okWfQSbergV4Mr_dUjbmBVueMWffh64oy8mue5HPNexAbboy0RFdMONlMN7bfS12ZxfLORb08MDK34uTrPnicnm-WJJBet/s1600/MapConstruction2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKjFQRfk-7aUUUGHUyqg-ID1nuLASIaxV6zJDPFRXtk2ebV6okWfQSbergV4Mr_dUjbmBVueMWffh64oy8mue5HPNexAbboy0RFdMONlMN7bfS12ZxfLORb08MDK34uTrPnicnm-WJJBet/s1600/MapConstruction2.png" height="295" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZjx8xe3jMxOxUhfxFwMsdipt3G6wn7eNNP4WyKJScU4VR1_2OehSkLr8uoo-Cih4aC2qtI-ewY9BMd5dH50aA_zOyGqo1zN3heOMVitLm6DxNPmENoYLNNZ-jH8yupDeE8Ua_j2UTh09y/s1600/MapConstruction3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZjx8xe3jMxOxUhfxFwMsdipt3G6wn7eNNP4WyKJScU4VR1_2OehSkLr8uoo-Cih4aC2qtI-ewY9BMd5dH50aA_zOyGqo1zN3heOMVitLm6DxNPmENoYLNNZ-jH8yupDeE8Ua_j2UTh09y/s1600/MapConstruction3.png" height="297" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
This is another shot of the same room, though with a different camera angle with both the Kinect and the virtual camera. This shows more detail of objects sitting on a table.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3iUW0PXPY7O89lPMunXukK30PpQ-B0vHQwk4nYDYr1rnVyelLCQebqUbn9ffpze-7aBNLlpHTd19IUG5QFBDNfrK4PT47h4VmCR5ruylJX8feMBf-P44YP9JWJj_4vh7UFnn0iei8lZG9/s1600/MapConstruction.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3iUW0PXPY7O89lPMunXukK30PpQ-B0vHQwk4nYDYr1rnVyelLCQebqUbn9ffpze-7aBNLlpHTd19IUG5QFBDNfrK4PT47h4VmCR5ruylJX8feMBf-P44YP9JWJj_4vh7UFnn0iei8lZG9/s1600/MapConstruction.png" height="285" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Performance for this process is relatively fast compared to earlier work with camera localization. At this point, I have only a naive implementation of all kernels without any speed optimization. With an NVidia GTX 770 and a Kinect data stream of 640x480 pixels at 30 fps, these are the times for each step:</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
1.) Update Octree with Point Cloud - 20 ms</div>
<div class="separator" style="clear: both; text-align: left;">
2.) Extract Voxels from Octree - 6 ms</div>
<div class="separator" style="clear: both; text-align: left;">
3.) Draw Voxels with OpenGL - 2 ms</div>
<br />dkotfishttp://www.blogger.com/profile/17725195113005950680noreply@blogger.com2tag:blogger.com,1999:blog-2124946232003048095.post-13742694259685054452015-02-14T11:19:00.000-08:002015-02-15T09:37:06.230-08:00Out-of-Core Octree Management<br />
The sparse octree used to represent a reconstructed 3D map will quickly grow too large to fit entirely in GPU memory. Reconstructing a normal office room at 1 cm resolution will likely take as much as 6-8 GB, but the Nvidia GTX 770 that I am using has only 2 GB.<br />
<br />
To handle this, I have developed an out-of-core memory management framework for the octree. At first glance, this framework is a standard stack-based octree on the CPU. However, each node in the tree has an additional boolean flag indicating whether the node is the root of a subtree located in linear GPU memory, along with a pointer to its location on the GPU and its size. The linear, stackless octree data is represented with 64 bits per node, using the same format as GigaVoxels. Here is a summary of the OctreeNode class's data elements:<br />
<br />
<pre>class OctreeNode {
  // Flag whether the node is at max resolution,
  // in which case it will never be subdivided
  bool is_max_depth_;
  // Flag whether the node's children have been initialized
  bool has_children_;
  // Flag whether the node's data is on the GPU
  bool on_gpu_;
  // Pointer to GPU data, if the node is GPU backed
  int* gpu_data_;
  // The number of children on the GPU
  int gpu_size_;
  // Child nodes, if it is CPU backed
  OctreeNode* children_[8];
  // Data in the node, if it is CPU backed
  int data_;
};</pre>
Next, I gave these nodes an API that can push/pull the data to and from the GPU. The push method uses recursion to convert the stack-based data into a linear array in CPU memory, then copies that memory to the GPU. It avoids the need to over-allocate or reallocate the linear memory by first recursing through the node's children to determine the size of the subtree. The pull method copies the linear memory back to the CPU, then recursively rebuilds the stack-based structure from it.<br />
<br />
It's worth noting that we prefer the data to reside on the GPU, as all of the update and rendering passes involve parallel operations on data that is also in GPU memory. We only want to pull subtrees back to the CPU when we run low on available GPU memory. To support this, I added a GPU occupancy count for the octree as a whole; when it exceeds a fraction of available memory, subtrees in GPU memory need to be pulled back.<br />
<br />
I am working on a Least Recently Used (LRU) approach where all methods operating on the tree must input an associated bounding box of the area that they will affect. First, this allows us to make sure that the entire affected volume is currently on the GPU before attempting to perform the operation. The octree will also keep a history of the N most recently used bounding boxes. When space needs to be freed, it will take the union of these stored bounding boxes and pull data that lies outside of this region back to the CPU.<br />
<br />
This initial approach may need to be improved in the future. For one thing, our use case involves two independent camera poses, one for updating the map and one for rendering it. The bounding boxes associated with these two cameras can be spatially separated, but the method will create a single bounding box that also encompasses the space between them. A more advanced method would first cluster the bounding boxes, and then perform a union operation on each cluster. Another issue is that this method creates a tight box around the cameras; if they are moving, they may quickly leave the bounding box and require memory to be pulled back. One way to handle this would be to predict the future motion of the cameras.<br />
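A stripped-down sketch of the push path (types simplified from the OctreeNode above; the real code packs the 64-bit GigaVoxels node format and finishes with a copy to the GPU):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified node for illustration; the real OctreeNode carries more flags.
struct Node {
  bool has_children = false;
  Node* children[8] = {nullptr};
  int data = 0;
};

// First recursion: count the nodes in a subtree so the linear buffer can be
// allocated exactly once, with no over-allocation or reallocation.
int subtreeSize(const Node* n) {
  int size = 1;
  if (n->has_children)
    for (int i = 0; i < 8; ++i) size += subtreeSize(n->children[i]);
  return size;
}

// Second recursion: flatten the pointer-based tree into the linear buffer
// depth-first; in the real code each entry is the packed 64-bit node format.
void flatten(const Node* n, std::vector<int>& out) {
  out.push_back(n->data);
  if (n->has_children)
    for (int i = 0; i < 8; ++i) flatten(n->children[i], out);
}
```

The pull path runs the same recursion in reverse, consuming the linear buffer to rebuild the pointer-based structure.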
dkotfishttp://www.blogger.com/profile/17725195113005950680noreply@blogger.com0tag:blogger.com,1999:blog-2124946232003048095.post-74536490296072241842015-02-07T11:15:00.000-08:002015-02-07T11:15:15.380-08:00ICP Localization<br />
I have implemented the ICP algorithm that was described in <a href="http://research.microsoft.com/pubs/155378/ismar2011.pdf">http://research.microsoft.com/pubs/155378/ismar2011.pdf</a>. At this point, I am only matching two consecutive raw frames to compute a camera motion between them. In the coming weeks, I will be matching new raw frames to a frame generated from a reconstructed 3D map. This should drastically reduce camera pose drift.<br />
<br />
My initial naive implementation was split into separate CUDA kernels as described in the paper. This implementation has an initial kernel that determines whether the points in the two frames are similar enough to be included in the cost function. This creates a mask that is used to remove points with thrust compaction. Next, a kernel computes a 6x6 matrix A and 6x1 vector b for each corresponding pair of points. These are both summed in parallel with thrust. The final transform update is computed on the CPU by solving A*x = b. For all kernels, I used a block size of 256. Here is pseudo-code for this implementation:<br />
<br />
<pre>pyramid_depth = 3
depth_iterations = {4, 5, 10}
update_trans = Identity4x4
<b>for</b> i := 1 <b>to</b> pyramid_depth <b>do</b>
  this_frame = this_pyramid[i]
  last_frame = last_pyramid[i]
  <b>for</b> j := 1 <b>to</b> depth_iterations[i] <b>do</b>
    mask = computeCorrespondenceMask(this_frame, last_frame)
    remove_if(this_frame, mask)
    remove_if(last_frame, mask)
    [A b] = computeICPCost(this_frame, last_frame)
    A_total = reduce(A)
    b_total = reduce(b)
    iter_trans = solveCholesky(A_total, b_total)
    applyTransform(iter_trans, this_frame)
    update_trans = iter_trans * update_trans
  <b>end</b>
<b>end</b>
camera_pose = camera_pose * update_trans</pre>
<br />
Using an NVidia GTX 770, I found that this process was unbearably slow, taking {95, 76, 112} ms for the 3 pyramid depths respectively.<br />
<br />
I was able to shave off some of the time by optimizing the block sizes. Most of the kernels performed optimally with ~16-32 threads per block. This sped up to {59, 58, 106} ms.<br />
<br />
Another 10 ms total was saved by combining the 6x6 A and 6x1 b into a single 7x6 matrix Ab, which could be reduced with a single thrust call rather than 2 separate passes.<br />
<br />
I found that the majority of the time was being spent in the thrust reduction pass that sums all of the ICP terms. On average, each iteration of computing ICP cost terms and reducing them took ~15.5 ms. Loading the initial ICP cost kernel with part of the reduction job substantially sped up this process: rather than parallelizing the ICP cost kernel over N points, it is parallelized over N/load_size threads. Each thread is now responsible for iterating and summing over load_size points, so the thrust reduction acts on data of length N/load_size. Since each thread acts on multiple points, it is no longer beneficial to precompute a mask and remove invalid correspondences; this is now part of the ICP cost kernel. With load_size = 10, I found that each iteration could be reduced from ~15.5 ms to ~3.5 ms. This reduces the total time to {25, 19, 30} ms.<br />
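The load_size trick can be illustrated serially on the host (function name hypothetical): each "thread" first sums load_size terms, so the final reduction only sees N/load_size partial sums, and the total is unchanged:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Host-side illustration of the loaded reduction: one partial sum per
// "thread", each covering load_size consecutive cost terms. In the kernel,
// invalid correspondences are skipped inside this loop rather than compacted
// out beforehand.
std::vector<float> partialSums(const std::vector<float>& terms, int load_size) {
  std::vector<float> partials((terms.size() + load_size - 1) / load_size, 0.0f);
  for (size_t t = 0; t < partials.size(); ++t)          // one "thread" each
    for (size_t i = t * load_size;
         i < std::min(terms.size(), (t + 1) * (size_t)load_size); ++i)
      partials[t] += terms[i];                           // serial load per thread
  return partials;
}
```

The thrust reduce then only has to collapse the short partial-sum array, which is where the ~15.5 ms to ~3.5 ms improvement came from.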
<br />
Pseudo-code for the optimized algorithm is here:<br />
<br />
<pre>pyramid_depth = 3
depth_iterations = {4, 5, 10}
update_trans = Identity4x4
load_size = 10
<b>for</b> i := 1 <b>to</b> pyramid_depth <b>do</b>
  this_frame = this_pyramid[i]
  last_frame = last_pyramid[i]
  <b>for</b> j := 1 <b>to</b> depth_iterations[i] <b>do</b>
    Ab = correspondAndComputeLoadedICPCost(this_frame, last_frame, load_size)
    Ab_total = reduce(Ab)
    iter_trans = solveCholesky(Ab_total[0:5,:], Ab_total[6,:])
    applyTransform(iter_trans, this_frame)
    update_trans = iter_trans * update_trans
  <b>end</b>
<b>end</b>
camera_pose = camera_pose * update_trans</pre>
<br />
The overall framerate for the system has improved from 2 to 15 FPS.<br />
<br />
<br />dkotfishttp://www.blogger.com/profile/17725195113005950680noreply@blogger.com0tag:blogger.com,1999:blog-2124946232003048095.post-88682479636907620742015-01-18T10:14:00.004-08:002015-01-18T10:25:21.940-08:00System Diagram and Octree Map Update Deep-dive<br />
<h2>
<b>System Diagram</b></h2>
<div>
I have drawn a system overview diagram to summarize the system that I described in my last post.</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiAmxxDRn1CIFIQIYamIdw7eISnTb3SdVxTE1h2t1201IqVLx2-NU2euy69KWKCqFt3TOy12mUodzoEHATOSsEPWmzuVOu4AZ6SXW6eOJzYqDKR2QfW4sb5htc0dJ1QZKrGkZdjmVmt_b6/s1600/Octree-SLAM+System.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiAmxxDRn1CIFIQIYamIdw7eISnTb3SdVxTE1h2t1201IqVLx2-NU2euy69KWKCqFt3TOy12mUodzoEHATOSsEPWmzuVOu4AZ6SXW6eOJzYqDKR2QfW4sb5htc0dJ1QZKrGkZdjmVmt_b6/s1600/Octree-SLAM+System.png" height="292" width="400" /></a></div>
<div>
<b><br /></b></div>
<div>
The black arrows designate the flow of execution, while the blue arrows clarify the passage of data from the two render passes. The dotted lines indicate modules that are responsible for directly interfacing with hardware devices.</div>
<h2>
<b>Octree Map Updates</b></h2>
<div>
Here I will describe in more detail one of the complex sub-components of the system: the process of updating an octree map from an aligned RGB-D frame.</div>
<div>
<br /></div>
<div>
The process of updating the global map will be made up of several CUDA kernels parallelizing execution. Here is a description of each CUDA kernel in the process.</div>
<div>
<br /></div>
<div>
<u>1.) Determine if any vertex is outside of the current map</u></div>
<div>
<br /></div>
<div>
Prior to this step of the pipeline, we have computed the 3D position of each pixel in the camera frame with respect to the coordinates of the octree map. This kernel parallelizes over the vertices and determines whether each one is outside of the current map bounds. If it is, the kernel computes how many additional layers of depth are required for the vertex to fall within the map, and atomically writes that value to a global maximum. </div>
<div>
<br /></div>
<div>
A CPU stage will then grow the global map with a new root node and additional layers to ensure that the map is big enough to contain the observed frame. It may be worth keeping this entire step CPU-bound, as it will be computationally light. However, the vertices will be on the GPU at this point, so the data would have to be copied back and forth between devices for CPU computation here.</div>
<div>
<br /></div>
<div>
<u>2.) Compute voxel keys for each vertex to create list of occupied cells.</u></div>
<div>
<br /></div>
<div>
This kernel will parallelize over each vertex and compute the Morton Code at the maximum depth of the octree.</div>
<div>
<br /></div>
<div>
A Morton Code is a condensed form representing a key within an octree. Each depth of the tree is represented using an additional 3 bits, since each level splits the x, y, and z coordinates in two. </div>
<div>
<br /></div>
<div>
Example: For an octree centered at the origin with bounding box at (-1,-1,-1) and (1,1,1), the vertex (0.7, -0.2, 0.1) has the depth 1 Morton Code 101, and the depth 2 Morton Code 101110.</div>
<div>
<br /></div>
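The example above can be reproduced with a small host-side sketch of the per-vertex kernel body (function name hypothetical):

```cpp
#include <cassert>
#include <cstdint>

// Host sketch of the per-vertex kernel: descend the octree's bounding box,
// emitting one 3-bit (x, y, z) tuple per level of depth to build the
// Morton Code for point p.
uint64_t mortonCode(const float p[3], const float bbox_min[3],
                    const float bbox_max[3], int depth) {
  float lo[3] = {bbox_min[0], bbox_min[1], bbox_min[2]};
  float hi[3] = {bbox_max[0], bbox_max[1], bbox_max[2]};
  uint64_t code = 0;
  for (int d = 0; d < depth; ++d) {
    uint64_t tuple = 0;
    for (int axis = 0; axis < 3; ++axis) {
      float mid = 0.5f * (lo[axis] + hi[axis]);
      if (p[axis] >= mid) { tuple |= 4u >> axis; lo[axis] = mid; }
      else                { hi[axis] = mid; }
    }
    code = (code << 3) | tuple;
  }
  return code;  // (0.7, -0.2, 0.1) in (-1,-1,-1)-(1,1,1): 0b101, then 0b101110
}
```

In the actual kernel this loop runs once per vertex, at the maximum depth of the octree.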
<div>
This step determines which cells in the octree have been observed as occupied in the current camera frame. The number of occupied cells is bounded by the number of pixels, and thus we can pre-allocate the memory needed to store this data and avoid any need for a global memory counter (as will be needed in the next kernel). </div>
<div>
<br /></div>
<div>
<u>3.) Compute voxel keys along each line segment from the camera position to a vertex, and add them to a global list of unoccupied voxels.</u></div>
<div>
<br /></div>
<div>
This kernel is similar to the previous, though instead of computing a single Morton Code for one point, it computes all Morton Codes along the line segment from the camera origin point to the vertex. This step will not have a constant output size as it is impossible to know how many voxels a line segment will intersect. Thus, we will need to conservatively allocate memory for the output, and use a global memory index counter that is updated atomically.</div>
<div>
<br /></div>
<div>
The intersected voxels will be calculated using the 3D extension of Bresenham's line algorithm, and the Morton Code will be computed for each voxel center.</div>
<div>
<br /></div>
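A host-side sketch of this traversal (a common 3D Bresenham variant; in the kernel each visited cell would be Morton-encoded and appended to the global list via the atomic counter):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <vector>

struct Cell { int x, y, z; };

// 3D Bresenham sketch: step from the camera's voxel a to the vertex's voxel b,
// visiting every cell along the dominant axis. The endpoint b is excluded,
// since that voxel is the occupied observation itself.
std::vector<Cell> bresenham3d(Cell a, Cell b) {
  std::vector<Cell> cells;
  int dx = std::abs(b.x - a.x), dy = std::abs(b.y - a.y), dz = std::abs(b.z - a.z);
  int sx = b.x > a.x ? 1 : -1, sy = b.y > a.y ? 1 : -1, sz = b.z > a.z ? 1 : -1;
  int dm = std::max(std::max(dx, dy), dz);     // dominant axis length
  int ex = dm / 2, ey = dm / 2, ez = dm / 2;   // per-axis error accumulators
  Cell c = a;
  for (int i = 0; i < dm; ++i) {
    cells.push_back(c);
    ex -= dx; if (ex < 0) { ex += dm; c.x += sx; }
    ey -= dy; if (ey < 0) { ey += dm; c.y += sy; }
    ez -= dz; if (ez < 0) { ez += dm; c.z += sz; }
  }
  return cells;
}
```

The output length varies per ray, which is exactly why the kernel needs a conservatively allocated buffer and an atomically updated index.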
<div>
<u>4.) Remove duplicate occupied and unoccupied voxels from the lists, and store multiplicities.</u></div>
<div>
<br /></div>
<div>
An upcoming step is going to involve parallelizing over voxels to be updated. However, many of the threads from the previous stages will involve updating the same voxels due to the close proximity of the rays originating from the same camera position. </div>
<div>
<br /></div>
<div>
When performing this compaction, occupied cells should also average the color values of all contributing pixels.</div>
<div>
<br /></div>
<div>
To make this step more efficient by avoiding multiple threads atomically updating the same cells, we should first consolidate these updates into a list of only unique voxels. We then also need an additional list of integer counts representing the number of times each voxel was observed as occupied or unoccupied in this frame.</div>
<div>
<br /></div>
<div>
<u>5.) Remove all unoccupied voxels that are in the occupied list.</u></div>
<div>
<br /></div>
<div>
The OctoMap authors found it necessary to avoid updating a voxel as both observed and unobserved in the same frame. They adopted the rule that if any ray observes a cell to be occupied, that cell cannot also be observed as unoccupied in the same frame. To enforce this, we next parallelize over the unoccupied cell list and remove entries that appear in the occupied list.</div>
<div>
<br /></div>
<div>
It is unclear what the most efficient approach is. It may help to first sort the occupied list so that it can be queried with a binary search. Alternatively, it might be faster to sort both lists and perform this step serially on the CPU, copying data back and forth. Both methods are worth evaluating.</div>
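The first strategy (sort once, then binary-search per entry) can be sketched as follows; on the GPU the loop body would be one thread per unoccupied key. Function names are illustrative.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Keep only the unoccupied keys that were NOT also observed as occupied
// this frame. The occupied list is sorted once so each unoccupied entry
// costs O(log n) to test.
std::vector<uint64_t> filterUnoccupied(std::vector<uint64_t> occupied,
                                       const std::vector<uint64_t>& unoccupied) {
    std::sort(occupied.begin(), occupied.end());
    std::vector<uint64_t> kept;
    for (uint64_t key : unoccupied)
        if (!std::binary_search(occupied.begin(), occupied.end(), key))
            kept.push_back(key);  // safe to apply a "miss" update
    return kept;
}
```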
<div>
<br /></div>
<div>
<u>6.) For all occupied and unoccupied voxels, update the map at the maximum depth.</u></div>
<div>
<div>
<br /></div>
</div>
<div>
This step parallelizes over all voxels in both lists and increments or decrements the alpha channel by the multiplicity associated with the voxel in this update. Unobserved voxels use subtraction, and observed voxels use addition. </div>
<div>
<br /></div>
<div>
This alpha channel can then be converted to the probability that the voxel is occupied by multiplying it by the per-observation hit/miss weight and applying the logistic function (the alpha channel is a condensed form of the log-odds of occupancy).</div>
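Concretely, the conversion is a scaled logistic function. This is a sketch under an assumed sensor model: the weight per observation is a parameter, not a measured value.

```cpp
#include <cmath>

// Convert the voxel's integer alpha (hits minus misses, with multiplicity)
// into an occupancy probability. alpha * logOddsPerHit is treated as a
// log-odds value L = log(p / (1 - p)), inverted by the logistic function.
double occupancyProbability(int alpha, double logOddsPerHit) {
    double logOdds = alpha * logOddsPerHit;
    return 1.0 / (1.0 + std::exp(-logOdds));
}
```

An untouched voxel (alpha of 0) maps to probability 0.5, i.e. unknown, which is the right prior for unobserved space.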
<div>
<br /></div>
<div>
Voxels observed as occupied will also update their color values using the color from the original camera pixels.</div>
<div>
<br /></div>
<div>
<u>7.) Mip-map updates into inner layers of the octree map.</u></div>
<div>
<br /></div>
<div>
The last step updates the rest of the tree with one kernel execution pass per octree level, deepest first. In this step, each parent node takes on the mean RGBA value of its child nodes.</div>
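The per-node operation of one mip-map pass is just an eight-way average; an illustrative sketch (the `RGBA` struct is a stand-in for the actual node payload):

```cpp
#include <array>

struct RGBA { float r, g, b, a; };

// One mip-map step: a parent voxel takes the mean RGBA of its 8 children.
// The full pass runs this for every parent at one level before moving up.
RGBA averageChildren(const std::array<RGBA, 8>& c) {
    RGBA m{0, 0, 0, 0};
    for (const RGBA& v : c) {
        m.r += v.r; m.g += v.g; m.b += v.b; m.a += v.a;
    }
    m.r /= 8; m.g /= 8; m.b /= 8; m.a /= 8;
    return m;
}
```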
<div>
<br /></div>
<div>
It would be ideal to optimize this step by skipping parts of the tree that were not updated in this frame. An additional pruning step that removes redundant leaves, particularly in regions where the alpha channel is very small (very likely empty), may be beneficial here as well.</div>
<div>
<br /></div>
<div>
<br /></div>
System Overview (2015-01-09)<br />
<b>Week 1 Update:</b><br />
<br />
This week, I have been adding improved mouse/keyboard camera control to the code that I have been developing. I have also added the brick pool data structure described in GigaVoxels, which is used for trilinear interpolation in texture memory when rendering an octree through cone tracing.<br />
<br />
Here is a general outline for the system that I expect to be developing based on a merger of concepts from KinectFusion, OctoMap, and GigaVoxels. The steps of the pipeline are:<br />
<ol>
<li>Receive Image from RGB-D Camera</li>
<li>Compute Vertices/Normals for Each Pixel</li>
<li>Predict Real Camera Pose through ICP</li>
<li>Update Virtual Camera Pose with Keyboard Input</li>
<li>Transfer CPU/GPU Octree Memory based on Camera Poses</li>
<li>Cast Rays from Predicted Camera to Camera Points to Update Octree Map</li>
<li>Render Virtual Camera on Screen</li>
<li>Render Predicted Camera Image to Texture for ICP in the Next Iteration</li>
</ol>
<div>
<b>1.) Receive Image from RGB-D Camera</b></div>
<div>
<b><br /></b></div>
<div>
This first stage provides raw sensor input that will be used to build a virtual map. The RGB-D camera will provide both color and depth information. I have purchased a <a href="http://structure.io/">Structure Sensor</a> that will be used as the primary device for this project. The Structure Sensor is a small active depth camera backed by Kickstarter that is intended for use with mobile devices. The camera provides 640x480 resolution at more than 30 fps. </div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6jAGcSB3HI1WwCardzZVv8QOt4Tb4iiNY3H4Vru6cux94aYNpqPnmqQnH4bfULXva2R4wJMX48pnnbPgBzwyRtBVmm4wdpjsXMVpAeJeUD2MSVLNjSmgchdDkjPF3_-doevVpi48kKyig/s1600/structure-sensor.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6jAGcSB3HI1WwCardzZVv8QOt4Tb4iiNY3H4Vru6cux94aYNpqPnmqQnH4bfULXva2R4wJMX48pnnbPgBzwyRtBVmm4wdpjsXMVpAeJeUD2MSVLNjSmgchdDkjPF3_-doevVpi48kKyig/s1600/structure-sensor.png" height="244" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The Structure Sensor is a product of Occipital, Inc.</td></tr>
</tbody></table>
<div>
A very appealing feature is that Occipital, the company behind the device, is continuing to support the OpenNI project on Github for it. OpenNI is the Open Natural Interaction standard, developed as an abstract interface to these RGB-D devices, though official support for it ended a few years ago. It is nice to see Occipital pick up the project, and it is desirable to develop this system in a way that is abstracted from any particular device driver.</div>
<div>
<br /></div>
<div>
Interfacing to the device through OpenNI should take only a few days' work. I will start that task next week, as the device has just arrived.</div>
<div>
<br /></div>
<div>
<b>2.) Compute Vertices/Normals for Each Pixel</b></div>
<div>
<b><br /></b></div>
<div>
This is the easiest step in the pipeline. The camera stream natively provides only depth and color information, but normals are necessary both for localization and rendering. An early step of the pipeline will use CUDA to parallelize over each pixel in the camera image, computing the vertex for each point using the camera calibration matrix and then computing normals from cross products of adjacent vertices in each direction. An additional bilateral filtering step may be necessary here as well, in which case we will follow a method similar to the one outlined in KinectFusion.</div>
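The per-pixel math can be sketched as below. This assumes a standard pinhole model with intrinsics fx, fy, cx, cy (any concrete values are placeholders, not the Structure Sensor's calibration), and forms the normal from the vectors to the right and lower neighbors.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

// Back-project pixel (u, v) with metric depth into a camera-space vertex.
Vec3 backProject(int u, int v, float depth,
                 float fx, float fy, float cx, float cy) {
    return { (u - cx) * depth / fx, (v - cy) * depth / fy, depth };
}

// Normal from the cross product of vectors to adjacent vertices.
Vec3 normalFromNeighbors(const Vec3& p, const Vec3& right, const Vec3& down) {
    Vec3 a = { right[0]-p[0], right[1]-p[1], right[2]-p[2] };
    Vec3 b = { down[0]-p[0],  down[1]-p[1],  down[2]-p[2] };
    Vec3 n = { a[1]*b[2] - a[2]*b[1],
               a[2]*b[0] - a[0]*b[2],
               a[0]*b[1] - a[1]*b[0] };
    float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0) { n[0] /= len; n[1] /= len; n[2] /= len; }
    return n;
}
```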
<div>
<br /></div>
<div>
<b>3.) Predict Real Camera Pose from ICP</b></div>
<div>
<br /></div>
<div>
In order to use a new camera image to update the world map, we must first predict the position and orientation of the camera relative to the world. This can be done iteratively, with a pose estimate made in each frame relative only to the previous frame. For slow camera motions and high frame rates, these inter-frame motions are very small. The net pose of the camera is the composition of the transforms between each frame since the start of the process.</div>
<div>
<br /></div>
<div>
KinectFusion makes these iterative predictions using Iterative Closest Point (ICP). This process selects pairs of corresponding points between data sets, then finds a rigid transform that, applied to one set, minimizes the sum of squared distances over all correspondence pairs. Part of what makes this challenging is accurately choosing the corresponding pairs. KinectFusion does this by projecting both frames into the camera space; points occupying the same 2D pixel location are assumed to match. </div>
<div>
<br /></div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwvlzCM3ilTIuY-Gof3arbGlbJjp-vNDDnvscg96u7H19VaYAPOmm8DgfwwhWcetwg8q-6pHRi6_6-l9GpPVfNTpufv_F2KxE_DdMHopMZucUIVjzaDygLyiWGF-h9NnB6Lh0_dlfyTBO2/s1600/Screen+Shot+2015-01-03+at+9.30.09+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwvlzCM3ilTIuY-Gof3arbGlbJjp-vNDDnvscg96u7H19VaYAPOmm8DgfwwhWcetwg8q-6pHRi6_6-l9GpPVfNTpufv_F2KxE_DdMHopMZucUIVjzaDygLyiWGF-h9NnB6Lh0_dlfyTBO2/s1600/Screen+Shot+2015-01-03+at+9.30.09+PM.png" height="50" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">KinectFusion finds a camera pose T that minimizes this quantity.</td></tr>
</tbody></table>
<br />
<div>
KinectFusion also minimizes the distance between corresponding points only along the direction of the destination point's normal. This is essentially a point-to-plane match, which tends to be less noisy: it is very unlikely that two scans sample identical points on a surface, but very likely that the points belong to the same surface.</div>
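The point-to-plane error for a single correspondence is just the point difference projected onto the destination normal; the solver sums squares of these over all pairs. A sketch, assuming both points are already expressed in a common frame (the candidate transform has been applied to the source point before this call):

```cpp
#include <array>

using V3 = std::array<double, 3>;

// Point-to-plane residual: distance from the source point to the plane
// through the destination point with unit normal n, i.e. dot(src - dst, n).
double pointToPlaneResidual(const V3& src, const V3& dst, const V3& n) {
    return (src[0]-dst[0])*n[0] + (src[1]-dst[1])*n[1] + (src[2]-dst[2])*n[2];
}
```

Note that a source point laterally offset along the destination's surface contributes zero residual, which is exactly why this metric tolerates the two scans sampling different points on the same surface.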
<div>
<br /></div>
<div>
KinectFusion rejects point correspondences where either the difference between normals or the distance between vertices exceeds a threshold. The original KinectFusion did not incorporate color, but we will also include a threshold on the difference between color values.</div>
<div>
<br /></div>
<div>
The ICP process in KinectFusion computes the terms for each correspondence pair in parallel on the GPU, then compacts and sums them before minimizing with a Cholesky decomposition on the CPU. </div>
<div>
<br /></div>
<div>
<b>4.) Update Virtual Camera Pose with Keyboard Input</b></div>
<div>
<b><br /></b></div>
<div>
This is another very simple task in the pipeline. This week, I developed sufficient keyboard and mouse camera control for this project. These controls query GLFW in each frame to determine the new camera pose, rather than the callback-based approach we were previously using. The WASD keys translate the camera origin in a direction based on its orientation. The mouse rotates the camera by clicking and dragging; while dragging, the cursor disappears and cannot leave the screen, so the user can rotate continuously without screen-space limitations. Finally, the scroll wheel adjusts the projection zoom. This control scheme will be familiar to anyone who has played a modern PC first-person shooter.</div>
<div>
<br />
<b>5.) Transfer CPU/GPU Octree Memory based on Camera Poses</b><br />
<br />
This will be one of the more challenging aspects of the project. It will likely require a stack-based octree data structure on the CPU, with an associated API that can send and receive portions of the tree to the GPU, converting to its stack-less structure. It will draw upon work from GigaVoxels, which uses a Least Recently Used (LRU) policy to determine when data can safely be removed from active memory. This aspect of the project will require additional algorithmic development or research.<br />
<br />
<b>6.) Cast Rays from Predicted Camera to Camera Points to Update Octree Map</b><br />
<b><br /></b>
This step actually builds the octree map from camera color and depth. The update method will be heavily based upon OctoMap, though may include additional GPU optimization. This will cast a ray from the origin of the real camera to each point in a point cloud generated from the depth image. The points along the ray update the map with "miss" observations, and the end point will be updated with a "hit." This will decrease or increase the probability that a voxel is occupied by a value determined by a model of the camera. This model will contain a probability of hit and a probability of miss value, which will be determined experimentally. We will use this probability as the alpha channel of the voxel.<br />
<br />
<b>7.) Render Virtual Camera on Screen</b><br />
<b><br /></b>
At this point, we need to render the virtual camera to the screen. Remember, the virtual camera does not need to be at the same position and orientation in the world as the real camera. This step is essentially the reverse of the previous one: it casts a ray through the octree and accumulates color weighted by the alpha channel until the total alpha reaches 1.<br />
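The accumulation loop is standard front-to-back compositing; a sketch over precomputed per-step samples (in the real renderer these would be read from the brick pool as the ray marches):

```cpp
#include <array>
#include <vector>

// Front-to-back compositing: accumulate color weighted by alpha until the
// ray is fully opaque. Each sample is (r, g, b, a) at one ray step.
std::array<float, 3> composite(const std::vector<std::array<float, 4>>& samples) {
    std::array<float, 3> color{0, 0, 0};
    float alpha = 0.0f;
    for (const auto& s : samples) {
        float w = (1.0f - alpha) * s[3];  // remaining transparency * sample alpha
        color[0] += w * s[0];
        color[1] += w * s[1];
        color[2] += w * s[2];
        alpha += w;
        if (alpha >= 1.0f) break;         // fully opaque: stop marching early
    }
    return color;
}
```

The early exit is what makes this cheap in practice: rays terminate as soon as they hit solid geometry.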
<br />
An improved version of this step will use cone tracing. This approach will use higher levels of the octree as the ray steps further from the camera, behaving like a cone. This will provide global illumination effects and is based upon the work of GigaVoxels.<br />
<br />
<b>8.) Render Predicted Camera Image to Texture for ICP in the Next Iteration</b><br />
<b><br /></b>
The last step is functionally identical to the previous one. However, instead of rendering from the virtual camera the user is controlling, it renders an image from the predicted view of the actual camera. This image represents the model that the camera expects to see, and the localization method will use ICP to match it against the camera image in the next frame. This step is necessary to avoid the localization drift that occurs when matching incoming frames only to the previous frame; instead, each new frame is matched against a globally consistent model.<br />
<br /></div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: center;">
<b><span style="font-size: large;">Introduction to Simultaneous Localization and Mapping (SLAM) with RGB-D</span></b></div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: left;">
This independent study will be focused on building a mapping framework capable of generating detailed 3D maps from a high resolution range camera, such as Kinect. The objective is to merge desirable properties of many recent techniques to achieve concurrent map construction and virtual camera rendering of large scale and high resolution scenes. The resulting system would be useful for augmented reality applications, enabling indirect illumination of virtual objects in a real map while continuously updating that map with a low-cost camera. </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<u>KinectFusion - </u>Developed by Microsoft Research in 2011, this is a method that can reconstruct 3D polygonal meshes from a range camera.</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
The key idea behind KinectFusion is that with a high-framerate sensor (e.g. 20+ fps), the majority of information from one frame to the next is duplicated. The approach finds the transform that best aligns consecutive frames, thereby predicting the motion of the camera. Once the pose of the camera is known, the sensor data from the new frame can be used to update the map. This achieves simultaneous localization and mapping (SLAM) without any additional inertial sensing.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxtlFqW5g3_bEo44icbu5mnHzPbEjiCMpX6cXYhhGRQ7KgAMh4JTzUSoNTWAZ8-vqOFsiTBW967X-QT1Y0WVRu1EG8jQzvZuAxKscGSPHbHNeUCzP6zSBdvX0qPhlPVdYF9ZGgzuRyc4r1/s1600/Screen+Shot+2014-12-30+at+12.54.45+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxtlFqW5g3_bEo44icbu5mnHzPbEjiCMpX6cXYhhGRQ7KgAMh4JTzUSoNTWAZ8-vqOFsiTBW967X-QT1Y0WVRu1EG8jQzvZuAxKscGSPHbHNeUCzP6zSBdvX0qPhlPVdYF9ZGgzuRyc4r1/s1600/Screen+Shot+2014-12-30+at+12.54.45+PM.png" height="182" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="color: #1a1a1a; font-family: wf_segoe-ui_light, wf_SegoeUILight, wf_SegoeUI, 'Segoe UI Light', 'Segoe WP Light', 'Segoe UI', Segoe, 'Segoe WP', Tahoma, Verdana, Helvetica, Arial, sans-serif; line-height: 50px; text-align: start;"><span style="font-size: xx-small;">Image Credit: KinectFusion: "Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera", Sharam Izadi et al.</span></span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: left;">
RGB-D cameras provide enough information to compute 3D positions and surface normals, as well as color. This makes the process of iterative closest point (ICP), which attempts to align one frame to the next, far more reliable despite the additional computational cost. The hard problem is computing this fast enough to keep up with the 20+ fps framerate. If that rate cannot be maintained and frames are skipped, the space of possible transforms that must be searched to align the frames grows. This increases the computational burden, slowing the computation further and creating a vicious cycle that causes tracking to fail entirely. GPU computing that exploits the parallelism of the computation was critical to achieving the speeds required to avoid this downward spiral.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDesy9eaJRA0Yx3pwQOQVoOpRgJtJ4UeO3S7l0HnrDiMDBHyNBWX9ceP5_QdUqjLL6hqKCbgheWD0q7KTrBhcjzjo3dOANBCP3A_bYFU9jKPP0aasofpFcI16c8XcOQ4d8x7J-hpXIUDRU/s1600/Screen+Shot+2014-12-30+at+12.54.11+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto; text-align: center;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDesy9eaJRA0Yx3pwQOQVoOpRgJtJ4UeO3S7l0HnrDiMDBHyNBWX9ceP5_QdUqjLL6hqKCbgheWD0q7KTrBhcjzjo3dOANBCP3A_bYFU9jKPP0aasofpFcI16c8XcOQ4d8x7J-hpXIUDRU/s1600/Screen+Shot+2014-12-30+at+12.54.11+PM.png" height="217" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><div class="page" title="Page 1">
<div class="layoutArea">
<div class="column">
<span style="font-family: NimbusRomNo9L;"><span style="font-size: xx-small;">Image Credit: "Real-time large scale dense RGB-D SLAM with volumetric fusion", Thomas Whelan et al.</span></span><span style="font-family: 'NimbusRomNo9L'; font-size: 17.000000pt;"> </span></div>
</div>
</div>
</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
KinectFusion represents the map as a 3D voxel grid holding a signed distance function, which stores the distance from the nearest surface. The values are truncated to avoid unnecessary computation in free space. Building this grid is far more manageable than storing a raw point cloud for each frame: the redundancy both enables sensor noise to be smoothed and avoids storing significant amounts of duplicate data. A standard RGB-D camera can generate several GB of raw data within a minute, while the voxel grid can represent the same data in only a few MB.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEingwvyffDZrJhV3N2ROwQKPg4ZKKXdxiLUK7aszjpi_hkbJYhH5OHPkC1wWVyaAEU6hLwk-0FJ2MBdn4ynYApFqGUt9Ah4KBxHSR3o7THCzFf800gV8boHnL5lq3QHx0gie-N3rFp1JiH6/s1600/Screen+Shot+2014-12-30+at+12.58.11+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEingwvyffDZrJhV3N2ROwQKPg4ZKKXdxiLUK7aszjpi_hkbJYhH5OHPkC1wWVyaAEU6hLwk-0FJ2MBdn4ynYApFqGUt9Ah4KBxHSR3o7THCzFf800gV8boHnL5lq3QHx0gie-N3rFp1JiH6/s1600/Screen+Shot+2014-12-30+at+12.58.11+PM.png" height="119" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td class="tr-caption" style="font-size: 13px;"><div class="page" title="Page 1">
<div class="layoutArea">
<div class="column">
<span style="font-family: NimbusRomNo9L;"><span style="font-size: xx-small;">Image Credit: "Real-time large scale dense RGB-D SLAM with volumetric fusion", Thomas Whelan et al.</span></span></div>
</div>
</div>
</td></tr>
</tbody></table>
</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: left;">
Mapping larger scale scenes with KinectFusion becomes messy due to the fixed size of the voxel grid. Large-scale implementations such as Kintinuous and KinFu slide the center of the grid when the camera moves past a threshold in any direction. However, this requires extracting a mesh from the region of space no longer covered by the grid, and then an additional scene graph must be maintained relating the grid to these meshes. And what happens if the camera moves back into that region? Recent extensions of the KinectFusion approach attempt to resolve all of these undesirable consequences of using a single-resolution voxel grid.</div>
<div>
</div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBO_CLl2kaBJJmOHiu7UyQzZVR6-FMjZPpIs3W45tpz5E7kSfxw_GILM58EvJvYw1bAcDVqy2V4L-tPmK8Z7B-iVCdsTYiMm7Joszs5wYoXbFIC-arVdOF20Lfb00iscJFipym0fPfYOoG/s1600/Screen+Shot+2014-12-30+at+12.53.39+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBO_CLl2kaBJJmOHiu7UyQzZVR6-FMjZPpIs3W45tpz5E7kSfxw_GILM58EvJvYw1bAcDVqy2V4L-tPmK8Z7B-iVCdsTYiMm7Joszs5wYoXbFIC-arVdOF20Lfb00iscJFipym0fPfYOoG/s1600/Screen+Shot+2014-12-30+at+12.53.39+PM.png" height="234" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td class="tr-caption" style="font-size: 13px;"><div class="page" title="Page 1">
<div class="layoutArea">
<div class="column">
<span style="font-family: NimbusRomNo9L;"><span style="font-size: xx-small;">Image Credit: "Real-time large scale dense RGB-D SLAM with volumetric fusion", Thomas Whelan et al.</span></span></div>
</div>
</div>
</td></tr>
</tbody></table>
</td></tr>
</tbody></table>
Additional recent work with Kintinuous has demonstrated loop closure when revisiting previously viewed scenes, as well as colored voxel grids, which require considerably more data and thus force a smaller grid. This can be an issue for longer-range, high-resolution cameras.<br />
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Advantages</div>
<div style="text-align: left;">
</div>
<ul>
<li>Update Speed (Parallel)</li>
<li>High-Resolution</li>
</ul>
<br />
<div style="text-align: left;">
Disadvantages</div>
<div style="text-align: left;">
</div>
<ul>
<li>Scalability (Fixed Grid Size)</li>
<li>Rendering (Extract Mesh)</li>
<li>Static Scenes Only</li>
</ul>
<br />
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<u>OctoMap - </u> This is a commonly used framework in the robotics community for building environmental maps from a variety of noisy sensors (LIDAR, Stereo).</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibfNCnblU9AjqF0LnQQcMhQgk4Uve48rNeCHH0iWKYYEDSnZSpwfKfyk0efWKyVhnrl7XuzoryuPHAg-O08Le1xEsQYRD4jFyzVgz9FS4HNL18BAD3P3v-1Cj61c2_dtOXrm2-TUyduyU8/s1600/Screen+Shot+2014-12-30+at+1.07.42+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibfNCnblU9AjqF0LnQQcMhQgk4Uve48rNeCHH0iWKYYEDSnZSpwfKfyk0efWKyVhnrl7XuzoryuPHAg-O08Le1xEsQYRD4jFyzVgz9FS4HNL18BAD3P3v-1Cj61c2_dtOXrm2-TUyduyU8/s1600/Screen+Shot+2014-12-30+at+1.07.42+PM.png" height="125" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;">Image Credit: "OctoMap: An Efficient Probabilistic 3D Mapping Framework Based on Octrees", Armin Hornung et al.</span></td></tr>
</tbody></table>
<br />
OctoMap uses an octree data structure to natively handle multiple levels of resolution. This provides a pointer-based structure that avoids allocating memory for empty space, unlike the voxel grid of KinectFusion, which allocates that memory and leaves it empty. The data structure also offers a convenient way to attach additional data such as color and timestamps without extra bookkeeping.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyoZOqaBaMlJzoCJm2nrUwrq52e5xx6qibZg4X26yehnC_WAYpytHavsot6js7yzcDBGK4gwj0_K96Rk1PSL-Iz7dpsaieEDTBfn9mwSpbooYt4MqIo43x7bEkQ9LnPte_G9tH5Sg2lbN3/s1600/Screen+Shot+2014-12-30+at+1.08.29+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyoZOqaBaMlJzoCJm2nrUwrq52e5xx6qibZg4X26yehnC_WAYpytHavsot6js7yzcDBGK4gwj0_K96Rk1PSL-Iz7dpsaieEDTBfn9mwSpbooYt4MqIo43x7bEkQ9LnPte_G9tH5Sg2lbN3/s1600/Screen+Shot+2014-12-30+at+1.08.29+PM.png" height="226" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;">Image Credit: "OctoMap: An Efficient Probabilistic 3D Mapping Framework Based on Octrees", Armin Hornung et al.</span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: left;">
This framework does not explicitly include camera localization, and requires an externally provided camera prediction and update before each new camera frame is integrated into the map. Many approaches do this with a completely separate estimate using inertial sensing and dead reckoning. Others register new scans against the map, providing filtered estimates with methods such as a particle filter.</div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYW757VK9BqBIbQW6XhuTUW0kQNkzSl_F6-uecntBqAMcoh3sWRmmtIxoTld7R4mv2_lArAw22CIoLuaUXnjUfwXEk3oKUUNl4NOUoMlz_k7KNLb2TZ8je0TewIR5zPX2KF1RKFscTf1se/s1600/Screen+Shot+2014-12-30+at+1.11.38+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYW757VK9BqBIbQW6XhuTUW0kQNkzSl_F6-uecntBqAMcoh3sWRmmtIxoTld7R4mv2_lArAw22CIoLuaUXnjUfwXEk3oKUUNl4NOUoMlz_k7KNLb2TZ8je0TewIR5zPX2KF1RKFscTf1se/s1600/Screen+Shot+2014-12-30+at+1.11.38+PM.png" height="115" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;">Image Credit: "OctoMap: An Efficient Probabilistic 3D Mapping Framework Based on Octrees", Armin Hornung et al.</span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
OctoMap is a probabilistic framework in which the log-odds of occupancy are stored in the octree. Each sensor is assigned a probability of hit and of miss that represent its noise. Nodes in the tree are updated by logging each point from a point cloud as a hit, and all voxels along the ray from the camera position to the point as misses. This process takes place serially on a CPU, looping over each point in each frame. The framework is most commonly used with LIDAR sensors, whose scans contain relatively few points and thus benefit little from parallelization. An RGB-D sensor provides hundreds of thousands of points per frame, which should be processed in parallel.</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNjhrNf4MBI74Po7WEfqg0uqdLlSUcHzM2ZM7c0PDZkVfSg-GOeisw2oHnBZRc1m5JzAWO6OQdSgtv4wRFBkP7F-J2uNZtWMNSRhWBC9vsNgz-hYZw0SA_o4MlWpe2bBWOtVhevT3u1zfv/s1600/Screen+Shot+2014-12-30+at+1.12.36+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNjhrNf4MBI74Po7WEfqg0uqdLlSUcHzM2ZM7c0PDZkVfSg-GOeisw2oHnBZRc1m5JzAWO6OQdSgtv4wRFBkP7F-J2uNZtWMNSRhWBC9vsNgz-hYZw0SA_o4MlWpe2bBWOtVhevT3u1zfv/s1600/Screen+Shot+2014-12-30+at+1.12.36+PM.png" height="160" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;">Image Credit: "OctoMap: An Efficient Probabilistic 3D Mapping Framework Based on Octrees", Armin Hornung et al.</span></td></tr>
</tbody></table>
<div style="text-align: left;">
The scalable nature of OctoMap enables very large maps to be generated, as moving outside of the map only requires creation of a new root node.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Advantages</div>
<div style="text-align: left;">
</div>
<ul>
<li>Scalability</li>
<li>Dynamic Scenes</li>
</ul>
<br />
<div style="text-align: left;">
Disadvantages</div>
<div style="text-align: left;">
</div>
<ul>
<li>Rendering (Extract Cube Centers/Sizes)</li>
<li>Slow Updates (Serial)</li>
<li>External Localization</li>
</ul>
<br />
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<u>GigaVoxels -</u> Cyril Crassin's Ph.D. thesis provides a framework for out-of-core sparse octree data management on the GPU, and a voxel cone tracing method for physically based direct rendering of octree data.</div>
<div style="text-align: left;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjI89QwrgZmyZKJa5-H0wh4_RTQV9R6wAYLUjtMf7KCUF9tKuVLkB_NQf119waaC2VoxUjqlOgpLnS8Z6FpobTj1H-cO9YXdO_JdZXA0HUMQjz8kL3ZgMt0JKln-FJre6WmDJ1FXjLs9tYT/s1600/Screen+Shot+2014-12-30+at+1.26.10+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjI89QwrgZmyZKJa5-H0wh4_RTQV9R6wAYLUjtMf7KCUF9tKuVLkB_NQf119waaC2VoxUjqlOgpLnS8Z6FpobTj1H-cO9YXdO_JdZXA0HUMQjz8kL3ZgMt0JKln-FJre6WmDJ1FXjLs9tYT/s1600/Screen+Shot+2014-12-30+at+1.26.10+PM.png" height="306" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;">Image Credit: "GigaVoxels: A Voxel-Based Rendering Pipeline For Efficient Exploration Of Large and Detailed Scenes", Cyril Crassin.</span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
GigaVoxels is not a SLAM framework. It is relevant to this project because both of the mapping methods above produce voxel-based data structures that would otherwise need to be translated into a triangulated structure for rendering. GigaVoxels instead provides a cone-tracing framework that renders the voxel data directly, including indirect illumination effects, without the Monte Carlo integration typically required for polygonal geometry.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjir6nC6t32gIHFvB-16oiZgoXe1pnme3UJiFd62Ca8HgeDL3vvQWlfjhaVmSKoECu8-8YYUeU4XSA76EWEawqc1LK_V6DPoCIfgM9LgXClwVuDH53cZY0ex9pWVXcwaL5lPjvw3UuuqfpU/s1600/Screen+Shot+2014-12-30+at+1.27.05+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjir6nC6t32gIHFvB-16oiZgoXe1pnme3UJiFd62Ca8HgeDL3vvQWlfjhaVmSKoECu8-8YYUeU4XSA76EWEawqc1LK_V6DPoCIfgM9LgXClwVuDH53cZY0ex9pWVXcwaL5lPjvw3UuuqfpU/s1600/Screen+Shot+2014-12-30+at+1.27.05+PM.png" height="124" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;">Image Credit: "GigaVoxels: A Voxel-Based Rendering Pipeline For Efficient Exploration Of Large and Detailed Scenes", Cyril Crassin.</span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
Voxel cone tracing samples the octree with texture interpolation at the depth whose voxel size matches the cone's cross section. If all of the needed lighting information is stored in the octree, mip-mapping the values into the inner tree branches means that texture interpolation performs the integration step implicitly.<br />
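<div style="text-align: left;">
The sketch below illustrates that integration scheme: the sample level is chosen so the voxel footprint matches the cone's diameter, and the pre-filtered samples are composited front-to-back. The sample callback stands in for the hardware-interpolated octree lookup of the real system, and the step-size and aperture choices here are illustrative assumptions, not Crassin's exact parameters.</div>

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// One pre-filtered octree sample: color premultiplied by opacity. A real
// tracer fetches this with hardware trilinear interpolation between mip
// levels; here it is a placeholder callback.
struct Sample { float r, g, b, a; };
using SampleFn = Sample (*)(float x, float y, float z, float level);

// Mip level whose voxel size matches the cone's current diameter:
// level 0 samples leaf voxels, higher levels sample coarser branches.
float mipLevel(float coneDiameter, float leafVoxelSize) {
    return std::max(0.0f, std::log2(coneDiameter / leafVoxelSize));
}

// March a cone front-to-back, letting the mip-mapped octree perform the
// integration over the cone's cross section at each step.
Sample traceCone(SampleFn sample,
                 float ox, float oy, float oz,     // cone apex
                 float dx, float dy, float dz,     // unit direction
                 float apertureTan,                // tan(cone half-angle)
                 float leafVoxelSize, float maxDist) {
    Sample acc = {0, 0, 0, 0};
    float t = leafVoxelSize;                       // skip self-intersection
    while (t < maxDist && acc.a < 0.99f) {         // stop when nearly opaque
        float diameter = std::max(leafVoxelSize, 2.0f * apertureTan * t);
        Sample s = sample(ox + t * dx, oy + t * dy, oz + t * dz,
                          mipLevel(diameter, leafVoxelSize));
        float w = 1.0f - acc.a;                    // front-to-back compositing
        acc.r += w * s.r;  acc.g += w * s.g;  acc.b += w * s.b;
        acc.a += w * s.a;
        t += 0.5f * diameter;                      // step scales with footprint
    }
    return acc;
}
```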
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBDMYzWra8OrgX3YXTIjUnSOb9SLG3SdgeQzV9sDcKe1uQ_RCWZQ7ipNVmoF9BgstcEMe5FTT_nFmEKNlZ_Gi-5hLvBkfIwnjdYRWnY9T_SQBCbntjYfTpfuLBshymg54iGtiTq_rC0erF/s1600/Screen+Shot+2014-12-30+at+1.28.16+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBDMYzWra8OrgX3YXTIjUnSOb9SLG3SdgeQzV9sDcKe1uQ_RCWZQ7ipNVmoF9BgstcEMe5FTT_nFmEKNlZ_Gi-5hLvBkfIwnjdYRWnY9T_SQBCbntjYfTpfuLBshymg54iGtiTq_rC0erF/s1600/Screen+Shot+2014-12-30+at+1.28.16+PM.png" height="91" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;">Image Credit: "GigaVoxels: A Voxel-Based Rendering Pipeline For Efficient Exploration Of Large and Detailed Scenes", Cyril Crassin.</span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
This work also provides a robust stack-less GPU octree data structure that can be constructed, traversed, and updated in parallel. It even provides an out-of-core data management process that coordinates transfer of octree data between the CPU and GPU memory pools as needed for very large scenes.</div>
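<div style="text-align: left;">
The core trick behind a stack-less traversal is storing all eight siblings contiguously in a node pool, so a single integer addresses a whole child tile and point location needs no recursion or stack. A simplified sketch of that descent follows; the actual GigaVoxels node pool also carries brick pointers and paging state.</div>

```cpp
#include <cassert>
#include <vector>

// Pointer-free node pool: 8 siblings stored contiguously, so one integer
// addresses the entire child tile. (Simplified; illustration only.)
struct PoolNode { int firstChild; };  // -1 marks a leaf

// Iterative point location: no stack, no recursion. Each step picks the
// child octant by comparing the query point to the running node center.
int descend(const std::vector<PoolNode>& pool, float x, float y, float z,
            float cx, float cy, float cz, float half) {
    int idx = 0;                                  // root
    while (pool[idx].firstChild >= 0) {
        half *= 0.5f;
        int oct = 0;
        if (x >= cx) { oct |= 1; cx += half; } else { cx -= half; }
        if (y >= cy) { oct |= 2; cy += half; } else { cy -= half; }
        if (z >= cz) { oct |= 4; cz += half; } else { cz -= half; }
        idx = pool[idx].firstChild + oct;         // jump into the child tile
    }
    return idx;                                   // leaf node index
}
```

Because every step is plain index arithmetic on a flat array, the same loop maps directly onto a GPU thread per ray or per query point.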
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSyCaca_Vt1PepYEoho23ewCOSZZftK_6GgLsbPemEqjmTrB4SOvzMtSpKSuXVLaoTMfkjARLcnUY-qUvtEm5tGCmUGu6O2ze77tGIg3EnzuYgHsFKqLB7Pn1MKKXirVYO-UUrwMsW8RNs/s1600/Screen+Shot+2014-12-30+at+1.29.18+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSyCaca_Vt1PepYEoho23ewCOSZZftK_6GgLsbPemEqjmTrB4SOvzMtSpKSuXVLaoTMfkjARLcnUY-qUvtEm5tGCmUGu6O2ze77tGIg3EnzuYgHsFKqLB7Pn1MKKXirVYO-UUrwMsW8RNs/s1600/Screen+Shot+2014-12-30+at+1.29.18+PM.png" height="195" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;">Image Credit: "GigaVoxels: A Voxel-Based Rendering Pipeline For Efficient Exploration Of Large and Detailed Scenes", Cyril Crassin.</span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
The key limitation is that most of the computer graphics community is heavily invested in triangle-mesh pipelines, so content must first be voxelized and built into an octree, a slow process that has to be repeated whenever the geometry changes. SLAM applications, by contrast, typically generate voxel-based geometry in the first place, so it is actually more convenient to render directly from that structure.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Advantages</div>
<div style="text-align: left;">
</div>
<ul>
<li>Indirect Illumination</li>
<li>Level-of-detail</li>
<li>Out-of-core data management</li>
</ul>
<br />
<div style="text-align: left;">
Disadvantages</div>
<div style="text-align: left;">
</div>
<div>
<ul>
<li>Slow Voxelization of Polygonal Meshes</li>
<li>Slow(er) Rendering than Hardware Rasterization</li>
</ul>
</div>
<div>
<br /></div>
<div>
<u style="font-weight: bold;">Octree-SLAM -</u> This project will attempt to merge the desirable qualities of these systems into a localization, mapping, and rendering system useful for augmented reality applications. As RGB-D cameras become more widespread on smartphones, this work may give app developers a way to render realistic 3D reconstructions of the world from live camera data.</div>
<div>
</div>
<div>
Potential Qualities</div>
<div>
<ul>
<li>Update Speed (Parallel)</li>
<li>High-Resolution</li>
<li>Scalability</li>
<li>Dynamic Scenes</li>
<li>Indirect Illumination</li>
<li>Level-of-detail</li>
<li>Out-of-core (Large/Dense Scenes)</li>
<li>No Meshing or Voxelization Steps</li>
</ul>
</div>
<div>
Follow progress of the source code on <a href="https://github.com/dkotfis/Octree-SLAM">Github</a>.</div>