I have implemented voxel cone tracing based on Cyril Crassin's GigaVoxels work for physically based rendering of a reconstructed scene. To my knowledge, this is the first use of cone tracing to render real camera data.
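At its core, cone tracing replaces many individual rays with a few wide cones that sample progressively coarser, pre-filtered levels of the octree as they widen. Below is a minimal C++ sketch of that marching loop with front-to-back occlusion accumulation; the sampleSVO helper, the glm types, and the specific step sizes are illustrative assumptions, not the actual GigaVoxels implementation.

```cpp
#include <glm/glm.hpp>

// Placeholder for the real octree lookup: an actual renderer would sample a
// pre-filtered brick pool / mip chain at the level of detail whose footprint
// matches 'diameter'. Returns rgb = color, a = occlusion.
glm::vec4 sampleSVO(const glm::vec3& pos, float diameter)
{
    (void)pos; (void)diameter;
    return glm::vec4(0.0f);   // stub: empty space
}

// March one cone through the SVO, accumulating color front to back.
// 'aperture' is tan(half-angle); the sampling diameter grows with distance,
// so the cone reads coarser octree levels the farther it travels.
glm::vec4 traceCone(const glm::vec3& origin, const glm::vec3& dir,
                    float aperture, float maxDist, float voxelSize)
{
    glm::vec4 accum(0.0f);
    float t = voxelSize;    // start one voxel out to avoid self-sampling

    while (t < maxDist && accum.a < 1.0f) {
        float diameter = glm::max(voxelSize, 2.0f * aperture * t);
        glm::vec4 s = sampleSVO(origin + t * dir, diameter);

        // Front-to-back compositing with pre-multiplied alpha.
        float weight = (1.0f - accum.a) * s.a;
        accum += glm::vec4(weight * glm::vec3(s), weight);

        t += 0.5f * diameter;   // step proportional to the cone width
    }
    return accum;
}
```

The key property is that the sampling footprint, and therefore the octree level, grows with distance from the cone apex, so one cone can stand in for a whole bundle of rays.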
Here is a raw camera image from a Kinect:
This is the reconstructed SVO rendered from the same camera view, this time by extracting voxels at 1 cm resolution and drawing them with instanced rendering in OpenGL. This is the typical approach to rendering reconstructed 3D maps.
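For reference, the instanced path amounts to uploading one center position per extracted voxel and issuing a single instanced draw of a cube mesh. The sketch below assumes a GL 3.3+ context, a prebuilt unit-cube VAO, and hypothetical names such as drawVoxelsInstanced; it is not the project's actual code.

```cpp
#include <GL/glew.h>
#include <vector>

// Minimal sketch: upload one position per extracted voxel and draw all voxels
// as instanced cubes. Assumes 'cubeVao' already holds a unit-cube mesh
// (36 vertices at attribute location 0) and that a GL 3.3+ context is current.
void drawVoxelsInstanced(GLuint cubeVao,
                         const std::vector<float>& voxelCenters, // xyz per voxel
                         float voxelSize)
{
    (void)voxelSize;   // typically passed to the shader as a uniform

    GLuint instanceVbo = 0;
    glGenBuffers(1, &instanceVbo);

    glBindVertexArray(cubeVao);
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glBufferData(GL_ARRAY_BUFFER,
                 voxelCenters.size() * sizeof(float),
                 voxelCenters.data(), GL_STATIC_DRAW);

    // Attribute 1: per-instance voxel center, advanced once per instance.
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), nullptr);
    glVertexAttribDivisor(1, 1);

    // One draw call renders every extracted voxel.
    glDrawArraysInstanced(GL_TRIANGLES, 0, 36,
                          static_cast<GLsizei>(voxelCenters.size() / 3));

    glBindVertexArray(0);
    glDeleteBuffers(1, &instanceVbo);
}
```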
Now here is the same SVO rendered with cone tracing instead. The detailed texture of the hardwood floor is now captured in the rendered image. However, the ray-marching process appears to produce artifacts where samples step through the corner of the wall on the right side.
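Such leaking through thin geometry is a known weakness of fixed-ratio cone marching: when the step length grows with the cone diameter, a thin wall corner can fall between two consecutive samples. Below is a hedged sketch of two common mitigations, offsetting the cone origin along the surface normal and shortening the step relative to the cone diameter; the function names and constants are illustrative assumptions, not the fix used here.

```cpp
#include <glm/glm.hpp>

// Push the cone apex one voxel off the surface along its normal so the first
// samples do not immediately land inside the surface's own voxels.
glm::vec3 offsetConeOrigin(const glm::vec3& surfacePos,
                           const glm::vec3& surfaceNormal,
                           float voxelSize)
{
    return surfacePos + surfaceNormal * voxelSize;
}

// Advance by a smaller fraction of the cone diameter (but never less than one
// voxel) so thin occluders are less likely to fall between consecutive samples.
float nextStep(float coneDiameter, float voxelSize, float stepFactor = 0.3f)
{
    return stepFactor * glm::max(voxelSize, coneDiameter);
}
```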
And here is another cone-traced image rendered from an alternate camera view:
Future work will use cone tracing to render artificial objects in these real 3D scenes for augmented reality with consistent lighting.