Kinect Fusion updated to include colour

A few weeks ago the Kinect SDK was updated to version 1.8. I'd been eagerly awaiting this update for one reason, in particular: aside from receiving some updates to provide more robust tracking – something that was very much needed – Kinect Fusion has now been updated to include realistic colours in the output.

There are some additional SDK enhancements, such as a background removal API (good for green-screening) and HTML support (handy for interactive kiosks), but the ones that interest me most relate to Kinect Fusion.

There are a few new Kinect Fusion samples I want to explore – one combines Face Tracking with Kinect Fusion for head capture, another enables multi-sensor support with Kinect Fusion – but I haven't had the chance to dig into them, as yet.

The first order of business was to integrate colour into the existing AutoCAD-Kinect samples. This didn't prove too tricky, overall: I chose the path of least resistance and forked the existing Kinect Fusion command implementation, so there's currently quite a bit of duplicate code. At some point I'll factor out shared functions or extend the class hierarchy, but the first step was to get the commands working with as few differences as made sense. That's what I've included, for now.

One thing worth bearing in mind: while there have been improvements to the robustness of Kinect Fusion's tracking algorithm, adding colour support has reduced the size of the volume you can realistically track, mainly because the additional per-voxel colour information needs memory capacity on your GPU. Which is completely understandable.
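To get a feel for why colour eats into the trackable volume, here's a back-of-envelope sketch of the GPU memory a cubic reconstruction volume consumes. The byte counts are assumptions for illustration only: a typical TSDF volume stores a signed distance plus a weight (roughly 4 bytes per voxel), and per-voxel colour adds roughly another 4 bytes (e.g. RGBA).

```python
# Rough estimate of GPU memory for a Kinect Fusion-style reconstruction
# volume, with and without per-voxel colour. Byte counts are illustrative
# assumptions, not figures from the SDK documentation.

def volume_memory_mb(voxels_per_axis, bytes_per_voxel):
    """Memory in MiB for a cubic volume of voxels_per_axis^3 voxels."""
    total_voxels = voxels_per_axis ** 3
    return total_voxels * bytes_per_voxel / (1024 ** 2)

for res in (256, 384, 512):
    tsdf = volume_memory_mb(res, 4)    # distance + weight only
    colour = volume_memory_mb(res, 8)  # plus ~4 bytes of colour
    print(f"{res}^3 voxels: {tsdf:.0f} MiB without colour, "
          f"{colour:.0f} MiB with colour")
```

Even under these rough assumptions, a 512³ volume roughly doubles from 512 MiB to 1 GiB once colour is stored – so on a given GPU you either shrink the volume or accept coarser voxels.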

One reason I'm happy to have colour information, at last, is that it has allowed me to identify some core issues with the previous version of the code: having colour makes it so much easier to troubleshoot the point cloud output. This means I'm actually going to be able to make more progress with the sample's development than I had in the past, so I fully expect another update to come between now and AU.

Here's a link to the new samples, and here's a quick screenshot of the results of the new KINFUSCOL command, which I used to capture a Dyson Hot+Cool that I bought a few months ago:

Kinect Fusion capture in colour

I'm pretty confident that I could have captured more of the object if I'd been more patient – the tracking is still a little flaky, given that we introduce latency by retrieving the points in 3D rather than consuming a rendered bitmap – but the results still look fairly good as they stand.

2 responses to “Kinect Fusion updated to include colour”

  1. Hi Kean, amazing stuff for sure. I keep wondering how close we are to some kind of consumer-level lidar for things like mountain faces. All we have is lousy 10-meter-density USGS surface info, yet we finally have programs like InfraWorks that could handle finer-detail surfaces. I want to scan the lakes I fish at in the Sierras (near Bishop on the 395 in California). What is the closest technology to doing that? I mean without HD scanners and full survey equipment? The Gatewing drone seems the best so far.

  2. Hi James,

    Drones (probably octocopters - better at hovering to capture vertical surfaces) with high-res cameras are likely the best bet today. You'd then use 3D reconstruction (e.g. ReCap Photo) to get a usable model.

    There have been interesting experiments in drone-carried lidar/laser scanning, but I don't think it's there yet for this kind of job.

    One day depth cameras may fit this need, too, but probably not structured light sensors such as Kinect 1.0 (more likely is a time-of-flight system such as that driving Kinect 2.0).

    Kean
