Kinect Fusion for Kinect for Windows 2

A few weeks ago I received the official retail version of Kinect for Windows 2 (having signed up for the pre-release program I had the right to two sensors: the initial developer version and the final retail version).

The Kinect for Windows v2 box

After some initial integration work to get the AutoCAD-Kinect samples working with the pre-release Kinect SDK, I hadn't spent much time looking at it since: mainly because I was waiting for Kinect Fusion to be part of the SDK. The good (actually, great) news is that it's now there and it's working really well.

For those of you who attended my AU 2013 session on Kinect Fusion, you'll know that the results with KfW v1 were a bit "meh": it seemed to have throughput issues when integrated into a standalone 3D application such as AutoCAD, and the processing lag meant that tracking dropped regularly and it was therefore difficult (although not impossible) to do captures of any reasonable magnitude.

Some design decisions with the original KfW device also made scanning harder: the USB cable was shorter than that of Kinect for Xbox 360, weighing in at about 1.6m of attached cable with an additional 45cm between the power unit and the PC. So even with power and USB extension cables, you felt fairly constrained. I suspect the original KfW device was really only intended to work on someone's desk: it was only when Kinect Fusion came along that the need for a longer cable became evident (and Kinect Fusion only works with Kinect for Windows devices, of course).

Thankfully this has been addressed in KfW v2: the base cable is now 2.9m with an additional 2.2m between the box that integrates power and the USB connection. So there's really a lot more freedom to move around when scanning.

I have to admit that I wasn't very optimistic that Kinect Fusion would be "fixed" with KfW v2: apart from anything else, the increased resolution of the device – both in terms of colour and depth data – just means there's more processing to do. But somehow it does work better: I suspect that improvements such as making use of a worker thread to integrate frames coming in from the device – as well as using Parallel.For to parallelize big parts of the processing – have helped speed things up. And these are just the changes obvious from the sample code: there are probably many more optimisations under the hood. Either way the results are impressive: with Kinect Fusion integrated into AutoCAD I can now scan objects much more reliably than I could with KfW v1.
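The pattern described above – a dedicated worker thread draining incoming frames while the per-frame processing is itself parallelized – can be sketched as follows. This is purely an illustration of the threading pattern (the actual SDK sample is C# and uses Parallel.For; the frame format and processing step here are made-up placeholders):

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

def make_frame(rows, cols, value):
    # Hypothetical stand-in for a raw depth frame: a grid of depth values.
    return [[value] * cols for _ in range(rows)]

def process_row(row):
    # Placeholder for per-row depth processing (e.g. unit conversion).
    return [v * 2 for v in row]

def integrate(frame, pool):
    # Parallelize the per-row work, analogous to Parallel.For in the C# sample.
    return list(pool.map(process_row, frame))

frames = queue.Queue()
results = []

def worker():
    # Dedicated worker thread: drains queued frames so the capture loop
    # is never blocked waiting for processing to finish.
    with ThreadPoolExecutor() as pool:
        while True:
            frame = frames.get()
            if frame is None:  # sentinel: no more frames
                break
            results.append(integrate(frame, pool))

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    frames.put(make_frame(2, 4, i))
frames.put(None)
t.join()
print(len(results))  # 3 frames integrated
```

The key point is the decoupling: the device delivery loop only enqueues, and all the heavy lifting happens off that thread.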

Here's a sample shot of a section of my living room (with one of my offspring doing his best to stand very still):

Kinect Fusion in the living room

I was even able to do a fairly decent scan of a car (although it did take me some time to get a scan that looks this good, and it's far from being complete).

Kinect Fusion on a car

You certainly still have to move the device quite slowly to make sure that tracking is maintained during the scanning process – I still get quirky results when I'm too optimistic when scanning a tricky volume – and I'd recommend scanning larger objects in sections and aggregating them afterwards. But Kinect Fusion works much better than it did, and it now performs much more comparably inside a 3D system to the standalone 2D samples (as Kinect Fusion also renders images of the volume being scanned for 2D host applications).

I still have some work to do to tidy up the code before posting, but it's in fairly good shape, overall. I'm also hoping to provide the option to create a mesh – not just a point cloud – as Kinect Fusion does maintain a mesh of the capture volume internally, but we'll see how feasible that is given the size of these captures (2-3 million points are typical).

There's so much happening in this space, right now, with exciting mobile scanning solutions such as Project Tango on the horizon as well as seemingly every third project on Kickstarter (I exaggerate, but still). It's hard to know what the future holds for 3D scanning technologies such as Kinect Fusion that require a desktop-class system with a decent GPU, but it's certainly interesting to see how things are evolving.

  1. Hi!
I'm not an owner of a Kinect sensor, but I'm very interested in buying one to do environmental scans for my job (as reference in CAD). I have tried to understand what the difference is between using Kinect Fusion and your AutoCAD implementations. Can Kinect Fusion export the point cloud, and AutoCAD import it? Or does Kinect Fusion not save the previously scanned point cloud, just what it sees at the moment? Can Kinect Fusion export a 360-degree scan of a room, for example? And do I understand you correctly that what you want to do is not use Kinect Fusion, but directly import the point cloud in AutoCAD?
    Thanks /Andreas

    1. The main Kinect Fusion app (which I should really refer to as "Kinect Fusion Explorer", which ships as a Kinect SDK sample) generally lets you create better scans - the 2D rendering of the volume is very efficient - but only allows exports of untextured .OBJ and .STL files. You would need to modify the code to support writing to a text file (a .xyz file containing points & RGB colours) which can then be indexed directly in AutoCAD 2014 or translated to a .RCP/.RCS file using ReCap Studio which can then be imported into AutoCAD 2015.
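The text-file export described here – one point per line with X Y Z coordinates and RGB colours – is simple to produce. A minimal Python sketch (the six-field layout and the 0-255 colour range are assumptions based on the common .xyz convention; the actual modification would live in the C# Explorer sample):

```python
def write_xyz(path, points):
    # points: iterable of (x, y, z, r, g, b) tuples, colours in 0-255.
    # Each point becomes one whitespace-separated line in the .xyz file.
    with open(path, "w") as f:
        for x, y, z, r, g, b in points:
            f.write(f"{x:.6f} {y:.6f} {z:.6f} {r} {g} {b}\n")

# Two illustrative points
write_xyz("scan.xyz", [(0.1, 0.2, 0.3, 255, 0, 0),
                       (1.0, 2.0, 3.0, 0, 128, 255)])
```

A file in this shape can then be indexed in AutoCAD 2014 or run through ReCap Studio to produce .RCP/.RCS for AutoCAD 2015, as described above.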

      Kinect Fusion aggregates multiple frames of depth/colour data, so you can map a volume from multiple sides. The volume you can scan is dependent on the size of your GPU's memory, so I wouldn't expect to scan an unlimited area, though.
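To get a feel for why GPU memory bounds the scan volume: Kinect Fusion stores the reconstruction as a dense voxel grid, so memory grows with the cube of the resolution. A back-of-the-envelope calculation (the voxel counts, bytes-per-voxel and voxels-per-metre figures here are illustrative assumptions, not SDK specifics):

```python
# Illustrative sizing of a Kinect Fusion-style reconstruction volume.
voxels_per_axis = 512       # assumed grid resolution per axis
bytes_per_voxel = 8         # assumed: e.g. TSDF value + weight + colour
total_voxels = voxels_per_axis ** 3
size_gib = total_voxels * bytes_per_voxel / 2**30
print(f"{total_voxels:,} voxels -> {size_gib:.1f} GiB")

# At an assumed 256 voxels per metre, that 512^3 grid only
# covers a cube 2 m on a side:
voxels_per_metre = 256
print(f"cube edge: {voxels_per_axis / voxels_per_metre} m")
```

Doubling the scanned extent at the same detail multiplies the memory by eight, which is why larger areas are better scanned in sections.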

      I've taken big chunks of code from the Kinect Fusion Explorer sample and have embedded them directly in AutoCAD, displaying a "point cloud" transiently during the scanning process rather than using the 2D rendered output. Which has its pros and cons, let's say. 🙂

      I hope this clarifies some points,

      Kean

      1. Thanks for the clarification! I hope you post some compiled files in the future 🙂
        Fantastic work! Make the same for Autodesk Inventor 😉

      2. Hi Kean,

        If the option to export textured meshes is not there, you can always use VoxelHashing (google it for the github sources), or resort to photogrammetry (Pix4DMapper, PhotoScan).

        Although it's somewhat odd, since we have a full HD camera on the Kinect v2 which could be used for texturing the resultant mesh, not only for colouring the point clouds.
