OK, here goes: my first (public) attempt at integrating the brand new Kinect Fusion functionality – made available this week in v1.7 of Microsoft's Kinect for Windows SDK – into AutoCAD. There are still a few quirks, so I dare say I'll be posting an update in due course.

Kinect Fusion capture

As mentioned in the last post, I've been working on this for some time but can now show it publicly, as the required SDK capabilities have now been published. As part of this effort, I've gone ahead and made sure the other Kinect samples I've written for AutoCAD work with this version of the SDK: all can be found here.

Much of the work involved integrating the appropriate Kinect API calls into an AutoCAD-resident jig, much as we've seen before when displaying/importing a single depth frame. Kinect Fusion introduces the idea of a reconstruction volume that gets gradually populated with data streamed in from a Kinect sensor, building up an underlying mesh that represents the 3D model.
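To get a feel for the numbers involved, the volume's voxel grid is just its physical dimensions multiplied by the voxel density along each axis. Here's a quick back-of-the-envelope sketch using the defaults from the KINFUS command below (the 2-bytes-per-voxel figure matches the short[] buffer used with ExportVolumeBlock later on; the runtime's internal storage may well differ):

```csharp
using System;

class VolumeSize
{
  static void Main()
  {
    // Defaults used by the KINFUS command below
    double width = 3, height = 2, length = 3;   // meters
    int voxelsPerMeter = 256;

    int vx = (int)(voxelsPerMeter * width);   // 768
    int vy = (int)(voxelsPerMeter * height);  // 512
    int vz = (int)(voxelsPerMeter * length);  // 768

    long voxels = (long)vx * vy * vz;

    // Assuming 2 bytes (a short) per voxel, as in ExportVolumeBlock
    double mb = voxels * sizeof(short) / (1024.0 * 1024.0);

    Console.WriteLine($"{vx} x {vy} x {vz} = {voxels} voxels (~{mb:F0} MB)");
  }
}
```

So even a modest 3 × 2 × 3 m volume at 256 voxels per meter is around 300 million voxels, which is why a beefy GPU helps.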

AutoCAD is OK with meshes up to a certain size, but I wanted to get at the raw point data instead. The Kinect team has kindly provided the Reconstruction.ExportVolumeBlock() method for just this purpose – it's intended to populate an array with voxel data which you can interpolate trilinearly to extract model/mesh information (erk) – but I haven't yet been able to have it return anything but an array of zeroes. So the code currently asks the Kinect Fusion runtime to calculate a mesh from the reconstruction volume, and we then use the vertices from that mesh as the points to display.
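For reference, the trilinear interpolation mentioned above is straightforward once you have the eight voxel samples surrounding a point: you interpolate along one axis, then the next, then the last. Here's a minimal, self-contained sketch of the idea (the cell layout and fractional coordinates are my own illustration, not the SDK's actual voxel format):

```csharp
using System;

class Trilinear
{
  static double Lerp(double a, double b, double t) => a + (b - a) * t;

  // Sample a value at fractional position (fx, fy, fz) inside a unit
  // cell whose eight corner values are c[x, y, z], with x/y/z in {0, 1}
  static double Sample(double[,,] c, double fx, double fy, double fz)
  {
    // Interpolate along X, then Y, then Z
    double x00 = Lerp(c[0, 0, 0], c[1, 0, 0], fx);
    double x10 = Lerp(c[0, 1, 0], c[1, 1, 0], fx);
    double x01 = Lerp(c[0, 0, 1], c[1, 0, 1], fx);
    double x11 = Lerp(c[0, 1, 1], c[1, 1, 1], fx);
    double y0 = Lerp(x00, x10, fy);
    double y1 = Lerp(x01, x11, fy);
    return Lerp(y0, y1, fz);
  }

  static void Main()
  {
    var corners = new double[2, 2, 2];
    corners[1, 1, 1] = 8.0; // all other corners are 0

    // At the center of the cell each corner contributes 1/8 of its value
    Console.WriteLine(Sample(corners, 0.5, 0.5, 0.5));
  }
}
```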

The typical Kinect Fusion sample uses quite a different technique: it generates a shaded view of the model from a particular viewpoint – the underlying API casts rays into the reconstruction volume – which is very quick. Calculating a mesh and extracting its vertices is slower – especially when we get into the millions of points – so we have to accept that the responsiveness is going to be different.

And that's mostly OK: we simply drop incoming frames when we're already processing one, as otherwise we'd build up a queue of unprocessed frames, leading to a significant lag between the movement of the sensor and the population of the reconstruction volume. But this also means that there's a much bigger risk of the Kinect Fusion runtime not being able to track the movement – as the time between processed frames is larger, so are the differences between them – at which point we receive "tracking failures".
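The drop-frames-while-busy logic boils down to a single guard flag around the frame handler. Here's a deliberately synchronous, simplified sketch of the pattern used by the jig's _processing flag below (the class and member names here are mine):

```csharp
using System;

class FrameDropper
{
  private bool _processing;
  public int Processed, Dropped;

  // Called for every incoming frame; frames that arrive while the
  // previous one is still being processed are simply dropped
  public void OnFrame(Action process)
  {
    if (_processing) { Dropped++; return; }
    _processing = true;
    try { process(); Processed++; }
    finally { _processing = false; }
  }

  static void Main()
  {
    var fd = new FrameDropper();

    // Simulate a frame whose processing overlaps a second, re-entrant
    // frame: the inner one gets dropped
    fd.OnFrame(() => fd.OnFrame(() => { }));

    Console.WriteLine($"{fd.Processed} processed, {fd.Dropped} dropped");
  }
}
```

The real jig does the processing on a background dispatcher thread, so the flag stays set until ProcessDepthData() completes rather than for the duration of a synchronous call.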

Which ultimately means the user has to move the sensor really slowly to keep everything "on track". Here's a video that should give you a sense of the problem, as I attempt to capture a vase and an orchid on my dining table:

[I did edit the video to cut out some waiting as the points are fully imported at the end: the resulting point cloud has around 1.5 million points, so the current process of writing them to an ASCII file, converting this to LAS and then indexing the LAS to PCG is far too slow… this is something I am planning to streamline, incidentally.]

Here's a normal photo of the scene, to give you a sense of what I'm trying to capture:

Vase and orchid on my dining table

During the video you'll notice a number of tracking failures. When one of these happens, you have four main options:

  1. Return the sensor to the position at which the tracking was last successful (to continue mapping).
  2. Cancel the capture by hitting escape.
  3. Complete the capture by clicking the mouse.
  4. Let the errors accumulate: when the count hits 100 consecutive errors (this is coded in the sample – you could disable this or change the threshold) the reconstruction will get reset.

I hope that at some point I'll be able to tweak the processing to make it sufficiently efficient to eliminate the problem of tracking being lost between frames. I also hope to be able to integrate colour into the point cloud: this isn't something that's directly provided by Kinect Fusion, but I expect there's some way to get there.

Here's the C# code for this implementation (you should download the complete samples – a repeat of the link from earlier in the post, in case you missed it – to see the code it relies upon):

using Autodesk.AutoCAD.EditorInput;
using Autodesk.AutoCAD.Geometry;
using Autodesk.AutoCAD.Runtime;
using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit.Fusion;
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.IO;
using System.Threading;
using System.Windows.Threading;

#pragma warning disable 1591

namespace KinectSamples
{
  public static class Utils
  {
    public static Point3dCollection
    Point3dFromVertCollection(
      ReadOnlyCollection<Vector3> vecs
    )
    {
      var pts = new Point3dCollection();
      foreach (var vec in vecs)
      {
        pts.Add(new Point3d(vec.X, vec.Z, -vec.Y));
      }
      return pts;
    }

    public static List<ColoredPoint3d>
    ColoredPoint3FromVertCollection(
      ReadOnlyCollection<Vector3> vecs
    )
    {
      var pts = new List<ColoredPoint3d>();
      foreach (var vec in vecs)
      {
        pts.Add(
          new ColoredPoint3d() { X = vec.X, Y = vec.Z, Z = -vec.Y }
        );
      }
      return pts;
    }
  }

  // A struct containing depth image pixels and frame timestamp

  internal struct DepthData
  {
    public DepthImagePixel[] DepthImagePixels;
    public long FrameTimestamp;
  }

  public class KinectFusionJig : KinectPointCloudJig
  {
    // Constants

    private const int MaxTrackingErrors = 100;
    private const int ImageWidth = 640;
    private const int ImageHeight = 480;
    private const ReconstructionProcessor ProcessorType =
      ReconstructionProcessor.Amp;
    private const int DeviceToUse = -1;
    private const bool AutoResetReconstructionWhenLost = true;
    private const int ResetOnTimeStampSkippedMillisecondsGPU = 3000;
    private const int ResetOnTimeStampSkippedMillisecondsCPU = 6000;

    // Member variables

    private Editor _ed;
    private SynchronizationContext _ctxt;
    private double _roomWidth;
    private double _roomLength;
    private double _roomHeight;
    private int _lowResStep;
    private int _voxelsPerMeter;
    private FusionFloatImageFrame _depthFloatBuffer;
    private Matrix4 _worldToCameraTransform;
    private Matrix4 _defaultWorldToVolumeTransform;
    private Reconstruction _volume;
    private int _processedFrameCount;
    private long _lastFrameTimestamp = 0;
    private bool _lastTrackingAttemptSucceeded;
    private int _trackingErrors;
    private int _frameDataLength;
    private bool _processing;
    private bool _translateResetPoseByMinDepthThreshold = true;
    private float _minDepthClip =
      FusionDepthProcessor.DefaultMinimumDepth;
    private float _maxDepthClip =
      FusionDepthProcessor.DefaultMaximumDepth;

    // Constructor

    public KinectFusionJig(
      Editor ed, SynchronizationContext ctxt,
      double width, double length, double height, int vpm, int step
    )
    {
      _ed = ed;
      _ctxt = ctxt;

      _roomWidth = width;
      _roomLength = length;
      _roomHeight = height;
      _voxelsPerMeter = vpm;
      _lowResStep = step;
      _processing = false;
      _lastTrackingAttemptSucceeded = true;
      _vecs = new List<ColoredPoint3d>();
    }

    private void PostToAutoCAD(SendOrPostCallback cb)
    {
      _ctxt.Post(cb, null);
      System.Windows.Forms.Application.DoEvents();
    }

    public override bool StartSensor()
    {
      if (_kinect != null)
      {
        _kinect.DepthStream.Enable(
          DepthImageFormat.Resolution640x480Fps30
        );

        _frameDataLength = _kinect.DepthStream.FramePixelDataLength;

        try
        {
          // Allocate a volume

          var volParam =
            new ReconstructionParameters(
              _voxelsPerMeter,
              (int)(_voxelsPerMeter * _roomWidth),
              (int)(_voxelsPerMeter * _roomHeight),
              (int)(_voxelsPerMeter * _roomLength)
            );

          _worldToCameraTransform = Matrix4.Identity;

          _volume =
            Reconstruction.FusionCreateReconstruction(
              volParam, ProcessorType, DeviceToUse,
              _worldToCameraTransform
            );

          _defaultWorldToVolumeTransform =
            _volume.GetCurrentWorldToVolumeTransform();

          ResetReconstruction();
        }
        catch (InvalidOperationException ex)
        {
          _ed.WriteMessage("\nInvalid operation: " + ex.Message);
          return false;
        }
        catch (DllNotFoundException ex)
        {
          _ed.WriteMessage("\nDLL not found: " + ex.Message);
          return false;
        }
        catch (ArgumentException ex)
        {
          _ed.WriteMessage("\nInvalid argument: " + ex.Message);
          return false;
        }
        catch (OutOfMemoryException ex)
        {
          _ed.WriteMessage("\nOut of memory: " + ex.Message);
          return false;
        }

        _depthFloatBuffer =
          new FusionFloatImageFrame(ImageWidth, ImageHeight);

        _kinect.Start();

        _kinect.ElevationAngle = 0;

        return true;
      }

      _ed.WriteMessage(
        "\nUnable to start Kinect sensor - " +
        "are you sure it's plugged in?"
      );

      return false;
    }

    public void OnDepthFrameReady(
      object sender, DepthImageFrameReadyEventArgs e
    )
    {
      if (!_processing && !_finished)
      {
        using (var depthFrame = e.OpenDepthImageFrame())
        {
          if (depthFrame != null)
          {
            DepthData depthData = new DepthData();

            // Save frame timestamp

            depthData.FrameTimestamp = depthFrame.Timestamp;

            // Create local depth pixels buffer

            depthData.DepthImagePixels =
              new DepthImagePixel[depthFrame.PixelDataLength];

            // Copy depth pixels to local buffer

            depthFrame.CopyDepthImagePixelDataTo(
              depthData.DepthImagePixels
            );

            // Process on a background thread

            Dispatcher.CurrentDispatcher.BeginInvoke(
              DispatcherPriority.Background,
              (Action<DepthData>)((d) => ProcessDepthData(d)),
              depthData
            );

            // Stop other processing from happening until the
            // background processing of this frame has completed

            _processing = true;
          }
        }
      }
    }

    // Process the depth input

    private void ProcessDepthData(DepthData depthData)
    {
      try
      {
        CheckResetTimeStamp(depthData.FrameTimestamp);

        // Convert the depth image frame to depth float image frame

        FusionDepthProcessor.DepthToDepthFloatFrame(
          depthData.DepthImagePixels,
          ImageWidth,
          ImageHeight,
          _depthFloatBuffer,
          FusionDepthProcessor.DefaultMinimumDepth,
          FusionDepthProcessor.DefaultMaximumDepth,
          false
        );

        bool trackingSucceeded =
          _volume.ProcessFrame(
            _depthFloatBuffer,
            FusionDepthProcessor.DefaultAlignIterationCount,
            FusionDepthProcessor.DefaultIntegrationWeight,
            _volume.GetCurrentWorldToCameraTransform()
          );

        if (!trackingSucceeded)
        {
          _trackingErrors++;

          PostToAutoCAD(
            a =>
            {
              _ed.WriteMessage(
                "\nTracking failure. Keep calm and carry on."
              );

              if (AutoResetReconstructionWhenLost)
              {
                _ed.WriteMessage(
                  " ({0}/{1})",
                  _trackingErrors, MaxTrackingErrors
                );
              }
              else
              {
                _ed.WriteMessage(" {0}", _trackingErrors);
              }
            }
          );
        }
        else
        {
          if (!_lastTrackingAttemptSucceeded)
          {
            PostToAutoCAD(
              a => _ed.WriteMessage("\nWe're back on track!")
            );
          }

          // Set the camera pose and reset tracking errors

          _worldToCameraTransform =
            _volume.GetCurrentWorldToCameraTransform();
          _trackingErrors = 0;
        }

        _lastTrackingAttemptSucceeded = trackingSucceeded;

        if (
          AutoResetReconstructionWhenLost &&
          !trackingSucceeded &&
          _trackingErrors >= MaxTrackingErrors
        )
        {
          PostToAutoCAD(
            a =>
            {
              _ed.WriteMessage(
                "\nReached error threshold: automatically resetting."
              );
              _vecs.Clear();
            }
          );

          Console.Beep();

          ResetReconstruction();
        }

        _points = GetPointCloud(true);

        ++_processedFrameCount;
      }
      catch (InvalidOperationException ex)
      {
        PostToAutoCAD(
          a =>
          {
            _ed.WriteMessage(
              "\nInvalid operation: {0}", ex.Message
            );
          }
        );
      }

      // We can now let other processing happen

      _processing = false;
    }

    // Check if the gap between 2 frames has reached reset time
    // threshold. If yes, reset the reconstruction

    private void CheckResetTimeStamp(long frameTimestamp)
    {
      if (0 != _lastFrameTimestamp)
      {
        long timeThreshold =
          (ReconstructionProcessor.Amp == ProcessorType) ?
            ResetOnTimeStampSkippedMillisecondsGPU :
            ResetOnTimeStampSkippedMillisecondsCPU;

        // Calculate skipped milliseconds between 2 frames

        long skippedMilliseconds =
          Math.Abs(frameTimestamp - _lastFrameTimestamp);

        if (skippedMilliseconds >= timeThreshold)
        {
          PostToAutoCAD(
            a => _ed.WriteMessage("\nResetting reconstruction.")
          );
          ResetReconstruction();
        }
      }

      // Set timestamp of last frame

      _lastFrameTimestamp = frameTimestamp;
    }

    // Reset the reconstruction to initial value

    private void ResetReconstruction()
    {
      // Reset tracking error counter

      _trackingErrors = 0;

      // Set the world-view transform to identity, so the world
      // origin is the initial camera location.

      _worldToCameraTransform = Matrix4.Identity;

      if (_volume != null)
      {
        // Translate the reconstruction volume location away from
        // the world origin by an amount equal to the minimum depth
        // threshold. This ensures that some depth signal falls
        // inside the volume. If set false, the default world origin
        // is set to the center of the front face of the volume,
        // which has the effect of locating the volume directly in
        // front of the initial camera position with the +Z axis
        // into the volume along the initial camera direction of
        // view.

        if (_translateResetPoseByMinDepthThreshold)
        {
          Matrix4 worldToVolumeTransform =
            _defaultWorldToVolumeTransform;

          // Translate the volume in the Z axis by the
          // minDepthThreshold distance

          float minDist =
            (_minDepthClip < _maxDepthClip) ?
            _minDepthClip :
            _maxDepthClip;
          worldToVolumeTransform.M43 -= minDist * _voxelsPerMeter;

          _volume.ResetReconstruction(
            _worldToCameraTransform, worldToVolumeTransform
          );
        }
        else
        {
          _volume.ResetReconstruction(_worldToCameraTransform);
        }
      }
    }

    protected override SamplerStatus SamplerData()
    {
      if (_vecs.Count > 0)
      {
        _points.Clear();

        foreach (var vec in _vecs)
        {
          _points.Add(
            new Point3d(vec.X, vec.Y, vec.Z)
          );
        }
      }

      ForceMessage();

      return SamplerStatus.OK;
    }

    public override void AttachHandlers()
    {
      // Attach the event handlers

      if (_kinect != null)
      {
        _kinect.DepthFrameReady +=
          new EventHandler<DepthImageFrameReadyEventArgs>(
            OnDepthFrameReady
          );
      }
    }

    public override void RemoveHandlers()
    {
      // Detach the event handlers

      if (_kinect != null)
      {
        _kinect.DepthFrameReady -=
          new EventHandler<DepthImageFrameReadyEventArgs>(
            OnDepthFrameReady
          );
      }
    }

    public Mesh GetMesh()
    {
      return _volume.CalculateMesh(1);
    }

    // Get a point cloud from the vertices of a mesh
    // (would be better to access the volume info directly)

    public Point3dCollection GetPointCloud(bool lowRes = false)
    {
      using (var m = _volume.CalculateMesh(lowRes ? _lowResStep : 1))
      {
        return Utils.Point3dFromVertCollection(
          m.GetVertices()
        );
      }
    }

    public List<ColoredPoint3d> GetColoredPointCloud(
      bool lowRes = false
    )
    {
      using (var m = _volume.CalculateMesh(lowRes ? _lowResStep : 1))
      {
        return Utils.ColoredPoint3FromVertCollection(
          m.GetVertices()
        );
      }
    }

    // Get a point cloud from the volume directly
    // (does not currently work)

    public Point3dCollection GetPointCloud2(bool lowRes = false)
    {
      var step = lowRes ? _lowResStep : 1;
      var res = _voxelsPerMeter / step;
      var destResX = (int)(_roomWidth * res);
      var destResY = (int)(_roomHeight * res);
      var destResZ = (int)(_roomLength * res);
      var destRes = destResX * destResY * destResZ;
      var voxels = new short[destRes];

      // This should return an array of voxels:
      // these are currently all 0

      _volume.ExportVolumeBlock(
        0, 0, 0, destResX, destResY, destResZ, step, voxels
      );

      var pitch = destResX;
      var slice = destResY * pitch;
      var fac = step / 100.0;
      var pts = new Point3dCollection();

      for (int x = 0; x < destResX; x++)
      {
        for (int y = 0; y < destResY; y++)
        {
          for (int z = 0; z < destResZ; z++)
          {
            var vox = voxels[z * slice + y * pitch + x];
            if (vox > 0)
            {
              pts.Add(new Point3d(x * fac, z * fac, -y * fac));
            }
          }
        }
      }
      return pts;
    }

    protected override void ExportPointCloud(
      List<ColoredPoint3d> vecs, string filename
    )
    {
      if (vecs.Count > 0)
      {
        using (StreamWriter sw = new StreamWriter(filename))
        {
          // For each point, write a line to the text file:
          // X, Y, Z, R, G, B

          foreach (ColoredPoint3d pt in vecs)
          {
            sw.WriteLine(
              "{0}, {1}, {2}, {3}, {4}, {5}",
              pt.X, pt.Y, pt.Z, pt.R, pt.G, pt.B
            );
          }
        }
      }
    }

    protected void ExportPointCloud(
      Point3dCollection pts, string filename
    )
    {
      if (pts.Count > 0)
      {
        using (StreamWriter sw = new StreamWriter(filename))
        {
          // For each point, write a line to the text file:
          // X, Y, Z, R, G, B (no color information available here)

          foreach (Point3d pt in pts)
          {
            sw.WriteLine("{0},{1},{2},0,0,0", pt.X, pt.Y, pt.Z);
          }
        }
      }
    }
  }

  public class KinectFusionCommands
  {
    private const int RoomWidth = 3;
    private const int RoomHeight = 2;
    private const int RoomLength = 3;
    private const int VoxelsPerMeter = 256;
    private const int LowResStep = 4;

    private double _roomWidth = RoomWidth;
    private double _roomLength = RoomLength;
    private double _roomHeight = RoomHeight;
    private int _voxelsPerMeter = VoxelsPerMeter;
    private int _lowResStep = LowResStep;

    [CommandMethod("ADNPLUGINS", "KINFUS", CommandFlags.Modal)]
    public void ImportFromKinectFusion()
    {
      var doc =
        Autodesk.AutoCAD.ApplicationServices.
          Application.DocumentManager.MdiActiveDocument;
      var db = doc.Database;
      var ed = doc.Editor;

      // Ask the user for double information

      var pdo = new PromptDoubleOptions("\nEnter width of volume");
      pdo.AllowNegative = false;
      pdo.AllowZero = false;
      pdo.DefaultValue = _roomWidth;
      pdo.UseDefaultValue = true;

      var pdr = ed.GetDouble(pdo);
      if (pdr.Status != PromptStatus.OK)
        return;

      _roomWidth = pdr.Value;

      pdo.Message = "\nEnter length of volume";
      pdo.DefaultValue = _roomLength;
      pdr = ed.GetDouble(pdo);
      if (pdr.Status != PromptStatus.OK)
        return;

      _roomLength = pdr.Value;

      pdo.Message = "\nEnter height of volume";
      pdo.DefaultValue = _roomHeight;
      pdr = ed.GetDouble(pdo);
      if (pdr.Status != PromptStatus.OK)
        return;

      _roomHeight = pdr.Value;

      // Ask the user for integer information

      var pio =
        new PromptIntegerOptions("\nEnter voxels per meter");
      pio.AllowNegative = false;
      pio.AllowZero = false;
      pio.DefaultValue = _voxelsPerMeter;
      pio.UseDefaultValue = true;

      var pir = ed.GetInteger(pio);
      if (pir.Status != PromptStatus.OK)
        return;

      _voxelsPerMeter = pir.Value;

      pio.Message = "\nLow resolution sampling";
      pio.DefaultValue = _lowResStep;

      pir = ed.GetInteger(pio);
      if (pir.Status != PromptStatus.OK)
        return;

      _lowResStep = pir.Value;

      // Create a form to set the sync context properly

      using (var f1 = new Form1())
      {
        var ctxt = SynchronizationContext.Current;
        if (ctxt == null)
        {
          throw
            new System.Exception(
              "Current sync context is null."
            );
        }

        // Create our jig

        var kj =
          new KinectFusionJig(
            ed, ctxt,
            _roomWidth, _roomLength, _roomHeight,
            _voxelsPerMeter, _lowResStep
          );

        if (!kj.StartSensor())
        {
          kj.StopSensor();
          return;
        }

        var pr = ed.Drag(kj);
        if (pr.Status != PromptStatus.OK && !kj.Finished)
        {
          kj.StopSensor();
          return;
        }

        kj.PauseSensor();

        try
        {
          ed.WriteMessage(
            "\nCapture complete: examining points...\n"
          );

          System.Windows.Forms.Application.DoEvents();

          var pts = kj.GetColoredPointCloud();

          ed.WriteMessage(
            "Extracted mesh data: {0} vertices.\n",
            pts.Count
          );

          System.Windows.Forms.Application.DoEvents();

          kj.WriteAndImportPointCloud(doc, pts);
        }
        catch (System.Exception ex)
        {
          ed.WriteMessage("\nException: {0}", ex.Message);
        }

        kj.StopSensor();
      }
    }
  }
}

Despite some of the issues – which relate mainly to the fact that we're trying to extract 3D data in real time from the Kinect Fusion runtime – hopefully you can see that this is very interesting technology. If you have a Kinect for Windows sensor, you can also use the Kinect Explorer sample from the KfW SDK to create a mesh (an .OBJ or .STL file) that you can then import into the 3D software of your choice. Very cool.

19 responses to “Kinect Fusion inside AutoCAD”

  1. Nice work Kean.
    I wonder, if a person with geek skills were to get a Microsoft Surface Pro with the hardware maxed out as much as possible, whether one might be able to mash up AutoCAD and Kinect Fusion into a mobile point cloud capturing system. It would be much cheaper than most solutions, and perhaps have a higher geek factor. Granted, the point clouds might not be as dense and accurate, but for spatial capturing, internals, and energy modeling it might just be enough.

    Cheers,
    Shaan

  2. Thanks for stopping by, Shaan!

    To do real-time capture with Kinect Fusion you need a fairly well-spec'ed system with a beefy GPU. You can drop back down to CPU-only mode if implementing a system that doesn't need to be real-time capable (e.g. a 3D photo-booth).

    I'm not at all sure how that fits in with the Surface Pro's specs, I have to admit (I don't know enough about it). If you have one to send me, I'll give it a try! 😉

    Cheers,

    Kean

  3. Shaan/Kean -

    I have to admit this is amazing. It would work great in conjunction with our Faro scanner :) – talk about detailed areas. Personally I do have a Surface Pro, but I highly doubt the system would be able to handle it with only a Core i5 and the integrated Intel graphics; however, with a decent laptop with a beefed-up mobile GPU (e.g. an NVIDIA Quadro), I don't see why this wouldn't work as a "compact" mobile real-time scanning solution. Maybe I will pull out my Kinect and make an attempt, as I do want to see what my Pro is really capable of doing.

  4. Interesting - please keep us posted on anything you find, Roland.

    Kean

  5. Will do... Going to be a good project for me this weekend.

  6. Kean, I am not quite understanding what its doing.
    How does the Kinect sensor know its location? It must, so when you move it, the points are correctly "georeferenced".
    Maybe your new tagline should be "I'm bringin Geeky back." Amazing connections you are doing.

  7. The sensor does have an accelerometer, which I assume the Kinect Fusion runtime uses to help it decide how the sensor is being moved (along with the "sensed" data itself, which you can also use to calculate the new camera position).

    But yes - it's indeed all very clever. 🙂

    Kean

  8. Hi, This example is with Kinect for Windows or Kinect Xbox?

  9. I *think* you can still use Kinect for Xbox 360 with the Kinect SDK (and Kinect Fusion). I myself used Kinect for Windows.

    Kean

  10. Hi Kean,

    Nice work indeed, a very good example of the Kinect at its best. Thank you for your replies. They were very helpful.
    I have another question: when I try to call the KINFUS command in AutoCAD, after I enter the resolution sampling, I get the following error:
    Invalid operation: Unable to invoke library 'KinectFusion170_64.dll', and I can't figure out what to do or where to put the KinectFusion170_64 DLL.
    I appreciate your commitment to this project and to this website. It's a very good information source for us.

    Thanks,
    Alex

  11. Hi Alex,

    Thanks for the kind words.

    I copied the DLL from "C:\Program Files\Microsoft SDKs\Kinect\Developer Toolkit v1.7.0\Redist\amd64" into the same folder as the DLL you NETLOAD into AutoCAD (as far as I recall).

    I hope this helps,

    Kean

  12. Hi Kean
    Thanks for your helpful work (Y)
    I am working on kinectFusion C++ and I would like to get the mesh object then use getVertices() method

    Could you please tell me what are the methods that I should call to obtain the mesh object?

    The steps that I have followed:
    - Initialize Kinect
    - Initialize KinectFusion
    - Call calculateMesh()

    Is there anything that I have to do before/after calling calculateMesh() ?

    Thanks,

  13. Hi Mariam,

    It depends what you mean by "Initialize KinectFusion". You need to create a reconstruction volume - as you can see in StartSensor(), above - and then this will get populated.

    Regards,

    Kean

  14. Yes, exactly – that's what I mean.
    The problem is that calling getVertices() after creating the mesh fails, for an unknown reason.

    Thank you anyway 🙂

  15. Hello,

    In my experience, all the math needed to successfully recreate the 3D mesh takes way too long on an ordinary home computer. You can dramatically decrease the number of frames shot with the Kinect – 9-12 should be enough – then use the GPU to process the data roughly 10-30 times faster, but you will still end up with hours of estimation and error correction. The best way is still to run that kind of computation in the cloud. The only real solution would be if the Kinect magically spat out rotation and translation values; I guess that kind of camera is coming soon – maybe somebody at Microsoft is already working on it.

  16. Hi Kean,
    Newbie here. From the Kinect Fusion videos I've watched, their reconstruction algorithms work well. I thought a point cloud was an intermediate step on the way to a reconstruction. If that is the case, why request the point cloud from the reconstruction, rather than the reconstructed mesh? Does AutoCAD for some reason need the point clouds (a.k.a. vertices, a.k.a. voxels) instead of a mesh (a.k.a. faces, polygons or surfaces)? I must be missing something.

    By the way, youtube.com/watch?v=zzb_RQWrt6I is a good video which explains the reconstruction algo, which Microsoft graciously published.

    -Robert

  17. Hi Robert,

    Simple: AutoCAD deals less well with multi-million vertex meshes than it does with point clouds. And (secondarily) the tools for working with point clouds inside AutoCAD are improving steadily, release on release.

    Thanks for the link!

    Kean

  18. Hi Kean,
    First of all I want to thank you for this great project. I have tried to get it working on AutoCAD 2016 with the v1 Kinect, but have failed after many hours. Do you think it's possible to get your version 1.8 working on this newer system? I'm sorry for reopening this old post.

    Thank you and best regards

    Sönke

    1. Hi Sönke,

      The big problem will be bringing the point clouds in... when POINTCLOUDINDEX was dropped from AutoCAD - with the shift from PCG to RCP files - there wasn't a viable way to import XYZ points (from a text file) and attach the file inside AutoCAD.

      There was some discussion of the ReCap team providing a tool for creating RCP files from other formats... you may want to check in on the discussion groups to see what you can find out.

      Good luck!

      Kean
