Updated AutoCAD + Kinect for Windows samples

Thanks to some recent coverage on a Channel 9 blog (which I consider a great honour – I've been a huge fan of Channel 9 since its inception :-), I decided to get around to posting an update to the AutoCAD + Kinect samples I demonstrated at AU 2011.

While attending the recent hackathon, I spent a fair amount of time porting my AutoCAD-Kinect integration samples from Beta 2 of the Microsoft Kinect SDK (for the Kinect for Xbox 360 sensor) to the released Kinect for Windows SDK.

It was pretty impressive just how many of the Kinect APIs were broken between the Beta 2 SDK and the final release (you can hear some additional commentary on this in this .NET Rocks episode). I understand the desire to get the API in shape prior to release – you really want to minimise any messy, legacy dependencies – but the full extent of the changes certainly took me by surprise. The good news is that a number of APIs now make point cloud creation simpler, for instance, so I was able to pull some of the ickier code out of the samples.
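For anyone curious what the point-cloud step involves under the hood, it boils down to back-projecting each depth pixel through a pinhole camera model. The sketch below is a generic illustration in Python (the actual samples are C#/.NET and use the SDK's own mapping calls); the focal-length and principal-point values are illustrative placeholders, not calibrated Kinect intrinsics.

```python
# Generic depth-buffer -> point-cloud back-projection (pinhole model).
# Not the Kinect SDK API - just the underlying geometry, for illustration.

def depth_to_points(depth_mm, width, height, fx=580.0, fy=580.0,
                    cx=None, cy=None):
    """Convert a row-major depth buffer (millimetres) to XYZ points in metres."""
    cx = (width - 1) / 2.0 if cx is None else cx
    cy = (height - 1) / 2.0 if cy is None else cy
    points = []
    for v in range(height):
        for u in range(width):
            z = depth_mm[v * width + u] / 1000.0  # mm -> metres
            if z <= 0.0:
                continue  # no depth reading at this pixel
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A 2x2 "depth image" with one missing reading yields three valid points:
pts = depth_to_points([1000, 0, 2000, 1500], width=2, height=2)
print(len(pts))  # 3
```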

Anyway, here are the updated samples. If you'd like to diff the changes with the original version for the Beta 2 SDK, here they are, too, although be warned that I did take the opportunity to do some refactoring, particularly with respect to the speech recognition capability (I've now added some events to allow derived classes to add words to be recognised – previously it was all munged into the base KinectJig class).
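The refactoring mentioned above – a base class raising an event so that derived classes can contribute their own words to the speech recogniser's vocabulary – can be sketched roughly as follows. This is Python purely to illustrate the pattern (the actual samples are C#/.NET), and the class and method names here are hypothetical, not the real KinectJig API.

```python
# Sketch of an event/hook pattern: the base jig owns the vocabulary
# and fires an event while building it, so derived jigs can add words.
# Names are illustrative - this is not the actual KinectJig code.

class KinectJig:
    """Base class owning the speech-recognition vocabulary."""

    def __init__(self):
        self._vocabulary = ["cancel", "finish"]  # words common to all jigs
        self._words_requested_handlers = []

    def on_words_requested(self, handler):
        """Register a handler invoked while the vocabulary is built."""
        self._words_requested_handlers.append(handler)

    def build_vocabulary(self):
        """Fire the event, letting derived classes append their words."""
        words = list(self._vocabulary)
        for handler in self._words_requested_handlers:
            handler(words)
        return words


class SplineJig(KinectJig):
    """Derived jig adding the words needed for spline digitising."""

    def __init__(self):
        super().__init__()
        self.on_words_requested(self._add_spline_words)

    def _add_spline_words(self, words):
        words.extend(["start", "point", "end"])


jig = SplineJig()
print(jig.build_vocabulary())
# ['cancel', 'finish', 'start', 'point', 'end']
```

The benefit over the original design is that the base class no longer needs to know every derived class's command words up front.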

Point cloud in AutoCAD via Kinect

Aside from the complexity of the migration (which was thankfully minimised due to the fact I'd previously implemented a base class with much of the needed functionality), there were a few behaviours that still seem a little quirky: once started, the sensor now never seems to turn off, even after it has been stopped. You can unplug it, of course, but that doesn't seem quite right. And having the red light stay on makes me feel like I'm being watched – it reminds me of the glowing red eyes in Terminator 2 (gulp).

Also, at first the speech recognition seemed flakier, but that no longer seems to be the case: I've been working on a command to let you digitise a spline by saying voice commands like "start", "point" and "end", but I didn't feel comfortable getting this working well at the hackathon (there's nothing more disturbing than someone waving their hands and shouting at their computer when you're trying to solve a tricky coding problem :-). Now that I've had some time in a more private coding environment, things seem to be working well enough.
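The voice-driven spline command described above amounts to a small state machine keyed off the recognised words. Here's a hypothetical sketch of that control flow – Python for illustration only; the real command is C#/.NET, and the points would come from the hand position tracked by the sensor rather than being passed in directly.

```python
# Hypothetical control flow for a voice-driven spline command:
# "start" begins collecting fit points, "point" records one at the
# current (tracked) hand position, "end" finishes the spline.

class SplineVoiceCommand:
    def __init__(self):
        self.collecting = False
        self.points = []
        self.done = False

    def on_word(self, word, hand_position=None):
        """Handle one recognised word from the speech engine."""
        if word == "start":
            self.collecting = True
            self.points = []
        elif word == "point" and self.collecting:
            # In the real command this would be the skeleton-tracked
            # hand position reported by the Kinect sensor.
            self.points.append(hand_position)
        elif word == "end":
            self.collecting = False
            self.done = True


cmd = SplineVoiceCommand()
cmd.on_word("start")
cmd.on_word("point", (0.0, 0.0, 0.0))
cmd.on_word("point", (1.0, 2.0, 0.5))
cmd.on_word("end")
print(len(cmd.points))  # 2
```

Note that "point" is ignored unless "start" has been heard, which keeps stray recognitions from polluting the spline.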

I did follow the updated Kinect audio samples that implement a 4-second delay for speech recognition to initialise properly, so perhaps that helped, too.

I'll put the finishing touches on the spline creation sample – which I see as potentially handy when digitising real-world objects – and get that posted sometime in the coming weeks.

9 responses to “Updated AutoCAD + Kinect for Windows samples”

  1. Interesting how a company like Microsoft can spend tens or hundreds of millions on R&D, and end up having this happen:

    cnettv.cnet.com/control-your-desktop-wave-your-hand/9742-1_53-50124905.html

  2. Kean Walmsley

    Hi Tony,

    Yes - I got sent these two articles on Leap this morning, so it's clearly creating some buzz:

    Leap Motion 3D: hands-free motion control, unbound
    Must-See Video: Gesture Control Accuracy Takes a Huge "Leap" Forward

    I expect Microsoft will end up shipping something to support finger-detection at some point (whether or not it needs a new sensor remains to be seen) - and apparently the 1.5 version of their Kinect SDK is due next month, so who knows what that will contain - but it's clear they haven't been quick to capitalise on the desktop control opportunity.

    It'll be interesting to see whether Leap lives up to its promise (the demo video looks very cool). I do wish the articles discussing the topic didn't always take such a critical view of Microsoft (to the point of being factually incorrect, in CNET's case), but I suppose that's the way of things.

    Kean

  3. Kean Walmsley

    It looks like it got released sooner than I'd heard/expected:

    Kinect for Windows: SDK and Runtime version 1.5 Released

    Enhancements include facial and seated skeletal tracking (but still no finger tracking).

    It seems Leap Motion won't be shipping anything until the end of the year, so we'll see if MS can get their act together in the meantime.

    Kean

  4. I wouldn't put my money on MS and Kinect for PCs here, because it's not really about the device or the fact that it's 100x more accurate than anything else out there right now – it's mostly their software that makes it happen. From what I've seen and read, Leap is years ahead of Microsoft and everyone else, for that matter.

    Ignoring the $70 price tag for a moment and considering only what that device is capable of (with its supporting software) really puts it in a different class than Kinect and other gaming-oriented motion-sensing systems.

    They also didn't take a proprietary route like Microsoft did at first with Kinect, and they now have the backing of 20,000 developers, and seem to understand that developer support is the key.

  5. Kean Walmsley

    Indeed. Ultimately Kinect wasn't targeted at close, desktop-oriented work (which really seems a sweet spot for depth-sensing technology), and it doesn't look like they're shifting gears quickly enough to address the need.

    I'd be very happy to see a player such as Leap deliver what's needed for short-range NUI development, provided they manage to place adequate focus on building and servicing the ecosystem.

    They've pulled off a great launch - it'll be interesting to see how they manage things from here.

    Kean

  6. I just wish someone had told them before they made their presentation that, purely out of coincidence, their device just so happens to be suitable for use as a personal 3D scanner/digitizer.

    I mean, it was tracking a pair of chopsticks in real time.

  7. Kean Walmsley

    Well, maybe. From what I understand it's unlikely to be as good with arbitrary objects (there's some kind of reference model being matched). Although that doesn't explain the chopsticks, of course, unless there's a model of those being matched for the demo.

    Either way the tech is very cool - it just remains to be seen how we'll all be able to use it.

    Kean

  8. Hi Kean, I really like this project. Could you please explain to me how to run this Kinect command in my AutoCAD, too? Thank you.

    1. Kean Walmsley

      Hi Rudy,

      As explained by email, I'm not in a position to help you through this. It's a .NET project that would need building with Visual Studio, and may well have migration issues to deal with for recent releases.

      Please post to the AutoCAD .NET forum if you need help.

      Regards,

      Kean
