Integrating Leap Motion and AutoCAD: Review

It's been a fun week of blogging about the Leap Motion controller, but now it's finally time to Leap to a conclusion (groan).

Now that's a Leap

We started the week by introducing the device and we then used it to navigate models, interact with AutoCAD commands and draw 3D geometry. Today's post is an attempt to summarise the impressions I've formed from working with the Leap Motion controller, and take a shot at predicting (no doubt inaccurately) where it's going to make most impact.

Let me start by saying that the Leap Motion controller does what it says on the tin. It's extremely responsive – even when used in "low power" mode in a virtualized OS environment – and highly accurate. I'd say it really does have the potential to revolutionise how we interact with computers on a casual basis or in a shared environment.

My feeling is that in the design space the controller will work really well for collaborative, virtual walkthroughs – I made a similar case for Kinect, for instance – and for applications that require more art than precision (sculpting apps such as Mudbox spring to mind).

There's a time and a place for any technology, though: one of the drawbacks I've found from using the device at my desktop over an extended period is that it tires your hands and arms very quickly (I'd hate to be a demo jock for Leap Motion, although on the positive side I'd probably end up with forearms like Popeye ;-). In fact, after half a day I even started to feel twinges of carpal tunnel coming on (oh, the irony).

Part of the issue I've found stems from the distance you need to maintain from the device for your hands and fingers to be detected: they really have to hover some distance from it. So you can't really lean your wrist on some kind of support and make subtle finger movements. And the lack of tactile feedback doesn't help, either: it's surprising how much difference this makes when attempting to control objects in a virtual space. All of which means it's fine for casual use but very different if you're modeling for 30-50 hours a week.

Don't get me wrong, though: I think this is a very interesting piece of technology that we'll hear a great deal about during the course of the next few years (we're starting to see announcements around funding, technology partnerships and their planned retail channel, for instance). Leap Motion has managed to get a lot of people very excited about their technology – especially developers, which will clearly be key to the product's success.

Beyond that, Leap Motion and other gesture-control systems will continue their march through the Gartner hype cycle: we're apparently still 2-5 years away from the technology's "plateau of productivity", which doesn't strike me as very long, on balance.

I expect to see some interesting hybrid environments emerge – where you use the Leap Motion controller alongside your existing mouse & keyboard – as well as some specific cases where the device can streamline an existing, more complex input process.

From my side, I'd still like to pursue some additional experiments such as implementing the ability to "pick up" and manipulate AutoCAD objects in 3D space (which would be pretty tricky to get working well, I think). I'll certainly continue to report back to readers of this blog as I continue my investigations.
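To make the "pick up and manipulate" idea concrete, here is a minimal sketch of the kind of pinch-to-grab state machine such an integration would need: detect a pinch, latch onto the object, and translate it by the palm's frame-to-frame displacement until the pinch is released. All names are hypothetical and there are no SDK dependencies; in a real build the per-frame pinch strength and palm position would come from the Leap SDK, and the translation would be applied to an AutoCAD entity.

```python
# Hypothetical sketch: pinch-to-grab with hysteresis, mapping palm
# displacement to object translation. Thresholds are illustrative.
PINCH_GRAB = 0.8     # pinch strength above which we "pick up" the object
PINCH_RELEASE = 0.5  # hysteresis: drop the object only below this

class GrabController:
    """Tracks pinch state across frames and accumulates object translation."""

    def __init__(self):
        self.holding = False
        self.last_palm = None
        self.object_pos = [0.0, 0.0, 0.0]

    def update(self, pinch_strength, palm_pos):
        """Feed one frame of hand data; returns the object's current position."""
        if not self.holding:
            if pinch_strength >= PINCH_GRAB:
                # Pinch started: latch onto the object at the current palm position
                self.holding = True
                self.last_palm = palm_pos
        elif pinch_strength <= PINCH_RELEASE:
            # Pinch ended: release the object where it is
            self.holding = False
            self.last_palm = None
        else:
            # Still pinching: move the object by the palm's displacement
            delta = [p - q for p, q in zip(palm_pos, self.last_palm)]
            self.object_pos = [o + d for o, d in zip(self.object_pos, delta)]
            self.last_palm = palm_pos
        return self.object_pos
```

The hysteresis gap between the two thresholds is the part that matters for usability: without it, a pinch hovering near a single threshold would make the object flicker between grabbed and dropped.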

In the meantime – what do you think? How do you foresee the technology being used successfully, whether in the design industry or elsewhere?

photo credit: Better Than Bacon via photopin cc

7 responses to “Integrating Leap Motion and AutoCAD: Review”

  1. Constantin Gherasim

    Hello Kean,

    I think one approach that would solve the hand-fatigue issue would be to suspend the device somehow, facing the desktop surface, perhaps combined with a second device placed sideways, perpendicular to the desktop surface and facing the user's hands (the optimal vertical position for this one remains to be determined).

    I believe that this kind of setup would enable the current "modus operandi" combined with the ability to read the user's hand and finger gestures on the desktop surface, which would thereby become a kind of larger "touch pad".

    By alternating spatial gestures with desktop-surface gestures (which would allow the user to rest their hands and wrists), the overall interaction would become less tiring while still providing a lot of flexibility.

    And no, I'm not claiming royalties for this idea (joking aside, I'm convinced the people at Leap Motion have already thought about something similar).

    Regards,

    Constantin

  2. Hi Kean,

    I think this kind of technology will become obsolete fairly quickly as brain-computer interfaces mature; there are actually a couple of commercial products from NeuroSky that show some potential. Another thing is the vision of a future in which people stay seated, connected to a computer with no apparent physical movement, yet still "working".

    Best Regards,
    Gaston Nunez

  3. Wait a minute: those brain devices are not reading thoughts – they're like thermometers on an engine, trying to tell what gear it's in. I hadn't seen the NeuroSky MindWave before, so thanks for mentioning it; we'll see where it goes. I keep thinking, "what is the interface we would want, if we could have anything, excluding direct, real thoughts from our brain?" We'd have to have some clue how the brain works before thought input becomes possible. Isn't it funny, though, that we don't understand our own brain... or light, gravity, mass... wow, where are we, anyway?

  4. Oh, I forgot to mention: what is the interface to? In the case of civil engineers, we don't even have decent software yet for modeling real-life things like underground utilities and roads/grading. An "ideal" interface to what we have now would not help much.

  5. Hi James,

    If we had an interface, and responsive software, with one or two orders of magnitude better interface speed (bidirectional human-PC-human) than we actually have, you would not need the best software to do the work – you would be able to process images using Paint, not Photoshop.
    As for decent civil software, I think you and your colleagues (ASCE, ICE, etc.) need to elaborate and expose your needs, and then collaborate directly with the software providers.

    Regards,

    Gaston Nunez

  6. Vincent Sta Clara

    How would it play with design review and marking up DWGs and DWFs?

  7. I would personally think that using a tool like this for markup would be a little cumbersome. Technically it could work (although it would have to be an Autodesk rather than a third-party integration), but I'm not sure it's in the technology's "sweet spot".

    That said, with the right gestures implemented perhaps it'd be useable enough. I'd expect to see the ground broken elsewhere on that, though.

    Kean
