As I mentioned in the last post, I've been working on an application that brings across point clouds from Photosynth into AutoCAD.

Before we get into the details, I'd like to lay some of the groundwork for this series of posts by talking a little about the bigger picture: "reality". How about that for a bigger picture? It doesn't get much bigger than that, unless you're working on the LHC at CERN. 😉

Reality is increasingly being captured in digital form (Google StreetView, Bing Maps, Photosynth) and augmented: just look at the cool iPhone apps available in this area, such as Layar. And that's really just the beginning: with wearable computing such as the SixthSense project and other advances, the future is looking increasingly colourful. :-)

I see this area – which in many ways is just part of Human-Computer Interaction, as we break down the barriers between us and the computers we use – as being of definite relevance to people working in the design industry. Imagine engineers performing inspections of built designs, capturing the current state, performing analysis and suggesting possible enhancements – all while looking at the object and seeing the results overlaid onto their field of vision. It sounds like science fiction (and yes, I do read a lot of sci-fi), but it's no longer a quantum leap away.

Capturing reality is a big part of this, and it's an area that's becoming more and more accessible. On the one hand, 3D scanning technologies such as LIDAR are becoming increasingly affordable, with various types of device creating point clouds that represent 3D models at an appropriate scale, depending on your needs. Then there's Project Natal, which is going to enable gaming without any sort of controller by using "reality capture" to analyse movements and gestures. And then there are tools such as Photosynth, which take it down (or is that up?) a level and allow you to capture reality in 3D (and glorious technicolor) from a set of 2D photos.

Photosynth has some really clever image analysis algorithms (I admit this video is now a little old: now that Photosynth is a pure web service, some of the details may have changed… I also suggest checking out Blaise Aguera y Arcas' TED talks) that determine, based on the shared locations of points in photographs taken from multiple angles, where a common point sits in 3D space. The more pictures you provide, the better the chances of points being cross-referenced and added to the point cloud. A Photosynth that is "100% synthy" has all of its pictures related to one another in some way, but that doesn't necessarily mean a very large, granular point cloud. While AutoCAD is now capable of dealing with point clouds of up to around 2 billion points, the Photosynth point clouds I've seen so far are mostly smaller than 200,000 points, which makes sense, as they're primarily there to correlate and provide access to sets of photographs.
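The core geometric step here (estimating where a point sits in 3D from its 2D locations in several photos) is classic multi-view triangulation. Below is a minimal two-view sketch in Python using the standard linear (DLT) method; to be clear, this is not Photosynth's actual code, and the toy camera matrices and helper names are my own for illustration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image matches."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A,
    # i.e. the singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenise

def project(P, X):
    """Project a 3D point through a camera matrix to (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one shifted along the X axis
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.25, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)
```

With real photos the projection matrices aren't known in advance, of course – recovering them (and the matched points) from the images themselves is the hard part, and that's where Photosynth's cleverness lies.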

Even so, the point clouds that are available in Photosynth provide some very interesting possibilities: even if they're not incredibly detailed or dense, it's certainly possible to capture certain types of design using such a technique, simply by uploading a set of photos. The cost of digitising a real-world object into a 3D model is suddenly reduced to the cost of a digital SLR (and the same technique could very well be applied to frames from a digital video recording).

In the next post on this topic we'll start looking at the implementation of the Photosynth import application itself.

12 responses to “Reality check”

  1. Hi Kean,

Very interesting topic, as usual ;). Can't wait to see the implementation, and the possibilities this opens up.

Fernando Malard

    Kean,

    Did you see this?

    TechFest 2010: Microsoft ICE - Image Composite Editor ch9.ms/BFV8

    Cheers!

Fernando Malard

    ...and it is available for download here:

    research.microsoft.com/en-us/um/redmond/groups/ivm/ice/

    How cool is that?

This does sound really interesting - I wasn't aware that Photosynth could export the point clouds users create; is this something you'll be covering as a foreword to the code you've written? (I would guess not, but perhaps you have a link to how to do it that you can share - or should I just RTFM on Photosynth?)

  5. Very cool - thanks, Fernando.

    I'd seen the panoramas created using ICE appear on Photosynth.net - it's good to know where they're from and how they're created.

While the demos they showed in this clip creating clean 2D images from video are cool (as well as the other stuff, of course), I'm overall more interested in the creation of 3D content from 2D (which just feels like a bigger problem, and one with much greater consequences for our industry).

    I'm really looking forward to seeing point clouds created from video footage, at some point.

    Kean

  6. Yes, I'll certainly be providing links to all the resources I found on this in a foreword to my post (even if I don't go through the format in detail).

    Kean

That's excellent news - I can see this approach being really useful for those times when you can't get hold of the original plans for something (and don't want to start the model from scratch).

Simply take a bunch of "synthy" photos of whatever it is you're trying to model, add them to Photosynth, download the point cloud, run it through your code and, voilà, you have a 3D model to work on..?

Fernando Malard

    Kean,

Is the point cloud posted along with the pictures created by the user, or does the website use some tool to analyse the images and then generate an estimated point cloud from them?

I remember the AutoCAD 2011 demo with a point cloud of the Mandalay Hotel made by a 3D laser scanner. In that case, it was really rich in detail and in the number of points. What I think is tricky is how to determine depth distances from a single viewpoint, the camera position.

    Very interesting subject anyway... 🙂

Fernando Malard

    Alex,

    The "Voila" effect is what I'm trying to understand better. It is not that clear how the point cloud will be generated from the image set. I think it is not that simple as we would wish.

  10. Fernando,

From a set of pictures taken along the same line (like driving past a building) you'd have a hard time determining depth, but if you take 10 pictures around an object from all angles, I think you should be able to determine the depth of the object, wouldn't you agree?
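[Editor's note: the intuition in this comment matches the classic stereo relation, where depth is focal length times camera baseline divided by the point's apparent shift (disparity) between two photos. A tiny sketch with made-up numbers:]

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: depth Z = f * b / d.
    focal_px: focal length in pixels; baseline_m: camera separation in metres;
    disparity_px: the point's shift between the two images, in pixels."""
    return focal_px * baseline_m / disparity_px

# A point 10 m away, seen by cameras 0.5 m apart with a 1000 px focal
# length, shifts by f * b / Z = 1000 * 0.5 / 10 = 50 px between photos.
print(depth_from_disparity(1000.0, 0.5, 50.0))  # 10.0
```

[The larger the baseline between shots, the bigger the disparity for a given depth, which is why photos taken from well-separated angles give much better depth estimates than photos taken along a single line.]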

  11. Thanks Victor, you said it better than I could - I didn't mean to imply that "we" would be creating the point cloud ourselves. Some very clever people at Microsoft have already done that bit for us.

    I've been tinkering with Photosynth myself this evening, and managed this from some old holiday snaps.

    Granted, the point cloud on this isn't that great, but it does give a good impression of the mountains we were looking at, and it gives you an idea of what's achievable with a little more deliberate thought about the subject of the pictures.

[QUOTE]
I'm really looking forward to seeing point clouds created from video footage, at some point.

Kean
[/QUOTE]

    http://www.photomodeler.com/products/pm-scanner.htm

[QUOTE]
PhotoModeler Scanner is a 3d scanner that provides results similar to a 3d laser scanner. This 3d scanning process produces a dense point cloud from photographs of textured surfaces of virtually any size. To compare PhotoModeler Scanner and other 3d scanning technologies read our white paper: "A New Way to 3D Scan. (http://www.photomodeler.com/downloads/wp_mdownload.htm)"
[/QUOTE]

    Applications and Examples: http://www.photomodeler.com/products/scanner/applications.htm
