I’ve written a modification to Christoph Hausner’s SynthExport that will export the camera positions of your PhotoSynth, and create projection maps of the original images to retexture your model in high-res. Let me know how you find it!
Edit 1: I’ve just changed the way the composite materials chain; they now cascade which makes everything a little more straightforward.
Edit 2: Version 1.1 now works out the 35mm focal length for you, so the process is entirely automatic.
Edit 3: Version 1.2 rotates portrait-format images within the 3DS Max material, so you do not need to rotate the actual image file.
I was excited to see Greg Downing’s post regarding camera projection mapping from PhotoSynth, but a little disappointed there was no working system made available. I’ve put something together on the back of Christoph Hausner’s SynthExport, that will recreate camera positions and set up camera projection maps with assigned images in 3DS Max. This makes it a fairly painless process to get high-res textured models.
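For the curious, the 35mm focal-length calculation mentioned in Edit 2 can be sketched roughly as below. This is a minimal illustration, not SynthExport's actual code: I'm assuming the focal length is available in pixel units, and the function name and parameters are mine.

```python
def focal_length_35mm(focal_px, image_width_px, image_height_px):
    """35mm-equivalent focal length for a landscape- or portrait-format image.

    A full-frame (35mm) sensor measures 36mm along its long edge, so we
    scale the pixel focal length by 36 / (long edge in pixels).
    """
    long_edge = max(image_width_px, image_height_px)
    return focal_px * 36.0 / long_edge

# e.g. a 4000x3000 image with a ~3200px focal length:
print(focal_length_35mm(3200, 4000, 3000))  # 28.8
```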
As a continuation of the training material I authored for the Photogrammetry and Augmented Reality Workshop, here is a guide to using the PhotoSynth Toolkit for reconstructing 3D models from images or video. It’s much more straightforward than the Bundler method, but less powerful.
This package is now very old (things have moved really fast!). I recommend using Changchang Wu’s VisualSFM, which works splendidly and continues to incorporate new research.
Last week I held a workshop on Photogrammetry and Augmented Reality. As preparation for the workshop, I took the fantastic BundlerMatcher tool by Henri Astre and packaged it with a few tools to make it as simple as possible to use.
Friend, artist, and aspiring architect Melody Williams has just installed these chalk boxes in Camperdown Memorial Park. See if you can find them and have a draw.
The park is located in the centre of Newtown, Sydney’s hip, alternative suburb. A long wall down the length of it forms a great canvas for graffiti, and it is constantly being reworked. I’ve made a PhotoSynth of the wall that lets you navigate along it and zoom in to areas of interest. As well as the images I’ve taken, there are a handful of Creative Commons Flickr images, which let you see how the graffiti has changed in the last few weeks. I’ll go back in a few weeks and add more images to the set, and hopefully capture some changed sections and overwritten marks.
View the PhotoSynth to navigate through the park (you will need to download the Silverlight plugin the first time).
In a similar vein to the image-matching against point-clouds that was demonstrated in the Bing Maps TED talk, PhotoCity is aiming to progressively construct a 3D scan of the world. To this end they have released a free iPhone app that displays current scans and encourages users to compete to fill in gaps in them. Currently only a limited number of locations are available to add to, but you may start your own “seeds”, which create new areas to improve. I’m thinking of committing to a University of New South Wales seed – any helpers?
Edit: Still waiting for my building seed to be processed from this morning. The project is very new, so hopefully the delay is because they have a backlog, and not because they have abandoned processing new locations altogether.
Posting a little late, via Beyond the Beyond: a Museum of London Augmented Reality iPhone app that gives you access to a library of historical, geotagged images. While the screenshots have been meticulously constructed to be perfectly registered, having seen the level of precision mobile AR is currently capable of, this is going to take some work to reproduce. The results look a lot like a PhotoSynth match-up of two images, which is something of a foreshadowing of the near future, when these geotagged images will be more intelligently plugged into a point-cloud of the persistent architecture (the corners and cornices of those old buildings that haven’t changed for decades).
For the moment it’s a reminder of the growing body of located media, and of the ongoing digitisation of our world that is making formerly fuzzy content machine-legible.
I’ve discussed (with examples of a reconstruction from Newcastle) some of the technologies for doing 3D reconstruction. PhotoSynth is by far the easiest way to get into this, but it is not as powerful as some other options.
For example, PMVS2 allows you to reconstruct a detailed 3D point-cloud, taking as its input a collection of images together with each camera’s position and intrinsic parameters.
Edit: The focal length is now being calculated correctly, but there is still a problem with the data: it is not processing correctly. The rotation matrix calculations come from a great matrix calculator and seem to be working fine, and the translations are brought over directly from PhotoSynth. I don’t have a good way to plot these points myself, but I might ask a colleague to give it a try for me. Obviously I’m keen to get this up and running ASAP, so please comment with feedback and revision suggestions!
Edit: I am working on getting the correct format for piping PhotoSynths to PMVS.
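To give a sense of the target format: as I understand the PMVS2 distribution, it expects one text file per image whose first line is "CONTOUR", followed by the rows of a 3x4 projection matrix P = K [R | t]. Here is a minimal sketch of building and writing that matrix; the function names and example intrinsics are mine, and you should check them against the PMVS2 documentation before relying on this.

```python
import numpy as np

def pmvs_projection(K, R, t):
    """Build the 3x4 projection matrix P = K [R | t] from intrinsics K,
    rotation R, and translation t (all as numpy arrays)."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def write_pmvs_camera(path, P):
    """Write one camera file in the format I believe PMVS2 expects:
    a "CONTOUR" header line, then the three rows of P."""
    with open(path, "w") as f:
        f.write("CONTOUR\n")
        for row in P:
            f.write(" ".join(f"{v:.8f}" for v in row) + "\n")

# Illustrative values only: focal length 1000px, principal point (320, 240),
# camera at the origin looking down +z.
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
P = pmvs_projection(K, R, t)
```

With this identity pose, a world point on the optical axis at (0, 0, 1) projects to the principal point (320, 240), which is a quick sanity check for the matrix.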
This is so close to what will be a Singularity in mapping and crowd-sourced/community image and video sites. These sites already offer geo-tagging, but when the maps we are using actually understand the 3D composition of locations, matching up images and reconciling them with the current model will start happening.
Bing Maps seems to be at the stage (courtesy of the PhotoSynth tech) where they have a sparse point-cloud representation of New York to which they can match the Flickr images. Sadly the experience demoed in the presentation is far removed from my own exploration of Bing Maps 3D: the 3D models are untextured and hand-created, and there is as yet no street-view facility.