CameraExport: PhotoSynth to Camera Projection in 3DS Max


I’ve written a modification to Christoph Hausner’s SynthExport that will export the camera positions of your PhotoSynth, and create projection maps of the original images to retexture your model in high-res. Let me know how you find it!


Download the CameraExport app here:

GitHub of the CameraExport app:


Henri Astre has also added the functionality into the latest version of the PhotoSynth toolkit.

Previous Updates:

Edit 1: I’ve just changed the way the composite materials chain; they now cascade which makes everything a little more straightforward.

Edit 2: Version 1.1 now works out the 35mm focal length for you, so the process is entirely automatic.
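For the curious, the kind of conversion Version 1.1 automates can be sketched roughly like this. This is a hedged illustration, not the app's actual code: the assumptions that the focal length arrives in pixels, and that the image's long edge maps onto the 36mm long edge of a full-frame negative, are mine.

```javascript
// Rough sketch of a 35mm-equivalent focal length calculation.
// Assumption: focalPixels is the focal length in pixel units, and the
// image's longer edge corresponds to the 36mm edge of a 35mm frame.
function focal35mm(focalPixels, imageWidth, imageHeight) {
  const longEdgePixels = Math.max(imageWidth, imageHeight);
  return (focalPixels / longEdgePixels) * 36; // result in mm
}
```

So an image 3600px wide shot with a 3600px focal length would come out as a 36mm-equivalent lens, under those assumptions.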


Edit 3: Version 1.2 rotates portrait-format images within the 3DS Max material, so you do not need to rotate the actual image file.


I was excited to see Greg Downing’s post regarding camera projection mapping from PhotoSynth, but a little disappointed there was no working system made available. I’ve put something together on the back of Christoph Hausner’s SynthExport that recreates camera positions and sets up camera projection maps with assigned images in 3DS Max. This makes it a fairly painless process to get high-res textured models.


Converting PhotoSynths to Dense Point-Clouds

I’ve discussed (with examples of a reconstruction from Newcastle) some of the technologies for doing 3D reconstruction. PhotoSynth is by far the easiest way to get into this, but it is not as powerful as some other options.

For example, PMVS2 can reconstruct a detailed 3D point-cloud; it takes as input a collection of images together with the position and intrinsic parameters of each camera.

Helpfully, the latest version of SynthExport allows you to export the camera parameters from your PhotoSynth for use in this more powerful reconstruction. I have written a JavaScript form that generates the two input files needed for the PMVS2 pipeline.
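The per-camera half of that generation step can be sketched as follows. PMVS2’s camera files consist of the word CONTOUR followed by the 3×4 projection matrix P = K [R | t]; the matrix helpers below are my own illustration under that assumption, not code from the form itself.

```javascript
// Multiply an (m x n) matrix by an (n x p) matrix.
function matMul(A, B) {
  return A.map(row =>
    B[0].map((_, j) => row.reduce((sum, a, k) => sum + a * B[k][j], 0))
  );
}

// Build the 3x4 projection matrix P = K [R | t] from a focal length in
// pixels (f), principal point (cx, cy), 3x3 rotation R and translation t.
function projectionMatrix(f, cx, cy, R, t) {
  const K = [[f, 0, cx], [0, f, cy], [0, 0, 1]];
  const Rt = R.map((row, i) => [...row, t[i]]); // [R | t] as 3x4
  return matMul(K, Rt);
}

// Serialise one camera in PMVS2's expected text format.
function pmvsCameraFile(P) {
  return "CONTOUR\n" + P.map(row => row.join(" ")).join("\n") + "\n";
}
```

With an identity rotation and zero translation, the projection matrix is just the intrinsics padded with a zero column, which is a handy sanity check before feeding real PhotoSynth cameras through.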

Edit: The focal length is now being calculated correctly, but there is still a problem with the data – PMVS2 is not processing it correctly. The rotation matrix calculations come from a great matrix calculator and seem to be working fine, and the translations are brought over directly from PhotoSynth. I don’t have a good way to plot these points myself, but I might ask a colleague to give it a try for me. Obviously I’m keen to get this up and running ASAP, so please comment with feedback and revision suggestions!
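For anyone wanting to check the rotation step independently: this is the standard unit-quaternion-to-rotation-matrix formula. The (w, x, y, z) argument order is an assumption about how PhotoSynth’s exported orientation is laid out, so verify against your own export before trusting it.

```javascript
// Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix,
// using the standard textbook formula.
function quatToRotation(w, x, y, z) {
  return [
    [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
    [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
    [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
  ];
}
```

The identity quaternion (1, 0, 0, 0) should give the identity matrix, and a 90° rotation about z (w = z = √2⁄2) should swap the x and y axes – two quick checks that the calculator output and this formula agree.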

Edit: I am working on getting the correct format for piping PhotoSynths to PMVS.


Bing Maps connecting Flickr → Map → PhotoSynth


This is so close to what will be a Singularity in mapping and crowd-sourced/community image and video sites. These sites already offer geo-tagging, but once the maps we use actually understand the 3D composition of locations, images will start being matched up and reconciled with the current model automatically.

Bing Maps seems to be at the stage (courtesy of the PhotoSynth tech) where they have a sparse point-cloud representation of New York, to which they can match the Flickr images. Sadly the experience demoed in the presentation is quite different from my own exploration of Bing Maps 3D: the 3D models are untextured and hand-created, and there is as yet no street-view facility.

Camperdown Graffiti Wall deep zoom

I’ve made a “deep zoom collection” of the Camperdown Park graffiti wall, so that you can zoom in and check out the whole west-facing wall in detail. This was made by using PhotoSynth to reconstruct the wall, then MeshLab to reconstruct a model, then the CameraExport app (which I just made) to retexture the model in high-res in 3DS Max.

I can’t embed it, so you’ll have to follow the link to the separate page. It requires the Silverlight plugin, and the wall starts at the very top of the box. You can go fullscreen.

If your browser setup is not compatible with the Silverlight plugin, you can alternatively view a reduced image of the wall reconstruction on Flickr.

Bundler Photogrammetry Package

This package is now very old (things have moved really fast!). I recommend using Changchang Wu’s VisualSFM, which works splendidly and continues to incorporate new research.


Last week I held a workshop on Photogrammetry and Augmented Reality. As preparation for the workshop, I took the fantastic BundlerMatcher tool by Henri Astre and packaged it with a few tools to make it as simple as possible to use.

The Bundler Photogrammetry Package can be downloaded here, and a guide to using it is available here.

Graffiti and Games

Friend, artist, and aspiring architect Melody Williams has just installed these chalk boxes in Camperdown Memorial Park. See if you can find them and have a draw.

The park is located in the centre of Newtown, Sydney’s hip, alternative suburb. A long wall down the length of it forms a great canvas for graffiti, and it is constantly being reworked. I’ve made a PhotoSynth of the wall that lets you navigate along it and zoom in to areas of interest. As well as the images I’ve taken, there are a handful of Creative Commons Flickr images, which let you see how the graffiti has changed over the last few weeks. I’ll go back in a few weeks and add more images to the set, and hopefully capture some changed sections and overwritten marks.

View the PhotoSynth to navigate through the park (you will need to download the Silverlight plugin the first time).

Scanning the World in 3D


In a similar vein to the image-matching against point-clouds demonstrated in the Bing Maps TED talk, PhotoCity is aiming to progressively construct a 3D scan of the world. To this end they have released a free iPhone app that displays current scans and encourages users to compete to fill in gaps in them. Currently there is a limited number of locations available to add to, but you may start your own “seeds”, which create new areas to improve. I’m thinking of committing to a University of New South Wales seed – any helpers?

Edit: Still waiting for my building seed to be processed from this morning. The project is very new, so hopefully the delay is because they have a backlog, and not because they have abandoned processing new locations altogether.


History of Space

Posting a little late, via Beyond the Beyond: a Museum of London augmented-reality iPhone app that gives you access to a library of historical, geotagged images. While the screenshots have been meticulously constructed to be perfectly registered, having seen the level of precision mobile AR currently achieves, this is going to take some work to reproduce. The results look a lot like a PhotoSynth match-up of two images, which is something of a foreshadowing of the near future, when these geotagged images will be more intelligently plugged into a point-cloud of the persistent architecture (the corners and cornices of those old buildings that haven’t changed for decades).

For the moment it’s a reminder of the growing body of located media, and of the digitisation that is making the formerly fuzzy content of our world machine-legible.