I’ve discussed (with examples of a reconstruction from Newcastle) some of the technologies for doing 3D reconstruction. PhotoSynth is by far the easiest way to get into this, but it is not as powerful as some other options.
For example, PMVS2 allows you to reconstruct a detailed 3D point cloud; it takes as its input a collection of images and parameters describing the position and intrinsics of each camera.
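For reference, the per-camera half of that input is straightforward: PMVS2 reads one small text file per image containing the word CONTOUR followed by that camera's 3x4 projection matrix. A minimal sketch in Python with NumPy (K, R, and t here are placeholder names for the intrinsic matrix, rotation, and translation):

import numpy as np

def write_pmvs_camera(path, K, R, t):
    # PMVS2 camera file: the word CONTOUR, then P = K [R | t], one row per line.
    P = K @ np.hstack([R, t.reshape(3, 1)])
    with open(path, "w") as f:
        f.write("CONTOUR\n")
        for row in P:
            f.write(" ".join("%.10e" % v for v in row) + "\n")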
Helpfully, the latest version of SynthExport allows you to export the camera parameters from your PhotoSynth for use in the more powerful reconstruction. I have written a JavaScript form that generates the two input files needed for the PMVS2 pipeline.
Edit: The focal length is now being calculated correctly, but there is still a problem: the data is not processing correctly. The rotation matrix calculations come from a great matrix calculator and seem to be working fine, and the translations are simply brought over directly from PhotoSynth. I don’t have a good way to plot these points myself, but I might ask a colleague to try it for me. Obviously I’m keen to get this up and running ASAP, so please comment with feedback and revision suggestions!
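One way to narrow down where the processing breaks, before plotting anything: check that each exported rotation is actually a proper rotation matrix. A quick sketch in Python, using Camera0's rows from the script further down:

import numpy as np

# A proper rotation satisfies R @ R.T == I (within tolerance) and det(R) == +1.
R = np.array([[0.244536772038, 0.969616206400956, 0.00679554308750407],
              [0.303616062795044, -0.0699121872420001, -0.950226063885787],
              [-0.920879500007504, 0.234428450405787, -0.311487155604]])
print(np.allclose(R @ R.T, np.eye(3), atol=1e-4))  # True if orthonormal
print(np.linalg.det(R))                            # should be close to +1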
Edit: I am working on getting the correct format for piping PhotoSynths to PMVS.
Here are the side-by-side comparisons between the PhotoSynth output for the Newcastle Freeze Mob Synth and my offline Bundler processing. PhotoSynth did better at finding camera positions, so there are more cameras in the view. (PhotoSynth is related to Bundler; they are built on the same code.) These models were produced using 3D Studio Max and two script generators: one that worked from the PhotoSynth camera parameters file, and one that processed the Bundler output. The processing for the Bundler output is guided by the Bundler documentation, though something is obviously awry.
If you review the Synth you will see the rough path that the camera takes through the scene, and the PhotoSynth output is a perfect representation of it. Note that the camera does not really rotate as it passes through; it does a smooth dolly shot across the scene.
In the image below, you can see that the cameras have twisting rotations, which is not accurate.
The Bundler output looks at least partly right because the translation row of each transform is taken straight from the bundle.out file, independent of the rotation matrix, so at least the cameras are in the correct shape, although the axes are switched.
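For reference, the Bundler documentation pins down the convention: a world point X maps into the camera frame as x = R X + t, so the camera centre in world coordinates is C = -R^T t; the translation row should not be used directly as a camera position. A quick check with the first camera record from the bundle.out file below:

import numpy as np

R = np.array([[9.9137151689e-001, -1.1993357464e-001, 5.2900408063e-002],
              [9.8726398766e-002, 9.4864141643e-001, 3.0055375763e-001],
              [-8.6230004559e-002, -2.9273776783e-001, 9.5229668990e-001]])
t = np.array([2.0104152055e-001, 1.0006100568e+000, -5.3611738918e-001])
C = -R.T @ t  # world-space camera position, per the Bundler docs
print(C)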
Here’s a comparison of results from the Kermit example images for Bundler (view PhotoSynth):
The camera positions embedded in the Bundler point-cloud above are pretty much identical to those from the PhotoSynth output below.
But I can’t get the camera positions from the Bundler output file in the correct format (and thus get the PhotoSynth output into the same format). Here’s what I get; for a start, the cameras are on a different axis and the positions are mirrored.
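Here is a sketch of how I pull the camera records out of bundle.out (to pair with the centre formula above), plus the axis flip I am currently guessing at for the mirroring; the mapping at the end is a hypothesis, not a verified conversion:

import numpy as np

def read_bundle_cameras(path, num_cameras):
    # Each camera record is five lines: <f k1 k2>, three rows of R, then t.
    with open(path) as f:
        lines = [l.split() for l in f if not l.startswith("#")]
    cams = []
    for i in range(num_cameras):  # line 0 is the "<#cameras> <#points>" header
        rec = lines[1 + 5 * i : 1 + 5 * (i + 1)]
        f_k1_k2 = [float(v) for v in rec[0]]
        R = np.array([[float(v) for v in row] for row in rec[1:4]])
        t = np.array([float(v) for v in rec[4]])
        cams.append((f_k1_k2, R, t))
    return cams

def to_max_axes(p):
    # Hypothetical fix: rotate Bundler's y-up frame into Max's z-up frame.
    return np.array([p[0], -p[2], p[1]])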
And here is the file output:
PhotoSynth camera positions
Camera0 = freecamera()
myTransform = Camera0.transform
myTransform.row1 = [0.244536772038, 0.969616206400956, 0.00679554308750407]
myTransform.row2 = [0.303616062795044, -0.0699121872420001, -0.950226063885787]
myTransform.row3 = [-0.920879500007504, 0.234428450405787, -0.311487155604]
myTransform.row4 = [0.272829, -0.460082, -0.025248]
Camera0.transform = myTransform
Camera1 = freecamera()
myTransform = Camera1.transform
myTransform.row1 = [-0.0533696273639999, 0.998427890386384, 0.0171298153411824]
myTransform.row2 = [0.376080835861616, 0.03598815695, -0.925887713201212]
myTransform.row3 = [-0.925048586709182, -0.0429720869627876, -0.37741027009]
myTransform.row4 = [0.194019, -0.0691434, 0.0326086]
Camera1.transform = myTransform
Camera2 = freecamera()
myTransform = Camera2.transform
myTransform.row1 = [-0.44396760373, 0.894615938164381, 0.0505478982798027]
myTransform.row2 = [0.271716917599619, 0.18817091587, -0.943801686325473]
myTransform.row3 = [-0.853851675367803, -0.405282653962527, -0.326624075764]
myTransform.row4 = [0.254576, 0.451339, 0.0373162]
Camera2.transform = myTransform
Camera3 = freecamera()
myTransform = Camera3.transform
myTransform.row1 = [-0.74130608605, 0.671167067552163, -0.000232849008482583]
myTransform.row2 = [0.286824737087837, 0.316485108662, -0.904195081931823]
myTransform.row3 = [-0.606792268391517, -0.670352104068177, -0.427119654888]
myTransform.row4 = [-0.169543, 0.753825, 0.0722654]
Camera3.transform = myTransform
Camera4 = freecamera()
myTransform = Camera4.transform
myTransform.row1 = [-0.923441055146, 0.383009276380708, 0.0236751320412161]
myTransform.row2 = [0.133154528755292, 0.377678021014, -0.91631281990099]
myTransform.row3 = [-0.359897887105216, -0.843008426203009, -0.399762809936]
myTransform.row4 = [-0.563822, 0.729699, -0.0342284]
Camera4.transform = myTransform
Camera5 = freecamera()
myTransform = Camera5.transform
myTransform.row1 = [0.525824342342, 0.850497633694891, 0.0127489638792393]
myTransform.row2 = [0.300989461965109, -0.17202784705, -0.937982816273996]
myTransform.row3 = [-0.795558988879239, 0.497051501273996, -0.346447254708]
myTransform.row4 = [-0.0535178, -0.85111, -0.0541857]
Camera5.transform = myTransform
Camera7 = freecamera()
myTransform = Camera7.transform
myTransform.row1 = [0.103343164028, 0.993844386470724, 0.0399189920878644]
myTransform.row2 = [0.582432980529276, -0.0279322709220002, -0.812398677641049]
myTransform.row3 = [-0.806282837247865, 0.107205987341049, -0.58173435745]
myTransform.row4 = [0.0764454, -0.254899, 0.408282]
Camera7.transform = myTransform
Camera8 = freecamera()
myTransform = Camera8.transform
myTransform.row1 = [0.00111216005600002, 0.999005160074132, 0.0445808618721977]
myTransform.row2 = [0.854213952293868, 0.02222945228, -0.519446219697305]
myTransform.row3 = [-0.519920462000198, 0.0386593015533045, -0.85333942344]
myTransform.row4 = [-0.352342, -0.224872, 0.73212]
Camera8.transform = myTransform
Camera9 = freecamera()
myTransform = Camera9.transform
myTransform.row1 = [-0.200889882868, 0.970968367563747, -0.12986024854386]
myTransform.row2 = [-0.0644564208597473, -0.145376882578, -0.987274496693728]
myTransform.row3 = [-0.97749098449614, -0.189963131166272, 0.091789890654]
myTransform.row4 = [0.170197, 0.118426, -0.569607]
Camera9.transform = myTransform
Camera10 = freecamera()
myTransform = Camera10.transform
myTransform.row1 = [-0.058238125416, 0.997169687196283, -0.0475492974175039]
myTransform.row2 = [-0.0431093717242826, -0.0500977325700001, -0.997813509259562]
myTransform.row3 = [-0.997371496894496, -0.0560609679564378, 0.04590495675]
myTransform.row4 = [0.171159, -0.193182, -0.599323]
Camera10.transform = myTransform
Bundle.out file (each camera record is one line of <f> <k1> <k2>, then three rows of R, then one row of t; cameras that Bundler failed to register are written out as zeros)
11 623
6.7483561379e+002 -9.2909176483e-002 -7.1452728557e-004
9.9137151689e-001 -1.1993357464e-001 5.2900408063e-002
9.8726398766e-002 9.4864141643e-001 3.0055375763e-001
-8.6230004559e-002 -2.9273776783e-001 9.5229668990e-001
2.0104152055e-001 1.0006100568e+000 -5.3611738918e-001
6.7665492312e+002 -8.2788477805e-002 -1.5495688596e-002
9.8107969862e-001 6.4153247757e-002 -1.8266632352e-001
-1.7897830105e-002 9.6951650853e-001 2.4437145365e-001
1.9277523862e-001 -2.3647854127e-001 9.5232116793e-001
-4.8824954210e-001 8.3078302635e-001 -3.2765590435e-001
6.9594685805e+002 1.6075856973e-002 -6.1758264844e-003
8.2472475473e-001 3.1837521750e-001 -4.6740378670e-001
-1.5041243203e-001 9.2019899304e-001 3.6139993567e-001
5.4516527699e-001 -2.2775213301e-001 8.0679847960e-001
-1.2503867966e+000 1.1229738934e+000 -1.2577283457e+000
6.8393864087e+002 -8.7992965606e-003 7.6634490784e-003
5.5296099706e-001 4.4786856695e-001 -7.0260079879e-001
-2.7796451946e-001 8.9409871323e-001 3.5117405502e-001
7.8547429089e-001 1.1125377983e-003 6.1889328693e-001
-1.9044153757e+000 1.2161906946e+000 -1.3300891219e+000
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
6.8447535552e+002 -6.4255104279e-002 -8.1737040619e-003
9.1291554876e-001 -2.8328096147e-001 2.9383175067e-001
1.9935278303e-001 9.3767914176e-001 2.8463361539e-001
-3.5615118802e-001 -2.0127027595e-001 9.1249471631e-001
1.1266368937e+000 1.0317378118e+000 -4.2850805998e-001
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
6.8222578863e+002 -9.8611796890e-002 -6.3262065149e-003
9.9916713027e-001 -9.2759089965e-003 -3.9736674407e-002
9.6681255192e-003 9.9990631492e-001 9.6896193167e-003
3.9643071647e-002 -1.0065728282e-002 9.9916320388e-001
-1.1945327215e-001 -1.6066502534e-002 -4.0000672191e-001
6.8556802108e+002 -8.5817439178e-002 -1.9029040798e-002
9.9071140322e-001 5.7798475103e-002 -1.2308635910e-001
-1.0008727000e-001 9.2269230682e-001 -3.7231901016e-001
9.2051365578e-002 3.8118006666e-001 9.1990668161e-001
-2.0419030672e-001 -1.1309509170e+000 -4.9814597685e-001
6.8340278196e+002 -1.0275312214e-001 -1.8491494029e-002
9.2296024395e-001 2.7121191052e-002 -3.8393857463e-001
2.2870682478e-001 7.6366983583e-001 6.0373965427e-001
3.0957644677e-001 -6.4503707090e-001 6.9863409648e-001
-1.1194835731e+000 1.8679491761e+000 -8.9878098319e-001
6.7312430685e+002 -1.1220939686e-001 2.6068044663e-003
9.7094006942e-001 1.7350651792e-002 -2.3869297533e-001
1.3528752594e-001 7.8292714291e-001 6.0722514295e-001
1.9741496122e-001 -6.2187140454e-001 7.5782800773e-001
-3.4669200651e-001 2.1011837748e+000 -6.5191141116e-001
-4.3599130124e-001 -3.4406986976e-001 -2.6258599380e+000
which I bring into transformation matrices as
Camera0 = freecamera ()
myTranslation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation.row1 = [9.9137151689e-001, -1.1993357464e-001, 5.2900408063e-002]
myRotation.row2 = [9.8726398766e-002, 9.4864141643e-001, 3.0055375763e-001]
myRotation.row3 = [-8.6230004559e-002, -2.9273776783e-001, 9.5229668990e-001]
myTranslation.row4 = [2.0104152055e-001, 1.0006100568e+000, -5.3611738918e-001]
Camera0.transform = myTranslation * myRotation
Camera1 = freecamera ()
myTranslation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation.row1 = [9.8107969862e-001, 6.4153247757e-002, -1.8266632352e-001]
myRotation.row2 = [-1.7897830105e-002, 9.6951650853e-001, 2.4437145365e-001]
myRotation.row3 = [1.9277523862e-001, -2.3647854127e-001, 9.5232116793e-001]
myTranslation.row4 = [-4.8824954210e-001, 8.3078302635e-001, -3.2765590435e-001]
Camera1.transform = myTranslation * myRotation
Camera2 = freecamera ()
myTranslation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation.row1 = [8.2472475473e-001, 3.1837521750e-001, -4.6740378670e-001]
myRotation.row2 = [-1.5041243203e-001, 9.2019899304e-001, 3.6139993567e-001]
myRotation.row3 = [5.4516527699e-001, -2.2775213301e-001, 8.0679847960e-001]
myTranslation.row4 = [-1.2503867966e+000, 1.1229738934e+000, -1.2577283457e+000]
Camera2.transform = myTranslation * myRotation
Camera3 = freecamera ()
myTranslation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation.row1 = [5.5296099706e-001, 4.4786856695e-001, -7.0260079879e-001]
myRotation.row2 = [-2.7796451946e-001, 8.9409871323e-001, 3.5117405502e-001]
myRotation.row3 = [7.8547429089e-001, 1.1125377983e-003, 6.1889328693e-001]
myTranslation.row4 = [-1.9044153757e+000, 1.2161906946e+000, -1.3300891219e+000]
Camera3.transform = myTranslation * myRotation
Camera4 = freecamera ()
myTranslation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation.row1 = [0, 0, 0]
myRotation.row2 = [0, 0, 0]
myRotation.row3 = [0, 0, 0]
myTranslation.row4 = [0, 0, 0]
Camera4.transform = myTranslation * myRotation
Camera5 = freecamera ()
myTranslation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation.row1 = [9.1291554876e-001, -2.8328096147e-001, 2.9383175067e-001]
myRotation.row2 = [1.9935278303e-001, 9.3767914176e-001, 2.8463361539e-001]
myRotation.row3 = [-3.5615118802e-001, -2.0127027595e-001, 9.1249471631e-001]
myTranslation.row4 = [1.1266368937e+000, 1.0317378118e+000, -4.2850805998e-001]
Camera5.transform = myTranslation * myRotation
Camera6 = freecamera ()
myTranslation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation.row1 = [0, 0, 0]
myRotation.row2 = [0, 0, 0]
myRotation.row3 = [0, 0, 0]
myTranslation.row4 = [0, 0, 0]
Camera6.transform = myTranslation * myRotation
Camera7 = freecamera ()
myTranslation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation.row1 = [9.9916713027e-001, -9.2759089965e-003, -3.9736674407e-002]
myRotation.row2 = [9.6681255192e-003, 9.9990631492e-001, 9.6896193167e-003]
myRotation.row3 = [3.9643071647e-002, -1.0065728282e-002, 9.9916320388e-001]
myTranslation.row4 = [-1.1945327215e-001, -1.6066502534e-002, -4.0000672191e-001]
Camera7.transform = myTranslation * myRotation
Camera8 = freecamera ()
myTranslation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation.row1 = [9.9071140322e-001, 5.7798475103e-002, -1.2308635910e-001]
myRotation.row2 = [-1.0008727000e-001, 9.2269230682e-001, -3.7231901016e-001]
myRotation.row3 = [9.2051365578e-002, 3.8118006666e-001, 9.1990668161e-001]
myTranslation.row4 = [-2.0419030672e-001, -1.1309509170e+000, -4.9814597685e-001]
Camera8.transform = myTranslation * myRotation
Camera9 = freecamera ()
myTranslation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation.row1 = [9.2296024395e-001, 2.7121191052e-002, -3.8393857463e-001]
myRotation.row2 = [2.2870682478e-001, 7.6366983583e-001, 6.0373965427e-001]
myRotation.row3 = [3.0957644677e-001, -6.4503707090e-001, 6.9863409648e-001]
myTranslation.row4 = [-1.1194835731e+000, 1.8679491761e+000, -8.9878098319e-001]
Camera9.transform = myTranslation * myRotation
Camera10 = freecamera ()
myTranslation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation = matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]
myRotation.row1 = [9.7094006942e-001, 1.7350651792e-002, -2.3869297533e-001]
myRotation.row2 = [1.3528752594e-001, 7.8292714291e-001, 6.0722514295e-001]
myRotation.row3 = [1.9741496122e-001, -6.2187140454e-001, 7.5782800773e-001]
myTranslation.row4 = [-3.4669200651e-001, 2.1011837748e+000, -6.5191141116e-001]
Camera10.transform = myTranslation * myRotation
Eugene
May 29, 2010 -
Hi Josh,
I am quite sure that the focal length in Photosynth is simply given as a ratio of the focal length to the sensor width (longest side). So, just multiply the value that Photosynth is giving you by the sensor width and it should be back to a mm value.
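In code, with made-up numbers just to show the arithmetic:

photosynth_focal = 1.25  # hypothetical value exported from a synth
sensor_width_mm = 7.176  # hypothetical CCD longest side
print(photosynth_focal * sensor_width_mm)  # ~8.97 mm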
I would like to try your code and see what can be done. I also posted on the CodePlex site to respond to your question.
Good job,
Eugen
Nate Lawrence
May 30, 2010 -
Hey there, Josh,
I can’t wait to put this to work, once the focal length issue is worked out.
A few questions for you:
Does your script always assume a three digit number sequence at the end of the photo filenames? Perhaps this is an assumption of Bundle2PMVS? Could four digit number sequences be supported?
Also, does it assume that all images are the same resolution? It would seem so. What would your advice be for collections of photos from multiple cameras?
Does it handle mixed collections (comprised of both landscape and portrait shots) well?
Looking forward to hearing more.
Nate
Josh
May 30, 2010 -
@Nate,
Yes, I am looking forward to ironing out the bugs and making it as helpful as possible.
In response to your questions:
Setting the count size is not a problem; I can update that next time I have a look at the code.
At the moment it is locked to a single resolution, and I’m not sure what the best way to handle multiple resolutions is. Any suggestions?
Also, AFAIK PhotoSynth considers horizontal and vertical image formats as the same, but with a camera rotation. I could be wrong. Do you have a mixed PhotoSynth to check the AspectRatio field against?
Kind Regards,
Josh
Nate Lawrence
May 30, 2010 -
@Josh,
Thanks for the quick answers.
As to multiple resolutions, I know that the Deep Zoom Collection (.dzc) file for the synth specifies all of the resolutions of the images in a synth. I’m not certain whether this corresponds to the image order listed in the synth’s .log file. Presumably its XML can be parsed out into a plain text file containing only the necessary information.
On the other hand, perhaps there is some simple utility to run on the local images to generate a list of filenames and corresponding resolutions.
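Something like this quick Pillow sketch would do for local images (the *.jpg pattern is just an assumption):

import glob
from PIL import Image

for name in sorted(glob.glob("*.jpg")):
    w, h = Image.open(name).size  # pixel dimensions straight from the file
    print(name, w, h)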
Ideally we could have a tool that exports the highest resolution from each .dzi pyramid in a synth and pieces the tiles back together correctly (taking tile overlap into account, etc.) which could simultaneously generate the resolution for each. This would mean that in cases where people have licensed their photos with Creative Commons licenses, we could try out dense reconstructions of their synths by downloading the original sized images for their synth (having undergone some quality loss during compression to tiles and subsequent recompositing), just as Christoph’s exporter currently lets us download their pointclouds and camera metadata.
As to a synth with mixed aspect ratios, give this a whirl:
http://photosynth.net/view.aspx?cid=afaf7f93-898b-4a54-8594-4d0dd588e3e2
Also, although this is not especially synthy, here’s a synth from scraped web photos that has plenty of different aspect ratios and resolutions, even among the few photos which actually did synth together:
http://photosynth.net/view.aspx?cid=2713eacf-1992-4928-8a1c-7f9c18092dbc
Cheers,
Nate
Josh
May 30, 2010 -
@Nate,
The use case I am working toward at the moment is just uploading your own images, getting PhotoSynth to reconstruct them, then pulling it all back down and processing with PMVS2 for a dense point cloud. I agree it would be very useful to be able to pull the original images off PhotoSynth as well, to allow processing of others’ Synths, but I have no experience with Deep Zoom Collections. Is there a simple way you know of to get the file information and highest-res images?
… Okay, I’ve done some checking and the .dzc files for the Synths are not hard to get to. For the full process the images would have to be stitched back together at the original resolution, but this doesn’t look like a major technical challenge, although it is beyond the scope of my JavaScript output generator.
You are also correct about landscape and portrait images being listed with different aspect ratios; I checked the camera parameters of the first Synth.
I am excited about the possibility of further processing arbitrary PhotoSynths (I’ll probably implement it in C# as a Windows application like SynthExport), but for the moment I need to get my simplest case working.
I’ll have a go at mapping the rotation and transformation output by PhotoSynth, and compare it to what I have from working Bundler sets.
Kind Regards,
Josh
John Ellis
Jun 24, 2010 -
Any luck on getting the camera orientation working? I am totally stoked to see people working on this. I think that this is one of the coolest uses of spatial software. Right now I am working on a pipeline to pull the point clouds into GIS software, to place the points and the eventual model in the actual world. I have been having some success with it too. 🙂 This is useful for things like uploading to Google Earth and the like.
Regards,
John
Josh
Jun 24, 2010 -
@John:
Do you have any documentation of your results? It would be fun to see what you’ve got!
As I say to Reza, it’s in the pipeline and should be done within the fortnight, when I have some time to sort it out. I’m really looking forward to finally getting some good 3D results out of the multi-camera scans I took way back in 2007!
By the way, you might be interested in the Washington U. PhotoCity iPhone App, which is effectively PhotoSynth on legs, crowd-sourcing full environment scans.
Reza
Jun 24, 2010 -
Hi Josh,
What are the current problems you are encountering in getting the list.txt and bundle.out from a synth and feeding them to PMVS2? Is there a link to the JavaScript? I am frustrated with Bundler not including all my images for PMVS2, whereas Photosynth does a better job at matching all of them.
Josh
Jun 24, 2010 -
Hi Reza,
Yes, this is exactly why I am pursuing this myself; the results are definitely better using PhotoSynth. As a bonus it is also much easier to install and use!
I’ve taken down the JavaScript until it is working, but I will probably just modify the SynthExport code to export into Bundler’s output format. The relationship is actually defined in the Bundler documentation, though the coordinate systems are different. I don’t think it takes much more than shifting axes to get it working, but I’m pretty busy at the moment.
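For what it’s worth, the core of that relationship, leaving the axis shifts aside (a sketch: Rcw is the camera-to-world rotation and C the camera centre from the synth):

import numpy as np

def to_bundler(Rcw, C):
    # Bundler's convention is x_cam = R @ X + t, so from a camera-to-world
    # rotation and camera centre: R = Rcw.T and t = -R @ C.
    R = Rcw.T
    t = -R @ C
    return R, t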
I aim to commit some time to get it working sometime in a couple of weeks.
Nate Lawrence
Aug 19, 2010 -
Have you had any time to devote to further work on this lately, Josh?
John Ellis
Aug 20, 2010 -
I actually can’t publish the data I have right now because it is archaeological, but I am thinking of trying something that can be seen on Google Earth, maybe getting it into a KML/DAE file.
John Ellis
Aug 20, 2010 -
Also, I have been experimenting with Bundler, but have not had any reasonable success. It runs, but the output is fractured and nonsensical, whereas in Photosynth it runs and looks great. I guess Microsoft actually improved on something :-D. Any guesses as to why my point clouds in Bundler are looking odd?
John Ellis
Aug 20, 2010 -
I found this discussion; it is kind of interesting…
http://photosynth.net/discussion.aspx?cat=6b63cb81-8b57-4d5d-a978-41d5509bf59a&dis=07bd9f28-a398-407c-b127-84d654eba536
Dense point cloud straight from Photosynth.
Nate Lawrence
Aug 20, 2010 -
As it turns out, I’m the Nathanael from that very discussion thread. =)
No clue on the odd Bundler output. I’ve seen Nathan Craig say that he gets better results from Bundler than Photosynth, so it’s interesting to me to see different people come away with different experiences.
My only real interaction with Bundler had been via PhotoCity, but due to some technical difficulties on their end, I’ve never been able to really build out my reconstructions there like I’ve wanted. A photo set of mine that Photosynth got right in one try has had some serious troubles in PhotoCity, with some symmetrical architecture being merged rather than separated. In all fairness, that has something to do with how I shot that particular set, which didn’t follow my normal rule of capturing plenty of surrounding detail to differentiate the two identical structures. They both had unique marble textures and different foliage in front of them, but it was still too similar for Bundler to separate.
I’d love to run Bundler locally, but the setup instructions seem to be written in Linux terminology that I have trouble making sense of, as I’m pretty dependent on a GUI. I downloaded the Windows binary and all of the dependencies back in 2008, but finally gave up trying to make it work. I wish that someone would simply make a Windows installer.
That said, people who have gotten Bundler to run under Cygwin in Windows report that it performs better in Linux anyway, and there is also the complication that finding a viewer for Bundler’s output seems a bit elusive for a less technical person like me. Both of these are a little discouraging.
As to Microsoft improving the Bundler codebase to turn it into the synther… I would certainly hope that there was SOME reason for the wait between the 2006 Photosynth CTP and its 2008 public release. 😉
Here are a few videos that feature a few of the Photosynth team members discussing those very improvements.
http://on10.net/blogs/nic/ShutterSpeed-EP04-The-Photosynth-Team/
http://channel9.msdn.com/posts/Dan/Blaise-Aguera-y-Arcas-The-technology-behind-Photosynth/
http://channel9.msdn.com/posts/Dan/Drew-Steedly-and-Joshua-Podolak-on-Photosynth/
More to be watched here: http://photosynth.ning.com/video/
John Ellis
Aug 21, 2010 -
I think I figured out my problem with Bundler: my camera wasn’t registered in extract_focal.pl, so I had to add a line to that file with my camera model and its CCD size. Effectively it was estimating my focal length as double what it was, so my PLY files were all exploding out in one direction.
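For anyone hitting the same thing, the relation extract_focal.pl applies boils down to this (a sketch with made-up numbers):

def focal_pixels(focal_mm, ccd_width_mm, image_width_px):
    # EXIF focal length in mm -> focal length in pixels, via the CCD width;
    # an undersized CCD width inflates the estimate (and vice versa).
    return focal_mm / ccd_width_mm * image_width_px

print(focal_pixels(7.1, 5.27, 2272))  # ~3061 px, made-up camera numbers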
My setup to get Bundler running: I installed ActivePerl, then Cygwin with a full install (every package listed; I know there is a lot there that I don’t need, but I don’t like trying to solve dependency issues), then followed the instructions here ( http://www.personal.psu.edu/nmc15/blogs/anthspace/2010/01/bundler-installation-procedures.html )
and it worked after I did that. I did, however, have to change these lines in extract_focal.pl and RunBundler.sh:
extract_focal.pl
$BIN_PATH = "/home/John/Bundler/bin";
RunBundler.sh
BASE_PATH=/home/John/Bundler
Yes, it took me a while to get Bundler working on my machine too, but this configuration worked for me. I am hoping to get output that I can pull into CMVS to get a dense point cloud. 😀
John
John Ellis
Aug 21, 2010 -
Well, I was able to get CMVS and PMVS2 to work on my dataset 😀 I now have a 3D reconstruction of the points too 🙂 Exciting times in computer vision.
Josh
Aug 22, 2010 -
@Nate: MeshLab is a good viewer for PLYs, and it also comes with Poisson reconstruction built in. This is needed for creating meshes once you’ve run your Bundler output through CMVS and PMVS to generate dense point clouds.
@Everyone: Bundler, CMVS, and PMVS2 are now all available as Windows binaries. Also, Henri Astre of the Visual Experiments blog has been doing some amazing work. Not only has he made a GPU feature matcher for Bundler which significantly speeds up matching, he’s just come out with a PhotoSynth Toolkit that processes PhotoSynths into PMVS2. Well done, Henri!
John Ellis
Aug 24, 2010 -
Great work! I tried it with his examples but am having a hard time with my own. Does the software require a synth to be 100% synthy? I get errors of "Value type is 6 not 1", though I'm not sure what this means.
Josh
Aug 24, 2010 -
@John;
Yes, it looks like it. I’ve also found that if Synths are not 100% synthy it gives that error. It should not be hard to trim the images though, and I suppose that helps ensure the images placed in "distort" are correct.
John Ellis
Aug 25, 2010 -
@Josh
Yes, later today I am looking at the CMVS output for the same image set that I ran earlier. I am seeing if we can do a sort of manual CMVS somehow. It is basically just grouping photos together into bite-size chunks so that they can be run through PMVS2 on a 32-bit or memory-limited machine. It also speeds up the process by a whole lot. What are your thoughts?
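Rough sketch of the grouping idea (option values are just commonly used PMVS2 defaults, and the counts here are hypothetical; "timages -1 a b" selects the image index range [a, b)):

def write_options(path, first, last):
    with open(path, "w") as f:
        f.write("level 1\ncsize 2\nthreshold 0.7\nwsize 7\n")
        f.write("minImageNum 3\nCPU 4\nuseVisData 0\nsequence -1\n")
        f.write("timages -1 %d %d\n" % (first, last))
        f.write("oimages 0\n")

num_images, chunk = 120, 30
for lo in range(0, num_images, chunk):
    write_options("option-%04d" % (lo // chunk), lo, min(lo + chunk, num_images))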
Josh
Aug 25, 2010 -
@John
Sounds like a good plan. I was going to mention that the vis.dat file speeds up PMVS2 too, but manually setting up image groups will effectively be the same thing.
Tim
Sep 2, 2010 -
Josh, can you point me to the GPU feature matcher for Bundler that you mention has been developed by Henri Astre? Is it available for download?
Josh
Sep 2, 2010 -
@Tim,
I was expecting Henri to release his code by now, but it looks like he’s been working on getting the PhotoSynthToolkit released. I emailed him directly to get hold of it; give it a go, and maybe suggest he make it available! 🙂
Tim
Sep 3, 2010 -
Thanks Josh. I contacted Henri and he has said he will release it in a few days. Great news!
Arjun Jain
Jan 30, 2015 -
Hey Guys,
I am the same guy who posted yesterday on the thread: https://synthexport.codeplex.com/discussions/204015
Does anyone have the code or equations to project 3D points from the scene using different cameras?
I would be very grateful.
Thanks,
Arjun