
Go straight to PGM?

Can Bundler use PGMs? If so, maybe one should convert directly to PGM when resizing?

for f in *.JPG; do convert "$f" -resize 2048x1536 "${f%.JPG}.pgm"; done

--Slashme 06:26, 7 February 2010 (UTC)

I suspect that would impair the operation of Bundler. In the early stages of a run, the Exif tags of the JPGs are inspected for focal-length information, and I would guess that Exif tags aren't preserved during the conversion. Also, I'm pretty sure the script assumes it is being run in a directory full of JPGs.

Having said that, the RunBundler script looks very easy to modify, and the resizing of images could be incorporated into it. Ainsworth 09:53, 7 February 2010 (UTC)
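For reference, Bundler's helper scripts derive an initial focal length in pixels from the Exif focal length and the camera's sensor (CCD) width, roughly as below; this is why discarding Exif tags in a PGM conversion is a problem. The function name and sample values here are mine, for illustration only:

```python
def focal_pixels(focal_mm, ccd_width_mm, image_width_px):
    """Convert an Exif focal length (mm) to a focal length in pixels,
    given the sensor (CCD) width in mm and the image width in pixels.
    This is roughly what Bundler computes from the Exif tags that a
    PGM conversion would discard."""
    return image_width_px * focal_mm / ccd_width_mm

# Example: a 7.4 mm lens on a 7.18 mm-wide sensor, image resized to 2048 px wide
print(round(focal_pixels(7.4, 7.18, 2048)))
```

Note that the result scales with the image width, so if resizing is folded into RunBundler, the pixel focal length must be computed from (or scaled to) the resized width.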


Nochmaltobi, Why is it necessary to use Microsoft.VC90.OpenMP.manifest and vcomp90.dll? I run mogrify.exe without them. --vvoovv 20:21, 7 February 2010 (UTC)

Here (Windows XP) I needed these two files as well. Maybe you already have them in your PATH? Did you install ImageMagick with the installer previously? But of course, if they are actually not necessary, we should change the text back. Nochmaltobi 21:32, 7 February 2010 (UTC)
I also run Windows XP. I've never installed ImageMagick from the installer; I used the zip archive instead. The mogrify.exe file alone is enough, so I'm reverting the text. --vvoovv 22:04, 7 February 2010 (UTC)

OK, so we get the position of each photograph in some 3-axis coordinate system of unknown scale. If these photos are geotagged, can't we determine the position, rotation and scale of the whole point cloud by looking at these known (± ~5 m) points? I read somewhere that there are import scripts for gpx into Blender, so it must also be possible to export to gpx (or maybe just as a georeferenced rendering for the JOSM PicLayer plugin, for a start). Grenzdebil 16:32, 10 February 2010 (UTC)
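For what it's worth, spitting out a gpx file needs nothing beyond the Python standard library; a minimal sketch of writing camera positions as a GPX 1.1 track (the function and its names are hypothetical, not an existing script):

```python
import xml.etree.ElementTree as ET

def points_to_gpx(points):
    """Serialise (lat, lon) pairs as a minimal GPX 1.1 track that
    JOSM can open. `points` would be the geo-located camera positions."""
    gpx = ET.Element("gpx", version="1.1", creator="bundler-export",
                     xmlns="http://www.topografix.com/GPX/1/1")
    seg = ET.SubElement(ET.SubElement(gpx, "trk"), "trkseg")
    for lat, lon in points:
        ET.SubElement(seg, "trkpt", lat=f"{lat:.7f}", lon=f"{lon:.7f}")
    return ET.tostring(gpx, encoding="unicode")

print(points_to_gpx([(52.5, 13.4), (52.6, 13.5)]))
```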

Quite possibly; I've already thought a little about this. I don't think it would take much to write a script to take the output from Bundler and spit out a gpx file for JOSM; Blender is probably an unnecessary complication to the process. If certain assumptions are made (photos were taken from approximately the same level, not half from the top of a multi-storey car park), it would be a relatively simple process to geo-locate the data. I've already written one script that takes manually matched xy pairs and converts them to the format Bundler requires, and it doesn't seem too much of a stretch to write one to geo-locate the data. I might have a go this weekend. Ainsworth 19:41, 10 February 2010 (UTC)
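The geo-location step described above amounts to fitting a similarity transform (scale, rotation, translation) between the Bundler camera positions and the geotagged positions, once the latter are projected into a local metric frame. Under the flat-ground assumption it reduces to a 2-D least-squares problem, which complex numbers solve in a few lines; a sketch, with names of my own choosing rather than from any existing script:

```python
def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform (scale + rotation +
    translation) mapping src points onto dst points, both given as
    (x, y) tuples. Returns complex (a, t) such that q ≈ a*p + t,
    where a encodes scale and rotation and t the translation."""
    ps = [complex(x, y) for x, y in src]
    qs = [complex(x, y) for x, y in dst]
    pm = sum(ps) / len(ps)          # centroid of the source points
    qm = sum(qs) / len(qs)          # centroid of the target points
    num = sum((q - qm) * (p - pm).conjugate() for p, q in zip(ps, qs))
    den = sum(abs(p - pm) ** 2 for p in ps)
    a = num / den                    # optimal scale-and-rotation
    t = qm - a * pm                  # optimal translation
    return a, t
```

With at least two matched (camera, geotag) pairs this pins down the whole point cloud; with more pairs the ± ~5 m geotag noise averages out in the least-squares fit.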

3D point clouds by Mapillary

Mapillary blogged about some 3D point clouds they can generate from their vast collection of user-contributed street photos. I'm not clear whether the term "Photogrammetry" applies to this. If so, then we should maybe add a section about it to this page. Or alternatively rename this page "Photogrammetry using Blender" or something (since this page goes into quite a lot of detail about how User:Ainsworth did his experiment in 2010). -- Harry Wood (talk) 12:19, 3 December 2015 (UTC)

It's on the edge of photogrammetry and computer vision. The photogrammetrists will call it "playing around with nice point clouds" and point out the "abysmal" precision, and the computer vision folks will use it for cool visualisations and not bother much about whether the point cloud is precise to 30 cm or 3 m. I am more of a CV guy (and I did university research in the field), and I would definitely say it also belongs here on the photogrammetry page. --Gormo (talk) 20:11, 3 December 2015 (UTC)