Photogrammetry is basically creating 3D models from a bunch of photos.

It’s surprisingly simple to use the software, which, not surprisingly, is very very complex to make :smile:

@Svenska and I have been playing around with different camera settings and workflows to produce models of objects, rooms and walls. We will post our successes, failures and tips here.
My end goal is to use aerial drones and produce a mobile app which shows custom 3D models of landscapes for outdoor sports etc. (no stealing my idea :wink:). I think Sven is currently trying to make models of objects to create contour-following CAD designs.

Cameras and lighting

After lighting objects from all sides and using green screens, the pros then use high-end SLRs with expensive lenses and tweak the results in Photoshop.

We used natural or lamp lighting with a point-and-shoot, a mobile phone, a GoPro and a low-end SLR, all with great results.

High-tech stabilised selfie stick ;-P


So much software has popped up ~~this year~~ in 2014:

- Hyperlapse (3D reconstruction from video, then stabilised, with a 4D tunnel made through the scene)
- LSD-SLAM (real-time point cloud mapping with a single camera, and open source)
- Agisoft Photoscan (which is what we are playing with at the moment; 3D and 4D scene and object reconstruction)


Will post more soon, but for now check out this quick render of the entry lounge using a GoPro and 20-ish photos. Click to view the 3D model on Sketchfab.

Tips and tricks

Lots of photos and computer processing time!

For Agisoft Photoscan (AgiPhoto) we have had the best results with highly textured and contrasting objects. Paintings on walls provide a wealth of tie-points for scenes. I have had epic failures when rotating the camera in place to take photos, as opposed to walking along and then taking another photo.
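On the “highly textured” point, here is a toy way to see why texture matters. This is purely an illustrative sketch (the scoring function and threshold idea are made up, not anything AgiPhoto actually does): it scores an image by the intensity variance inside small tiles, and a flat wall gives the matcher nothing to lock on to.

```python
# Toy texture check: photogrammetry matchers need local contrast to find
# tie-points. This score is just the mean variance of pixel intensities in
# each tile of a grayscale image. Flat walls score near zero, busy paintings
# score high. (Illustrative only; real matchers use SIFT-like features.)

def texture_score(pixels, tile=4):
    """pixels: 2D list of grayscale values 0-255. Returns mean tile variance."""
    h, w = len(pixels), len(pixels[0])
    variances = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            vals = [pixels[y + dy][x + dx] for dy in range(tile) for dx in range(tile)]
            mean = sum(vals) / len(vals)
            variances.append(sum((v - mean) ** 2 for v in vals) / len(vals))
    return sum(variances) / len(variances)

flat_wall = [[128] * 16 for _ in range(16)]                            # uniform grey
painting = [[(x * 37 + y * 91) % 256 for x in range(16)] for y in range(16)]

print(texture_score(flat_wall))   # 0.0 -- the matcher will struggle here
print(texture_score(painting))    # large -- plenty of contrast for tie-points
```

Same intuition explains the rotating-on-the-spot failure: rotation gives almost no parallax between shots, so even good tie-points can’t be triangulated.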

Stay tuned for what we make next.

Happy to teach, collaborate and learn from other members interested in this. Come say hi.



Ignoring my own advice above (the photos were taken a few days ago, before I did any research), I took 300 photos of a section of the Kangaroo Point cliffs. The software hated it! Now older and wiser, and having trimmed the bad photos down to just 69, I have ‘a’ result. It’s not fantastic, but it’s something from nothing :smile:


I have a matte black version of this symbol:

I’ve had very little success with most 3D scanning apps so far. Do you think Agisoft’s app would fix this, and should I do anything to make it clearer?

Nice one, what sort of settings did you use for the wall?

Hey @Svenska, I manually paired photos to optimise the relative positioning and trimmed the scene down in the dense point cloud. Originally I ran it in High mode, but the resulting file was above 200MB and too big to upload for free. I rebuilt the mesh in Normal, cut down the points and removed A LOT of faces. I kept the texture file high: 8× 4K images. I still haven’t figured out how to get my textures looking right in Sketchfab yet; it looks significantly better in the software than it does online.

Can you share some of your efforts? The mouse looked great considering it was rendered on low quality.

How do you manually pair? I was looking for that but couldn’t figure it out!

I think most apps prefer flat things; my guess is that’s so the relative perspective is more accurately calculated. Agisoft has its own markers. Not sure if others can be imported; the documentation doesn’t mention any support.

As for your markers. They look sweet!


What I meant was: choose two suitable photos and click align, rather than selecting all of them. That worked great to undo all those rubbish results I had from the top of the cliffs. It’s a pain to weed out the badly aligned photos, but it makes a huge difference. There also seems to be a manual marker/tie-point feature which I want to try next :smile:
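The weeding step can also be thought of as a simple filter. A minimal sketch, assuming each photo has a per-camera alignment error score like the reprojection error PhotoScan reports (the function name, threshold and filenames here are all made up for illustration):

```python
# Weeding badly aligned photos by hand is tedious; the same idea as a filter.
# Each photo gets an alignment-error score (e.g. reprojection error in pixels)
# and anything above a threshold is dropped before re-running alignment.

def keep_well_aligned(errors, max_error=1.0):
    """errors: dict of photo name -> alignment error (pixels).
    Returns the photo names worth keeping, best first."""
    kept = [name for name, err in errors.items() if err <= max_error]
    return sorted(kept, key=lambda name: errors[name])

shots = {"cliff_001.jpg": 0.4, "cliff_002.jpg": 2.7,   # blurry, drop it
         "cliff_003.jpg": 0.6, "cliff_004.jpg": 5.1}   # misaligned, drop it

print(keep_well_aligned(shots))   # ['cliff_001.jpg', 'cliff_003.jpg']
```

The threshold is a judgement call: too strict and you lose coverage (like going from 300 photos down to 69), too loose and the bad shots drag the whole alignment off.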

The next step is to place markers in my scenes and gather GPS data. From what I understand it’s needed to align chunks.


Sorry to necro;
@nogthree was talking about photogrammetry on Tuesday, and I remembered the Python Photogrammetry Toolkit. The easiest way to use it is to download ArcheOS.

Paper here:

An example use here

GUI on GitHub