4.1 3D Photogrammetry

Launch the Photogrammetry notebook in Google Colab and follow the directions; this notebook can be connected to your Google Drive.

In recent years, faster and more powerful computers have made it feasible to build complex 3d models by extracting points of overlap in multiple photographs, and then extrapolating from the camera metadata embedded in those images the distance from the points to the camera’s image sensor. This information allows the reconstruction of where those points were in space relative to the camera. Astonishingly good 3d models can thus be created at rather low cost.

Laser scanning, on the other hand, involves shooting rays of light onto an object (or space) and measuring the time it takes for the light to return to the scanner. Laser scanners can therefore capture an object’s surface and texture at sub-millimetre detail. For some archaeological purposes, laser scanning is to be preferred. For other purposes, 3d photogrammetry or ‘structure from motion’ (sfm) is entirely appropriate, and the level of resolution is good enough.

In this chapter, we’ll cover some of the basic principles of how sfm works while pointing you to more detailed discussions. We also provide links in the ‘further readings’ to some recent case studies using 3d photogrammetry in novel ways.

4.1.1 Basic principles

Photogrammetry, as the name implies, is the derivation of measurements from photographs. In our case, we are talking about triangulation. From a series of photographs of an object we identify ‘control points’ (the same features appearing across multiple photographs) and ‘rays’ (lines-of-sight from the camera to those features). Then we work out where the rays intersect via triangulation. (Our eyes and brain do this naturally; we call it ‘depth perception’.) Once we’ve done this enough times, we end up with a cloud of points giving three-dimensional positions in space relative to the camera that took the photographs. Various algorithms can then join up the points into a mesh, onto which we can project the image information. The quality of the result depends on the software and algorithms, the quality of the camera, the lighting conditions, and the skill of the photographer.

Nowadays, visually appealing models can be generated by low-cost smartphone apps and shared immediately with services such as Sketchfab (check out their Cultural Heritage & History category). For recording purposes, or for 3d printing artefacts afterwards for study, higher-power cameras and software are generally used (Agisoft Photoscan is an often-used product in this regard). Open-source software is quite powerful, but packages like VisualSFM (one of the best known) can be difficult to set up. (If you are familiar with Docker, Ryan Baumann has simplified some of the process.) More images and more computational power do not always lead to better results, however (Baumann 2015).

In general, it takes practice to develop the necessary photographic and technological skill/intuition to get the best out of one’s photographs and one’s software. In the exercise below, we introduce you to a workflow using Regard3d, a graphical user interface for working with a number of algorithms at each step of the process. It is also worth noting that 3d models can be generated from high-quality drone or other video; a workflow for this (which also uses Regard3d) may be found here.

The general process runs like this:

  • image capture: take overlapping images; you want a high degree of overlap. Knowing the camera’s ‘interior orientation’ (its internal arrangements: lens distortion, focal length, sensor size, and so on, much of it recoverable from the metadata bundled with each image) allows the software to work out its ‘exterior orientation’, that is, the position of the camera with respect to the points of overlap in the images.
  • image matching: tie points are matched and camera orientations are deduced
  • dense point cloud generation: the intersection of rays allows us to work out the location of these points in space
  • secondary product generation
  • analysis / presentation

4.1.2 Further Readings

The following will challenge your sense of what is possible with 3d photogrammetry, and how/why archaeologists should think critically about this technology.

Eve, S. (2018). Losing our Senses, an Exploration of 3D Object Scanning. Open Archaeology, 4(1), pp. 114-122. Retrieved 7 Aug. 2018, from doi:10.1515/opar-2018-0007

Reilly, P. (2015). Additive Archaeology: An Alternative Framework for Recontextualising Archaeological Entities. Open Archaeology, 1(1). Retrieved 7 Aug. 2018, from doi:10.1515/opar-2015-0013

Verdiani, G. (2015). Bringing Impossible Places to the Public: Three Ideas for Rupestrian Churches in Goreme, Kapadokya Utilizing a Digital Survey, 3D Printing, and Augmented Reality. Open Archaeology, 1(1). Retrieved 7 Aug. 2018, from doi:10.1515/opar-2015-0007

4.1.3 Exercises

While there are command-line applications (like VisualSFM) for photogrammetry, installing them is not for the faint of heart (see, e.g., this guide for Ubuntu). The notebook linked at the start of this chapter uses Meshroom from AliceVision as a command-line interface to a Google-provided GPU, and is worth exploring. But if you would like to try something on your own machine, Roman Hiestand built a graphical user interface around a series of open-source modules that, when combined in a workflow, enables you to experiment with photogrammetry. With a bit of hacking, we can also make it work with photographs taken with a smartphone or tablet.
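For the curious, the core of that notebook’s pipeline is a single Meshroom batch command along these lines (a sketch only: the binary name varies between Meshroom releases, in some it is meshroom_photogrammetry, and the paths here are illustrative):

meshroom_batch --input ./photos --output ./model

The rest of the pipeline (feature extraction, matching, densification, meshing, texturing) runs automatically under that one command.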

Download and install the relevant version of Regard3d for your operating system.

  1. Try Hiestand’s tutorial using the images of the Sceaux Castle. This tutorial gives you a sense of the basic workflow for using Regard3d.

  2. Take your own photographs of an object. Try to capture it from every angle, making sure that there is a high amount of overlap from one photograph to another. Around 20 photographs can be enough to make a model, although more data is normally better. Copy these photos to your computer. A note on smartphone cameras: While many people now have powerful cameras in their pockets in the form of smartphones, these cameras are not in Regard3d’s database of cameras and sensor widths. If you’re using a smartphone camera, you will have to add this data to the metadata of the images, and then add the ‘camera’ to the database of cameras. If you’ve taken pictures with an actual digital camera, chances are that this information is already present in the Regard3d database. You’ll know if you need to add information if you add a picture set to Regard3d and it says ‘NA’ beside the picture.

Open Regard3d and start a new project. Add a photoset by selecting the directory where you saved the photos.

Click OK to use the images. If Regard3d doesn’t recognize your camera, check the ‘Adding metadata to images’ section below.

Click on ‘compute matches’. Try with just the default values. If the system cannot compute matches, try again, but this time slide the two ‘keypoint density’ sliders all the way to ‘ultra’. Using ‘ultra’ means we get as many data points as possible, which can be necessary given our source images (warning: this is also computationally very heavy, and if your machine does not have enough memory the process can fail). This might take some time. When it is finished, proceed through the next steps as Regard3d presents them to you; the options in the bottom-left panel of the program are context-specific. If you want to revisit a previous step and try different settings, select the results from that step in the inspector panel (top left) to redo it.

The final procedure in model generation is to compute the surfaces. When you click on the ‘surface’ button (having just completed the ‘densification’ step), make sure to tick the ‘texture’ radio button. When this step is complete, you can hit the ‘export’ button. The model will be in your project folder as .obj, .stl, and .png files. To share the model on a service like Sketchfab.com, zip these three files into a single archive. On Sketchfab (or similar services), you upload the zip; the service unzips it, and its 3d viewer knows how to read and display your data.
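On macOS or Linux you can make the archive from the terminal (the filenames here are examples; use whatever names Regard3d actually exported):

zip mymodel.zip mymodel.obj mymodel.stl mymodel.png

Windows users can select the three files in Explorer, right-click, and choose ‘Send to > Compressed (zipped) folder’.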

  3. Cleaning up a model with Meshlab. Building a 3d model takes skill, patience, and practice. No model ever appears ‘perfect’ on the first try. We can ‘fix’ a number of issues in a 3d model by opening it in a 3d editing programme. There are many such programmes out there, with varying degrees of user-friendliness. One open-source package that is often used is Meshlab. It is very powerful, but not especially intuitive or friendly. Warning: it has no ‘undo’.

Once you have downloaded and installed Meshlab, double-click on the .obj file in your project folder. Meshlab will open and display your model. The exact tools you might wish to use to enhance or clean up your model depend very much on how your model turned out. At the very least, you’ll use the vertex-select tool (which allows you to draw a box over the offending part) and the vertex-delete tool.

4.1.3.1 Adding metadata to images

  1. Go to https://www.sno.phy.queensu.ca/~phil/exiftool/index.html and download the version of ExifTool appropriate to your computer.
  • Windows users: you need to fully extract the tool from the zipped download, then rename the file to just exiftool.exe. (When you extract it, the tool is named exiftool(-k).exe; delete the (-k) from the file name.)
  • Move the file exiftool.exe to the folder where your images are.
  • Mac users: unzip if you need to, double-click on the .dmg file, and follow the prompts. You’re good to go.
  2. Navigate to where your images are stored. Windows users, search your machine for ‘command prompt’; Mac users, search your machine for ‘terminal’. Run that program. This opens a window where you can type commands to your computer. You use the cd command to ‘change directories’, followed by the exact location (path) of your images. On a PC it’ll probably look something like cd c:\users\yourname\documents\myimages. When you’re in the location, dir will show you everything in that directory; Mac users, the command ls will list the directory contents. Make sure you’re in the right location (and, Windows users, that exiftool.exe is in that directory).
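Putting that together, a typical Windows session might look like this (the path is an example; substitute the actual location of your images):

cd c:\users\yourname\documents\myimages
dir exiftool.exe

The second command simply confirms that exiftool.exe is sitting in the image folder. Mac users would cd to the folder and run ls instead.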

  3. The following commands will add the required metadata to your images. Note that each command says, in effect: ‘exiftool, change the following metadata field to this new value for the following images.’ The *.jpeg means every single .jpeg in this folder. NB: if your files end with .jpg, you’d use *.jpg instead.

exiftool -FocalLength="3.97" *.jpeg

This sets the focal length of your images to 3.97 mm. You should search for your cellphone make online to see if you can find the actual measurement. If you can’t find it, 3.97 is probably close enough.

exiftool -Make="CameraMake" *.jpeg

You can use whatever value you want instead of CameraMake. E.g., myphone works.

exiftool -Model="CameraModel" *.jpeg

You can use whatever value you want, e.g., LG3.

If all goes according to plan, the computer will report the number of images modified. ExifTool also keeps a copy of each original image with the new file extension .jpeg_original, so that if something goes wrong you can delete the modified files and restore the originals by changing their file names (e.g., remove _original from the name).
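Incidentally, the three edits can be made in a single pass, and you can read the tags back afterwards to confirm they were written (the values here are just the examples from above):

exiftool -FocalLength="3.97" -Make="myphone" -Model="LG3" *.jpeg
exiftool -FocalLength -Make -Model *.jpeg

The second command prints those three fields for every image in the folder.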

  4. Regard3d looks for that metadata in order to do the calculations that generate the point cloud from which the model is created. It needs the focal length and the size of the image sensor to work. It reads the metadata on make and model and compares it against a database of cameras to get the size of the image sensor. Oddly enough, this information is not encoded in the image metadata, which is why we need the database. This database is just a text file that uses semicolons to delimit the fields of information. The pattern looks like this: make;model;width-in-mm, e.g., Canon;Canon-sure-shot;6. So we find that file, and we add our camera’s information at the end of it.

Windows users: this information will be at this location:

C:\Users\[User name]\AppData\Local\Regard3D

e.g., on my PC:

C:\Users\ShawnGraham\AppData\Local\Regard3D

and is in the file “sensor_database.csv”.

Mac users: open your Finder, hit shift+command+g, and go to

/Applications/Regard3D.app/Contents/Resources

  • Do not open sensor_database.csv with Excel; Excel will add hidden characters which cause trouble. Instead, you need a proper text editor to work with the file (Notepad or WordPad are not useful here). One good option is Sublime Text. Download it, install it, and use it to open sensor_database.csv.
  • Add whatever you put in for camera make and camera model (back in step 3) exactly; uppercase/lowercase matters. You can search to find the size of the image sensor for your cell phone camera. Use that info if you can find it; otherwise 6 mm is probably pretty close. The line you add to the database will look something like this:

myphone;LG3;6

Save the file. Now you can open Regard3d, ‘add a picture set’, select these images, and Regard3d will have all of the necessary metadata with which to work.
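As an alternative to editing the file by hand, you can append the line from the terminal (macOS/Linux shown; run it from the directory that contains the database file, and use your own make/model/width values):

echo "myphone;LG3;6" >> sensor_database.csv

Either way, the result is the same: one new line at the end of the database.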

References

Baumann, Ryan. 2015. “Qualitative Photogrammetry Comparisons Gallery.” /etc (blog). https://ryanfb.github.io/etc/2015/07/27/qualitative_photogrammetry_comparisons_gallery.html.