This model of the nativity scene is the product of several processes: photography, mesh reconstruction, additional mesh processing, and finally, export for the web using the Blend4Web Blender addon. I'm not going to cover the process in exhaustive detail, but I will summarize what I did to reconstruct the model you see here. There are many tutorials on the web explaining how to perform 3D reconstruction from photographs or video footage (Gleb Alexandrov has a good tutorial on the subject here: https://www.youtube.com/watch?v=GEAbXYDzUjU).
I took many photographs from many different angles, then imported them into VisualSFM 1, a free program, to compute feature matches between image pairs. This step, even with the help of GPU acceleration, was quite time-consuming and required patience. After that, I used VisualSFM to perform sparse and then dense reconstruction, which resulted in a point cloud.
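The dense reconstruction typically produces the point cloud as a PLY file, with one line of coordinates and colors per point. As a rough sketch of what that data looks like, here is a minimal parser for the vertex block of an ASCII PLY file. The exact property layout assumed here (x, y, z followed by red, green, blue) is for illustration only; real reconstruction output usually also carries per-point normals.

```python
SAMPLE_PLY = """\
ply
format ascii 1.0
element vertex 2
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header
0.0 0.0 0.0 255 0 0
1.0 2.0 3.0 0 255 0
"""

def parse_ascii_ply_vertices(text):
    # Minimal, illustration-only parser: assumes an ASCII PLY whose
    # vertex properties begin with x y z and end with red green blue.
    lines = text.strip().splitlines()
    assert lines[0] == "ply", "not a PLY file"
    n_vertices = 0
    header_end = 0
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            n_vertices = int(line.split()[-1])
        if line == "end_header":
            header_end = i
            break
    vertices = []
    for line in lines[header_end + 1 : header_end + 1 + n_vertices]:
        fields = line.split()
        xyz = tuple(float(v) for v in fields[:3])
        rgb = tuple(int(v) for v in fields[-3:])
        vertices.append((xyz, rgb))
    return vertices

points = parse_ascii_ply_vertices(SAMPLE_PLY)
print(points)  # -> [((0.0, 0.0, 0.0), (255, 0, 0)), ((1.0, 2.0, 3.0), (0, 255, 0))]
```

Each vertex comes out as a position/color pair, which is exactly the information MeshLab needs for the resampling and texturing steps that follow.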
The point cloud data was then exported and imported into MeshLab 2 for further processing: resampling the point cloud with Poisson-disk sampling to even out the point distribution, then reconstructing a surface from the resampled points. An image texture, whose pixels were derived from the per-point colors, was also exported from MeshLab. After that, I imported the PLY model into Blender, cleaned up any unnecessary geometry, applied the image texture to the imported object, and used the Blend4Web 3 addon to bundle the textured model into a standalone web viewer, as you see here:
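The idea behind the Poisson-disk resampling step above can be sketched with naive "dart throwing": keep a point only if it lies at least a minimum radius away from every point already kept, which thins out over-dense regions of the cloud. MeshLab's actual filter uses a far more efficient algorithm; this O(n²) version is purely for intuition.

```python
def poisson_disk_subsample(points, radius):
    # Naive dart throwing, illustration only: accept each point in turn
    # if it is at least `radius` from every previously accepted point.
    kept = []
    r2 = radius * radius
    for p in points:
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= r2 for q in kept):
            kept.append(p)
    return kept

# A dense cluster plus one distant point: the cluster collapses to a
# single representative sample, while the distant point survives.
cloud = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (0.0, 0.01, 0.0), (5.0, 5.0, 5.0)]
print(poisson_disk_subsample(cloud, 0.5))  # -> [(0.0, 0.0, 0.0), (5.0, 5.0, 5.0)]
```

This is why the step matters before surface reconstruction: photogrammetry clouds are densest wherever the most photos overlap, and evening out the spacing gives the reconstruction cleaner, more uniform input.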