Always carefully read the documentation. I’ve had to correct a couple of blog posts this week because of my own mistake: confusing the unhelpful mapMultipleTexturesToMeshUV with the incredibly useful textureMeshWithMultipleCameras. Both take the same parameters (in order: a texture mesh and a vector of cameras) and output a multiple-texture material. The difference is that the latter automatically figures out which polygons should use which textures for you. Now we have textures from three separate cameras, each projecting from its correct angle onto one shared mesh.

Missing references to the transparency texture caused bright white artifacts.



TextureMeshWithMultipleCameras

So this function actually works like a charm once you read the documentation correctly. Unlike with mapMultipleTexturesToMeshUV, you don’t create your own submeshes to represent the visible surfaces per texture. Instead, you send one submesh for all surfaces, and the function determines which cameras can see which faces (as it should!), creating the respective submeshes in the process. This also explains why mapMultipleTexturesToMeshUV had no documentation on the subject: it’s handled here automatically. Now that it is implemented correctly, textures are applied accurately to all sides.
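Conceptually, the visibility sort the function does for you might look something like the sketch below. This is my own hedged illustration, not the library’s actual code — `assign_faces_to_cameras` and the simple face-normal-versus-camera test are stand-ins, and a real implementation would presumably also depth-test faces against the rest of the mesh to handle occlusion:

```python
import math

# Conceptual sketch: assign each face of one shared mesh to the camera that
# views it most head-on, producing one submesh (face list) per camera.

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return tuple(x / n for x in v)

def assign_faces_to_cameras(vertices, faces, camera_positions):
    """Return {camera_index: [face_index, ...]} — one submesh per camera."""
    submeshes = {i: [] for i in range(len(camera_positions))}
    for fi, (ia, ib, ic) in enumerate(faces):
        va, vb, vc = vertices[ia], vertices[ib], vertices[ic]
        # Face normal from the winding order, and the face's center point.
        normal = _normalize(_cross(_sub(vb, va), _sub(vc, va)))
        center = tuple((va[k] + vb[k] + vc[k]) / 3 for k in range(3))
        best, best_score = None, 0.0
        for ci, cam in enumerate(camera_positions):
            to_cam = _normalize(_sub(cam, center))
            score = _dot(normal, to_cam)  # > 0: the face fronts this camera
            if score > best_score:
                best, best_score = ci, score
        if best is not None:  # back-facing to every camera -> left untextured
            submeshes[best].append(fi)
    return submeshes
```

With two opposite-facing triangles and a camera on each side, each face ends up in the submesh of the camera looking straight at it — which is exactly the per-texture split the old function expected you to do by hand.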

Shadows everywhere! Not quite the intended look.

MTL Limitations

The export format is unable to store certain material information, though. I did successfully get the occluded faces to render as transparent, but the visible materials all still respond to scene lighting - not a desired effect when the lighting was already baked in when we videogrammetrically captured the scene. Blender has settings for emission and for reducing the intensity of specularity and diffusion, but those appear not to be expressible in the MTL file format. It’s a shame, too, since otherwise the format exports perfectly, and with those few tweaks it would look great.
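For context, here is roughly what a single material entry in an MTL file can express. This is a hypothetical example (not my actual export), using only the classic Phong-era fields; note there’s no standard slot for emission or for flattening baked-in lighting, though some exporters write a nonstandard `Ke` emissive line with varying tool support:

```
newmtl camera0_material
Ka 1.0 1.0 1.0           # ambient color
Kd 1.0 1.0 1.0           # diffuse color
Ks 0.0 0.0 0.0           # specular color (zeroed to suppress highlights)
Ns 0.0                   # specular exponent
d 1.0                    # dissolve (overall opacity)
illum 1                  # illumination model: diffuse, no specular
map_Kd camera0.png       # diffuse texture from this camera
map_d camera0_alpha.png  # transparency map (hides occluded faces)
```

Zeroing `Ks` and choosing `illum 1` is about as far as you can push the format toward “unlit” - the diffuse term still gets multiplied by whatever lights the importing application applies.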

The magic of submeshes with transparent textures, revealed!

Next Steps

I want to jump into bulk/multiframe rendering/exports next, but the results here are so dependent on the mesh shape that I’m inclined to return to reconstruction for a bit. It may come down to whatever feels most important this week. School just started up again and I expect my schedule to shift one way or another. Hopefully though, this weekend I’ll be able to get an animated, textured, reconstructed series of meshes to play in VR in Unity. And for now, that would be enough - just maybe not quite ready to show to the public yet.

