So I’m holding off on reconstruction improvements for a bit. Microsoft’s new Fusion4D technique looks great, and reconstruction was proving fairly difficult on my end. Rather than trying to refine what’s a big research subject even for Microsoft, I’m going for texture reprojection to get a rough version of mesh videogrammetry going from start to finish (even with a few terrible artifacts). This will let me show a finished version of playback, even if the models have noise problems, and should give more insight into which issues are the hardest and most worth spending our time on. And currently, mesh texturing is proving to be in many ways the hardest step yet.

This is it folks.

Diminishing Documentation

I previously spoke incredibly highly of PCL on multiple occasions for its extensive documentation. Most things are incredibly well described, with tutorials, examples, and API references everywhere. The further I get into videogrammetry playback, though, the more I find that PCL no longer covers everything. It was a big negative when their examples for robust alignment didn’t work, and reconstruction often expected you to already understand most of it, but worse yet: texture mapping is all but undocumented in PCL. The API reference does list TextureMapping as a class and gives brief information on all its variables and functions, but there is nothing more than that. The library shows its robotics/ROS biases here, and that meant resorting to some more unorthodox documentation sources…

Speaking of robots - Thanks raph!

Secret Documentation

There are other, stranger sources that hold relevant information: forum posts here and there, and code from other parts of the project that implements the intended functionality. My main reference became kinfu, one of the built-in parts of PCL that provides Kinect Fusion-like functionality for scanning an entire room. That, combined with old 2012 posts on the dev forum from back when the original developer created the TextureMapping class in the first place, gave me enough information to get a few things working. Shout out here to raph, too, for both writing the entire texture mapping code himself and leaving a small trail of examples behind. This let me finally start following along and trying things for myself, and results began to come through.

That looks perfect, right?

mapTexture2MeshUV

I have tried two functions from the texture_mapping class so far. The first, mapTexture2MeshUV, takes a brute-force approach of slapping a texture file onto a mesh by whatever means necessary. It grabs the points at the farthest extents of the mesh (far left, far right, top, and bottom) and then stretches the texture from one side of the mesh to the other in both dimensions. This leads to a strangely distorted texture, but it does technically get the image on the mesh. It may have strong potential for reliefs and single-device captures/meshes, but from what I can tell it ultimately falls short for multi-device setups. At the least, it would need a lot of work, which led to exploring the alternative.
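For reference, the call itself is simple. Here’s a minimal sketch of how I’m invoking it, assuming `mesh` is a pcl::TextureMesh already filled by an earlier reconstruction step (the image path is a placeholder, not my real file):

```cpp
#include <string>
#include <vector>
#include <pcl/TextureMesh.h>
#include <pcl/surface/texture_mapping.h>

// Sketch only: `mesh` is assumed to come from an earlier reconstruction
// step, and "capture_0.png" is a stand-in for the real captured image.
void applyUVMapping (pcl::TextureMesh &mesh)
{
  pcl::TextureMapping<pcl::PointXYZ> tm;

  std::vector<std::string> tex_files;
  tex_files.push_back ("capture_0.png");  // hypothetical texture image
  tm.setTextureFiles (tex_files);

  // Computes UV coordinates by stretching the texture between the mesh's
  // extreme points on both axes, producing the distortion described above.
  tm.mapTexture2MeshUV (mesh);
}
```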

ProTip: Don’t pretend to give four submeshes by dividing one mesh four times.

textureMeshwithMultipleCameras

The other function is textureMeshwithMultipleCameras, and in a predictable twist, it has been causing me even more trouble. It takes multiple cameras (their positions, angles, focal lengths, and image sizes) and uses those to reverse-project each camera’s image onto specific submeshes of your mesh. It isn’t particularly straightforward, but it’s powerful enough to be the basis for kinfu’s texturing, and that’s saying something. Kinfu, however, has the advantage of using PCL for I/O as well as reconstruction, giving it direct access to the Kinect camera objects when needed (and skipping the work of figuring out those values). That’s part of why kinfu works today while my code is stuck. For now, every mesh I make comes out in a flat shade of brown, with no texture in sight. Believe me, getting something to appear anywhere on there is my number one priority.
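Piecing together the kinfu code and those old forum posts, the setup seems to look something like the sketch below. Every value here is a placeholder (identity pose, guessed Kinect v2 color intrinsics, made-up file name) rather than real calibration data:

```cpp
#include <pcl/TextureMesh.h>
#include <pcl/surface/texture_mapping.h>
#include <Eigen/Geometry>

// Sketch only: placeholder pose and intrinsics, not real calibration.
void textureFromCameras (pcl::TextureMesh &mesh)
{
  pcl::texture_mapping::CameraVector cameras;

  pcl::texture_mapping::Camera cam;
  cam.pose = Eigen::Affine3f::Identity ();  // camera pose in world space
  cam.focal_length = 1081.37;               // rough Kinect v2 color focal (px)
  cam.center_w = 959.5;                     // principal point, width axis
  cam.center_h = 539.5;                     // principal point, height axis
  cam.width = 1920;                         // Kinect v2 color resolution
  cam.height = 1080;
  cam.texture_file = "cam_0.png";           // hypothetical image path
  cameras.push_back (cam);
  // ...push one Camera per device, each with its own pose and image...

  // The mesh goes in as a single submesh; the function sorts faces into
  // one submesh per camera, plus a final submesh for occluded faces.
  pcl::TextureMapping<pcl::PointXYZ> tm;
  tm.textureMeshwithMultipleCameras (mesh, cameras);
}
```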

Texture_0 in all its glory. A Simpsons moment comes to mind…

Next Steps

So I’ll be committing to this multi-camera texture mapping function. It has the best potential; it just has me up against a brick wall currently. That means posting on PCL’s community forums and continuing to experiment with it on my own until something meaningful comes out. I would switch to PCL’s I/O system for capture, but I haven’t seen anything yet to indicate that it can handle multiple Kinect v2 devices, and for now I prefer having libfreenect2 handle that for me.

If I can get texture mapping working here, though, even in some ugly form, that will be enough to move into the last (and easiest) step of rough videogrammetry playback: rendering everything out and playing it back in sequence. Aside from the slightly clunky file format everything currently exports in (obj + mtl), running these reconstructions and mappings for all frames is just a matter of setting up a loop, and once the results load in Unity, we’ve got an animated sequence. So here’s hoping Part 2 isn’t too far away, because everything gets a lot cooler once this texture mapping starts to work.
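That loop really is about as simple as it sounds. Here’s a sketch of what I have in mind, with a hypothetical reconstructAndTexture helper standing in for the whole per-frame pipeline (the stub and file naming are mine; only saveOBJFile is actual PCL):

```cpp
#include <sstream>
#include <pcl/TextureMesh.h>
#include <pcl/io/obj_io.h>

// Hypothetical stand-in for the per-frame reconstruction + texture mapping
// described above; the real pipeline's work would happen in here.
pcl::TextureMesh reconstructAndTexture (int frame)
{
  (void) frame;
  return pcl::TextureMesh ();  // placeholder empty mesh
}

int main ()
{
  const int frame_count = 300;  // placeholder sequence length

  for (int frame = 0; frame < frame_count; ++frame)
  {
    pcl::TextureMesh mesh = reconstructAndTexture (frame);

    // saveOBJFile writes the .obj plus a matching .mtl alongside it,
    // which is the per-frame pair Unity would then load in sequence.
    std::ostringstream path;
    path << "frames/mesh_" << frame << ".obj";
    pcl::io::saveOBJFile (path.str (), mesh);
  }
  return 0;
}
```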