This might be the last major breakthrough for the summer. Multi-camera texture projection is working now. I don’t yet have it intelligently controlling which camera applies to which faces (that comes next), but it’s worthy of a blog post even so. I’m actually a little surprised this was possible at all given how sparse the resources on the subject are in PCL, but it works now. This week’s post: how to get an almost undocumented feature like textureMeshwithMultipleCameras working!

One of the best pieces of documentation was user Bálint Kriván’s “wild guess”

Resources & Visualization

Did I say resources were sparse? The community had no advice to offer on this one, and only one file eventually helped: kinfu’s code once again, and even that was indirect. It had nothing relevant to creating your own transforms (position/rotation/scale) for the cameras, since it just pulls them from device I/O directly (PCL’s code talks to devices itself, unlike my dependence on libfreenect2). Getting good camera data before texturing was the main issue, then, and the real problem became knowing whether anything was working at all.
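For the record, here’s roughly the shape of the call I was trying to make work. This is a sketch rather than my actual code: a pcl::texture_mapping::Camera gets a texture file, resolution, focal length, and a pose, and a vector of those goes into textureMeshwithMultipleCameras. The file name and numbers below are placeholders.

```cpp
#include <pcl/point_types.h>
#include <pcl/TextureMesh.h>
#include <pcl/surface/texture_mapping.h>

// Sketch: build one texture-mapping camera per physical kinect and hand the
// whole set to textureMeshwithMultipleCameras. Values here are placeholders.
void textureWithMyCameras (pcl::TextureMesh &mesh)
{
  pcl::texture_mapping::CameraVector cameras;

  pcl::texture_mapping::Camera cam;
  cam.texture_file = "kinect_0.png";   // image captured by this camera (placeholder name)
  cam.width  = 1920;                   // texture resolution in pixels
  cam.height = 1080;
  cam.focal_length = 1000.0;           // focal length in pixels, not degrees

  // The camera's transform in world space - getting this right is the whole
  // story of this post (see the identity matrix section below).
  cam.pose = Eigen::Affine3f (Eigen::Translation3f (0.0f, 0.0f, -1.5f));

  cameras.push_back (cam);
  // ...push one Camera per physical kinect...

  pcl::TextureMapping<pcl::PointXYZ> tm;
  tm.textureMeshwithMultipleCameras (mesh, cameras);
}
```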

In all code, the first problem is to understand the problem. I need to remind myself of this more often than I’d like to admit, but whenever I do, it’s a much straighter path to a solution. So visualizing the problem became the core of this week’s work.

Mine was like this, except all those points on the left were -20 units offscreen in both directions. And all five thousand occupied the same spot. And nothing appeared. But otherwise fine!

Blender

So I learned how to edit UV maps in Blender this week to address that. It was a necessary step for judging how good or bad my UV maps were. The verdict: terrible. They were way off, with none of the vertices even landing on the texture. After downloading an example textured model online and comparing UV maps, I learned that this is what you get when the camera sees nothing at all. That pinned down where my problem was: not in referencing the textures, but in getting vertices onto the UV map, which comes down to what the camera can see.

Because compared to “nothing,” this isn’t a half-bad angle to project from!

The Visualizer’s Camera…

Doing more visualizing was actually not my next idea. Instead, I remembered that PCL’s visualizer has cameras of its own, so why not scrape those values and project from them instead of my (mostly made-up) Kinect values? And it almost worked! I pulled the position, left the rotation at all zeroes, and gave it an appropriate width/height/fov. And yet: nothing.
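The experiment looked something like the sketch below, simplified and written with hindsight (the identity pose is the fix from the identity matrix section further down; at the time I left the rotation at all zeroes). As far as I can tell, PCLVisualizer::getCameras hands back the render window’s cameras, and I copied the position and field of view over into a texture-mapping camera.

```cpp
#include <cmath>
#include <vector>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/surface/texture_mapping.h>

// Sketch of the "steal the visualizer's camera" experiment: pull the render
// window's camera and convert it into a texture-mapping camera.
pcl::texture_mapping::Camera cameraFromVisualizer (pcl::visualization::PCLVisualizer &vis)
{
  std::vector<pcl::visualization::Camera> vis_cams;
  vis.getCameras (vis_cams);
  const pcl::visualization::Camera &vc = vis_cams[0];

  pcl::texture_mapping::Camera cam;
  cam.width  = vc.window_size[0];
  cam.height = vc.window_size[1];
  // Convert vertical field of view to a focal length in pixels
  // (fovy should be in radians here, as far as I can tell).
  cam.focal_length = vc.window_size[1] / (2.0 * std::tan (vc.fovy / 2.0));

  // Identity first, then the translation - the missing piece at the time.
  cam.pose = Eigen::Affine3f::Identity ();
  cam.pose.translation () << vc.pos[0], vc.pos[1], vc.pos[2];
  return cam;
}
```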

It can- …it can actually work!? As intended??

…And A Camera Visualizer

No, it was visualization plus one more thing that actually solved the problem. The visualization part was discovering that kinfu has a camera visualizer function: it takes an array/vector of cameras and displays all of them in PCL’s visualizer alongside the real point clouds, as seen above. But none of that would have worked if I hadn’t also come across one final forum post about texture mapping, where user Bálint Kriván talked about setting the identity matrix (which he refers to as the “default” one) before giving the translation/position.
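I won’t reproduce kinfu’s visualizer here, but a rough stand-in looks like this sketch: drop a marker at each camera’s position and draw a short line along its viewing axis, so the cameras show up next to the point clouds. The marker sizes and colors are arbitrary.

```cpp
#include <sstream>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/surface/texture_mapping.h>

// Rough stand-in for a camera visualizer: a sphere at each camera position
// and a line pointing along that camera's local +Z (its viewing direction).
void showCameraPoses (pcl::visualization::PCLVisualizer &vis,
                      const pcl::texture_mapping::CameraVector &cameras)
{
  for (std::size_t i = 0; i < cameras.size (); ++i)
  {
    const Eigen::Affine3f &pose = cameras[i].pose;

    pcl::PointXYZ origin (pose.translation ().x (),
                          pose.translation ().y (),
                          pose.translation ().z ());

    // A point half a unit down the camera's viewing direction.
    Eigen::Vector3f ahead = pose * Eigen::Vector3f (0.0f, 0.0f, 0.5f);
    pcl::PointXYZ tip (ahead.x (), ahead.y (), ahead.z ());

    std::ostringstream id;
    id << "camera_" << i;
    vis.addSphere (origin, 0.05, 1.0, 0.0, 0.0, id.str () + "_pos");
    vis.addLine (origin, tip, 0.0, 1.0, 0.0, id.str () + "_dir");
  }
}
```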

Math saves lives

The Identity Matrix

What I learned right then and there was that my supposed rotation matrix (which hadn’t made much sense to me anyway as a 3x3 matrix, since I’d expected three Euler angles or a four-value quaternion) was really defining the axes that the translation happens along. Either way, leaving it all zeroes was destroying the basis of my coordinate system, and a straight diagonal of ones through the middle instantly solved almost all my problems. That diagonal of ones is the identity matrix: the default rotation, the default coordinate system for everything. The point clouds and the cameras were now all on screen.
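In code, Bálint’s advice boils down to something like this, where cam is the same texture-mapping camera as in the earlier sketches and the numbers are just an example:

```cpp
// Start from the identity - ones down the diagonal - so the camera has a
// valid basis, *then* set the translation. All zeroes gives no basis at all.
cam.pose = Eigen::Affine3f::Identity ();
cam.pose.translation () << 0.5f, 0.0f, -2.0f;  // example camera position in world space
```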

I mean, we can just project all three textures separately, right?

Final Tweaks

And there was much rejoicing! Since then I’ve switched the fov/height/etc. over to more official Kinect v2 values, and though it’s still not perfect, this might be the last big breakthrough needed. All that’s left for this first draft is controlling which faces get written to by which cameras, since procedurally blending texture edges is probably out of scope for the summer. Otherwise, the first draft is on schedule to be done before school starts (August 22nd), and at this rate the face merging and then bulk playback might actually be working in time. After that, it’s finding out what cool stuff this reflectionless free-viewpoint video capture of the world can do.