This blog is starting to turn into “big milestones only” - but luckily, I’m not out of big milestones yet. After some cleanup on last post’s renders, I found that a lack of points was the biggest factor keeping the models from looking better (aside from texturing, since there were so many holes), so I shifted focus to getting more devices running at higher framerates, since each new device is a big framerate hit. This week I finally got the framerate high enough to add another device! Above, you can see four Kinect 2.0s at 12fps on one computer, compared to what would have been 3fps before.

The power of depth video! (And macroblocks for Tumblr gif compression!)

Video.avi

So one of the biggest and most drastic shifts - still a work in progress, actually (let’s hope the next post isn’t about regressing) - was switching from capturing PCDs to capturing video frames to reconstruct later. That first part is done! The capture software now saves videos directly, meaning they can likely be compressed and shared much more easily too. This meant learning how to use OpenCV and hooking it into the library, but now at least I can say I’ve worked with the technology behind self-driving cars. Video likely only increased the framerate by about 50%, but it more than made up for that in consistency - PCL will still regularly (more than half the time) have devices fail to send correct PCDs, while video seems to always work.

Yeah, it’s kinda like that.
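If you’re curious what writing depth to video with OpenCV can look like, here’s a minimal sketch. To be clear, this is illustrative rather than the capture tool’s actual code: the lossless codec choice and the trick of packing 16-bit depth into two 8-bit channels are just my assumptions about one reasonable way to keep millimetre precision inside a normal video container.

```cpp
// Sketch: writing Kinect-style depth frames to a video file with OpenCV.
// Assumes 512x424 float depth in millimetres (libfreenect2-style buffers).
// Codec availability (FFV1 here) depends on the OpenCV/FFmpeg build.
#include <opencv2/opencv.hpp>
#include <cstdint>
#include <vector>

int main() {
    const int width = 512, height = 424;
    cv::VideoWriter writer("depth_capture.avi",
                           cv::VideoWriter::fourcc('F', 'F', 'V', '1'),  // lossless codec
                           12.0, cv::Size(width, height));

    // Fake a depth frame for the example; in the real capture loop this
    // would come from the device's float depth buffer.
    cv::Mat depthFloat(height, width, CV_32F, cv::Scalar(1500.0f));  // 1.5 m everywhere

    cv::Mat depth16;
    depthFloat.convertTo(depth16, CV_16U);  // keep millimetre precision

    // Pack the 16-bit depth into two 8-bit channels so a standard
    // 8-bit, 3-channel codec can carry it without losing precision.
    cv::Mat lo(height, width, CV_8U), hi(height, width, CV_8U);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            uint16_t d = depth16.at<uint16_t>(y, x);
            lo.at<uint8_t>(y, x) = static_cast<uint8_t>(d & 0xFF);
            hi.at<uint8_t>(y, x) = static_cast<uint8_t>(d >> 8);
        }
    }
    std::vector<cv::Mat> channels = {lo, hi, cv::Mat::zeros(height, width, CV_8U)};
    cv::Mat packed;
    cv::merge(channels, packed);

    writer.write(packed);
    writer.release();
    return 0;
}
```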

Multithreading

OpenMP was probably the biggest contributor - and thankfully it’s built right into VS15. Integration was surprisingly simple (shame on me for not using it sooner), and the performance boost was the most significant of the bunch. It likely doubled the capture framerate on the computers I’ve been using, and made it clear that video would even be worth pursuing (I had tackled threading first). I’m somehow using it in more places than I expected too - multithreading has been doing wonders all over this project. Distributing the processing across all the cores of a modern processor helps as much as you’d think it would.

This much power should not be this affordable.
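For anyone who hasn’t touched OpenMP: the integration really is about as simple as turning on the /openmp compiler flag in Visual Studio and putting a pragma above a loop. Here’s a minimal sketch of the parallel-for pattern I mean - the per-frame work and device count are placeholders, not the project’s actual code.

```cpp
// Sketch: spreading per-device frame processing across cores with OpenMP.
// Compile with /openmp (MSVC) or -fopenmp (gcc/clang).
#include <omp.h>
#include <cstdio>
#include <vector>

struct DeviceFrame { int deviceId; /* depth/color buffers would live here */ };

void processFrame(DeviceFrame& f) {
    // Stand-in for the real per-frame work (conversion, registration, encoding...).
    std::printf("processed frame from device %d on thread %d\n",
                f.deviceId, omp_get_thread_num());
}

int main() {
    std::vector<DeviceFrame> frames = {{0}, {1}, {2}, {3}};  // one frame per Kinect

    // Iterations run on separate threads, so with four devices and four or
    // more cores the per-frame work overlaps instead of running serially.
    #pragma omp parallel for
    for (int i = 0; i < static_cast<int>(frames.size()); ++i) {
        processFrame(frames[i]);
    }
    return 0;
}
```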

PCI-E Cards

A pair of $16 USB 3.0 cards then expanded the USB controller count on my current motherboard from two to four (each Kinect wants a USB 3.0 controller to itself), giving me the ability to run all four devices at once. The memory and disk write speeds aren’t half bad (likely thanks to the output being video now), but the CPU is now in full use across all cores. This machine uses an i7 5820K, which cpuboss claims is about a 30% performance boost over the minspec CPU for VR (an i5 4590 or so); by that math (12fps ÷ 1.3 ≈ 9fps), this software should still manage roughly 9fps capture on a minspec VR machine. Not perfect, but a good enough rate to keep targeting minspec as a platform. You can also get the $13 cards instead if you like, but I decided I could always use more USB ports.

How I hope to always travel in the near future. Maybe fewer wires tho.

Conclusion

So what was once 9fps for 3 devices (woulda been 3fps for 4) is now running at 12fps for 4. We’re basically running at four times the speed we previously were - which is fantastic. I also snuck in exporting the Kinect params as a function in that time, which we can hopefully use to project textures more cleanly onto future reconstructions. A lot of this is still just my best guess at how it all looks like it should work, and then seeing whether I’m right. So far so good at least?

tl;dr: no progress on rendering, TONS of progress on capturing.

And then?

Maybe microphone recording finally, though neither OpenCV nor libfreenect2 seems to support that, and proving that these videos are even worthwhile (i.e. that they can be converted back into useful PCDs from depth). Once I know this data can be used, I’m keen to see how much stuff we can capture with volumetric video. Oh, and testing that reprojection with the Kinect params to see if our textures can be less misaligned too. Someday it will all work!
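For reference, here’s a rough sketch of what “turning depth video back into PCDs” could look like: back-projecting a decoded depth frame through the camera intrinsics with PCL. The intrinsic values below are placeholders (the real ones would come from the exported Kinect params), so treat this as the shape of the idea rather than working project code.

```cpp
// Sketch: back-projecting one 16-bit depth frame into a PCL point cloud
// using pinhole intrinsics. Intrinsics here are placeholder values roughly
// in the range of a Kinect v2 depth camera, not the exported params.
#include <opencv2/opencv.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <cstdint>

int main() {
    const float fx = 365.0f, fy = 365.0f, cx = 256.0f, cy = 212.0f;  // placeholders

    // In practice this frame would be unpacked from the captured depth video.
    cv::Mat depth(424, 512, CV_16U, cv::Scalar(1500));  // millimetres

    pcl::PointCloud<pcl::PointXYZ> cloud;
    for (int v = 0; v < depth.rows; ++v) {
        for (int u = 0; u < depth.cols; ++u) {
            uint16_t d = depth.at<uint16_t>(v, u);
            if (d == 0) continue;                 // no depth reading at this pixel
            float z = d * 0.001f;                 // millimetres -> metres
            pcl::PointXYZ p;
            p.x = (u - cx) * z / fx;              // standard pinhole back-projection
            p.y = (v - cy) * z / fy;
            p.z = z;
            cloud.push_back(p);
        }
    }
    pcl::io::savePCDFileBinary("frame.pcd", cloud);
    return 0;
}
```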