Rapid space creation with new state-of-the-art Computer Vision & SLAM techniques.

I would like to introduce you all to SLAM, and some of the rapid advances that have been made in the field over the last 730 days. SLAM = Simultaneous Localization and Mapping. Essentially, it's how self-driving cars get around, how robots and drones get around, and how Google Maps works. The field of SLAM is very closely related to Computer Vision and Photogrammetry, which you're all becoming more well versed in now, creating things for Destinations. Recently I've taken a very hard interest in this, a tiny bit due to work (we use computer vision a bit in signage), and a tiny bit due to new things coming out from bleeding-edge companies working towards 9-axis VR movies.

Unfortunately, for most people it's time consuming and not exactly "easy" to make assets. Photogrammetry helps a little, but even with many of the commercial tools available it can be time consuming and resource intensive, and it doesn't always work or look the way you want it to. Also, general 3D design isn't all that easy yet, even with tools like Microsoft Paint 3D in the pipe.

I'm going to talk about two tools today. First: the incredible, free RTAB-Map. RTAB-Map is a FOSS (free and open source), multi-OS tool developed by IntRoLab at the University of Sherbrooke here in beautiful Canada. It's designed to rapidly create visual maps of spaces, MOSTLY for robot vision. However, it's also an absolutely INCREDIBLE tool for creating extremely rapid, accurate, and visually pleasing spaces with consumer hardware, such as a basic webcam, a depth scanner such as a Kinect for Xbox 360 or Xbox One, or even the new Lenovo Phab 2 with Google's Project Tango. It can even create a map from a PRERECORDED VIDEO, or a series of pictures (stereo or not), if the video/images are of high enough quality. It basically does a lot of the same things Agisoft PhotoScan does, but for free, and in REALTIME...
and uses some of the same back-end OpenCV and SLAM processes to do so. It uses a combination of rapid photogrammetry, a fusion-based approach with the depth sensor, and a self-correcting "loop closure" technique from SLAM to create a point cloud map of a space extremely rapidly. I would suspect that Valve's office lobby demo probably used some form of SLAM (if it didn't, it could have been so much easier, lol!).

Please check it out here: https://introlab.3it.usherbrooke.ca/mediawiki-introlab/index.php/RTAB-Map
And here: https://introlab.github.io/rtabmap/

Personally, I've been testing it with an Xbox 360 Kinect, and in less than 5 minutes, literally slowly waving it around, you can export a well-textured and fairly decent looking room scan (or outdoor scan!) as an OBJ and MTL file, with extremely accurate depth and scale mapping. With tools like Agisoft PhotoScan and MeshLab, you can combine these base meshes with higher resolution textures if you want, and create Destinations faster than you can say "Valve Time".

Higher quality results can be had by tweaking the settings in the software, the locale and visualization technique (such as ORB), or by using a better RGB-D scanner (like an Intel R200). However, the settings are overwhelmingly numerous, and I'm just getting started with it. I hope to have a full tutorial soon, but for those of you with a Kinect or one of the scanners listed on RTAB-Map's page, here's all you have to do:

1. Install the drivers for your Kinect. I recommend Kinect SDK 1.6 or 1.8 for the Kinect 360.
2. Install OpenNI2: http://structure.io/openni
3. Install RTAB-Map.
4. Open RTAB-Map, and let it create its defaults.
5. Press CTRL+S to save. Click "New" at the top to open a database.
6. With your Kinect connected, press the blue "play" button at the top.
7. SLOWLY walk around your room as far as the cord will allow (or if you have a laptop, even better) in a figure-8 pattern. Make 2 loops.
8. If you go too fast, the software will lose tracking and go "red".
Stop immediately, try to point the Kinect back at wherever it lost tracking, and slow down a little. For your first try, don't scan for too long or the export will take forever. Slowly walk back to your computer, or have a buddy click stop for you. You may have noticed that it was creating all of this imagery in real time. It's absolutely amazing.

Optional: click "Close database" at this step to save your work, then open it again.

Now, I find it's a good idea to click Tools > Post-processing after a capture; feel free to use the defaults. MeshLab has more methods for doing this, but it's a good first step.

Now, click File > "Export 3D clouds". There are a couple of bugs in the software right now that Matt is working on (this guy! https://www.youtube.com/watch?v=Bh8WZsU4YC8 ). So, a quick first textured result requires you to first click "dense point cloud", then click and un-click "cloud smoothing". Then choose organized point cloud. Make your settings look like this:

And hit save. Enjoy your super fast room scan! I still recommend cleaning things up in MeshLab or similar afterwards. Tada!

Similarly, if you have a Kinect or some other popular RGB-D sensor, I also suggest "ReconstructMe" for creating individual models of people and things: http://reconstructme.net/ This software uses an approach called "KinectFusion" plus "BundleFusion" to rapidly reproduce photogrammetric models. It can't work from a monoscopic webcam like RTAB-Map can, but RTAB-Map is more tuned for capturing at a distance than details up close.

Enjoy!
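P.S. for the curious: the core of what these RGB-D scanners do each frame — back-projecting a depth image into a metric 3D point cloud, which the exporter then writes out as an OBJ — fits in a few lines of Python. This is just a toy sketch of the pinhole back-projection idea, NOT RTAB-Map's actual code: the focal length and principal point below are ballpark Kinect-style values I've assumed (a real pipeline uses calibrated intrinsics, and a real exporter also writes faces, normals, and the MTL texture).

```python
# Toy sketch: back-project a depth buffer into 3D points via the pinhole
# camera model, then dump the points as a minimal Wavefront OBJ file.
# Intrinsics are ASSUMED ballpark values for a 640x480 Kinect-style sensor.
FX, FY = 525.0, 525.0   # focal lengths in pixels (assumed, not calibrated)
CX, CY = 319.5, 239.5   # principal point (assumed, not calibrated)

def depth_to_points(depth, width, height):
    """Back-project a row-major depth buffer (metres) into (x, y, z) points."""
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v * width + u]
            if z <= 0:          # 0 means "no reading" at this pixel; skip it
                continue
            x = (u - CX) * z / FX
            y = (v - CY) * z / FY
            points.append((x, y, z))
    return points

def write_obj(points, path):
    """Write vertex lines only; real exporters also emit faces and an .mtl."""
    with open(path, "w") as f:
        for x, y, z in points:
            f.write(f"v {x:.4f} {y:.4f} {z:.4f}\n")

# A 2x2 "depth image": three valid readings and one hole (the 0.0).
pts = depth_to_points([1.0, 2.0, 0.0, 1.5], width=2, height=2)
write_obj(pts, "scan.obj")
```

That `scan.obj` will open in MeshLab just like an RTAB-Map export (minus textures) — which is why getting the depth and intrinsics right is what gives these scans their accurate scale.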