Human Remote

For our master’s thesis at Group T Engineering College (Leuven – Belgium), SoftKinetic asked Jiang Lin and me to create something with 3D cameras that hadn’t been done before. SoftKinetic is a company that offers 3D cameras and middleware to process the camera data. The latter was of great importance to the success of this thesis, because it provided us with skeleton tracking. Without this, we would have had to implement it ourselves, leaving no time for other goodness!

So, what did SoftKinetic ask us to do? Let me guide you through the thought process. So far, 3D cameras mostly live in a living room, very close to a TV screen. They act as an interface device to play games, or to replace the remote for a movie. The camera exists in a very limited space, which is a shame! So, what else is there to do in a living room? It doesn’t take much to come up with home automation! Combining the two is what Human Remote is all about. It is a system that lets you interact with (home) appliances without touching a remote, without looking at a screen and without trying to snap your fingers or whistle in different ways. The main idea was not really the 3D interaction, but rather the ‘no screen’ aspect. If you don’t believe us when we say that visual feedback is a necessity, try to play a 3D game blindfolded. So that is where we came in!

Testing gestures

With that idea, we started in early July 2011 with preliminary research. What was available, what was possible, where did we go next? It took us a couple of months to compose a document with multiple proposals, ranging from a simple projector to a laser projector. All of them had advantages and disadvantages, but in the end, one proposal was chosen. Our feedback device consisted of two stepper motors in a pan-tilt configuration, with a laser pointer attached to the upper motor. This provided full 360° coverage of a room with one single device. To provide the necessary feedback to the user, the laser simply followed wherever your hand went. It actually pointed at the point where you were pointing at (that’s a fun line to say).
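To give an idea of the math involved, here is a minimal sketch of how a 3D target point can be converted into pan and tilt angles for the two motors. The function name, the coordinate convention (x/y horizontal, z up) and the device-at-origin default are my own assumptions for illustration, not the actual thesis code:

```python
import math

def pan_tilt_angles(target, device_pos=(0.0, 0.0, 0.0)):
    """Compute (pan, tilt) in degrees to aim a laser mounted at
    device_pos toward a 3D target point. Axes: x/y horizontal, z up."""
    dx = target[0] - device_pos[0]
    dy = target[1] - device_pos[1]
    dz = target[2] - device_pos[2]
    pan = math.degrees(math.atan2(dy, dx))                    # rotation around the vertical axis
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))   # elevation above the horizontal plane
    return pan, tilt
```

The pan motor then rotates the whole head around the vertical axis, and the tilt motor raises or lowers the laser; converting the angles into stepper steps is a matter of multiplying by steps-per-degree.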

Skeleton calculated by IISU middleware

In between this device and the computer was some more mumbo-jumbo (PIC microcontroller; FT232RL using the D2XX mode – really, check it, it’s a lot better than the VCP; Pololu stepper motor drivers). On the computer we had software running that acquired data from IISU (that’s the middleware from SoftKinetic), and processed this data further. For the programmers out there: how the hell does that laser know where it should point? Well, you might have guessed it: the software does need to know where the walls and objects are.
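Given a rough model of the room, the core operation boils down to intersecting the user’s pointing ray (from the skeleton data) with a wall plane. This is a generic ray-plane intersection sketch, not the thesis implementation; the function and parameter names are mine:

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_wall_hit(origin, direction, wall_point, wall_normal, eps=1e-9):
    """Return the 3D point where a ray (e.g. shoulder-to-hand) hits a
    wall plane, or None if the ray is parallel to or pointing away from it."""
    denom = _dot(direction, wall_normal)
    if abs(denom) < eps:
        return None  # ray runs parallel to the wall
    diff = tuple(w - o for w, o in zip(wall_point, origin))
    t = _dot(diff, wall_normal) / denom
    if t <= 0:
        return None  # the wall is behind the user
    return tuple(o + t * d for o, d in zip(origin, direction))
```

Running this against every wall (and object face) in the room model and keeping the nearest hit gives the point the laser should be steered to.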

Jiang Lin controlling the system

Our user test installation

There is not much more that we can release to the public yet because of an NDA, but a scientific publication is planned. However, we think you can guess quite a few parts already. As soon as we can, we will release the video material that shows the actual operation. And even though filming laser dots is quite a challenge, it is still visible.

So, that’s what we had done before the exams. But less than a week before the defense, I decided I was bored and wanted to add something extra. It’s always a big gamble to do this so close to the deadline, but we don’t fear a little uncertainty 🙂 So far, the system could interact with appliances, but only through changing a variable. It didn’t take that much time to make a system that could actually toggle a 230V light by interacting with the software. The video I made isn’t edited at all, but you’ll get the picture!