

The smart home future is rapidly arriving: the personal home is getting connected. Lamps, fridges, toasters, stereos; these days everything is available with an internet connection that allows different devices to be controlled and networked from a central hub, usually a smartphone app.



I have been working on connecting lamps, LED strips and a bunch of sensors in my room for years now. When I come into my room, the lights turn on automatically with different settings depending on the weather and time of day, and when I leave home, they turn off again on their own. However, I have always had one issue with the whole setup: a good old light switch is much faster and more natural to use than pulling out my phone, opening an app and toggling a switch.

This is why I have been exploring more natural and intuitive ways of interacting with connected appliances.





Introducing IOT-KINECT

The first step was an application written in Processing that interfaces with a Kinect depth sensor and any IoT hub capable of receiving commands via HTTP requests or WebSockets (in my case that is the brilliant and open-source Home Assistant).
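To give a sense of what that looks like on the wire: toggling a light in Home Assistant is a single authenticated HTTP call against its REST API. The host, access token and entity name below are placeholders, so treat this as a minimal sketch rather than the exact code IOT-KINECT uses:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HaToggle {
    public static void main(String[] args) throws Exception {
        // Hypothetical host, token and entity id -- replace with your own.
        String host  = "http://homeassistant.local:8123";
        String token = "YOUR_LONG_LIVED_ACCESS_TOKEN";
        String body  = "{\"entity_id\": \"light.desk_lamp\"}";

        // Home Assistant exposes services under /api/services/<domain>/<service>;
        // light/toggle flips the given entity on or off.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(host + "/api/services/light/toggle"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```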

Vectors calculated by IOT-KINECT

So-called “actuators” are defined on a virtual wall spanning the plane of the Kinect’s field of view. The application takes two points from a detected user’s skeleton: the right elbow and the right wrist. It calculates the vector between the two points and multiplies it by three, which creates an “extended arm” of sorts: a point in 3D space that follows where the user is pointing. From that point, another vector is dropped onto the aforementioned virtual wall at a 90-degree angle. Whenever an external trigger comes in (in this case, a hand-closing gesture), the application checks that vector for collisions with any of the actuators defined on the virtual wall. If a collision is found, the application tells the smart home hub to toggle the light the user was pointing at during the trigger.
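The geometry itself is just a handful of vector operations. Here is a minimal, self-contained sketch of that calculation, assuming the virtual wall lies parallel to the Kinect's x/y plane; the joint coordinates and the actuator rectangle are invented for illustration:

```java
// A minimal sketch of the pointing math described above. Coordinates,
// the wall position and the actuator rectangle are made up; the real
// IOT-KINECT values come from the Kinect's skeleton tracking.
public class PointingDemo {

    // Tiny 3D vector helper so the example stays self-contained.
    static class Vec3 {
        final float x, y, z;
        Vec3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
        Vec3 sub(Vec3 o)    { return new Vec3(x - o.x, y - o.y, z - o.z); }
        Vec3 add(Vec3 o)    { return new Vec3(x + o.x, y + o.y, z + o.z); }
        Vec3 scale(float s) { return new Vec3(x * s, y * s, z * s); }
    }

    // An actuator is a rectangle on the virtual wall (the x/y plane).
    static class Actuator {
        final String name; final float minX, maxX, minY, maxY;
        Actuator(String name, float minX, float maxX, float minY, float maxY) {
            this.name = name; this.minX = minX; this.maxX = maxX;
            this.minY = minY; this.maxY = maxY;
        }
        boolean contains(float px, float py) {
            return px >= minX && px <= maxX && py >= minY && py <= maxY;
        }
    }

    public static void main(String[] args) {
        // Joint positions as they might come from the skeleton tracker (metres).
        Vec3 elbow = new Vec3(0.10f, 1.20f, 2.00f);
        Vec3 wrist = new Vec3(0.25f, 1.30f, 1.75f);

        // "Extended arm": the elbow->wrist vector, scaled by three.
        Vec3 pointer = elbow.add(wrist.sub(elbow).scale(3.0f));

        // Drop the pointer perpendicularly onto the virtual wall: keep x/y, ignore z.
        float px = pointer.x, py = pointer.y;

        Actuator ceilingLamp = new Actuator("ceiling_lamp", 0.3f, 0.9f, 1.2f, 1.8f);
        if (ceilingLamp.contains(px, py)) {
            System.out.println("Trigger would toggle: " + ceilingLamp.name);
        } else {
            System.out.println("Pointing at nothing (" + px + ", " + py + ")");
        }
    }
}
```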







I control the Force.

This allowed me to turn lights in my room on and off just by gesturing at them. A very swift, natural and easy way to interface with smart appliances without even touching them.







Next up: Brainy Things

But a hand gesture really wasn’t cutting-edge enough for my taste. So why not incorporate brain waves?



Whenever you blink, a large (it’s actually tiny, but relatively large) electrical current is created by the muscles surrounding your eyes. Using a Brain-Computer Interface (BCI), a device with a varying number of electrodes that pick up electrical activity in your brain, that current is easily detected as a large spike.
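As a toy illustration (not what the final setup uses, since the headset described below reports blinks itself): spotting such a spike can be as simple as comparing each sample against a threshold well above normal background activity. The sample values and the threshold here are made up.

```java
// Toy illustration: an eye blink shows up as a sample whose amplitude is far
// above the usual EEG background. Values and threshold are invented.
public class BlinkSpike {
    public static void main(String[] args) {
        double[] samples = { 3, -2, 5, 1, -4, 180, 210, 95, -3, 2 }; // fake data, microvolts
        double threshold = 100.0; // arbitrary cut-off well above normal activity

        for (int i = 0; i < samples.length; i++) {
            if (Math.abs(samples[i]) > threshold) {
                System.out.println("Blink-like spike at sample " + i + ": " + samples[i]);
            }
        }
    }
}
```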









All you really are is a little bio-electric activity in a bunch of strange white goop. Deal with it.

Usually, that spike is an artifact that needs to be filtered out in order to read actual brain activity more cleanly. In my case, though, that spike was all I needed.

After testing a number of different BCI technologies (including a massively overkill 3D-printed open-source solution), I ended up with the NeuroSky MindWave headset. It’s a really simple device with only a few electrodes, one of them on the forehead. A small companion application called the ThinkGear Connector (they are really good at naming these) runs a TCP server that broadcasts a bunch of information about brain activity, as well as a BLINK event every time the user blinks their eyes.

I modified IOT-KINECT to pick up these BLINK packets and use them as a trigger instead of the hand gesture. The result is functionality frequently interpreted as black magic.
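A trigger client for this takes only a few lines. The sketch below follows NeuroSky's ThinkGear Socket Protocol as I remember it; the port number, the configuration payload and the blinkStrength field name should be double-checked against the current documentation:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Minimal client for the ThinkGear Connector's local TCP server.
public class BlinkListener {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("127.0.0.1", 13854);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {

            // Ask the connector for parsed JSON instead of raw EEG samples.
            out.println("{\"enableRawOutput\": false, \"format\": \"Json\"}");

            String line;
            while ((line = in.readLine()) != null) {
                // Each line is one JSON object; blink events carry "blinkStrength".
                if (line.contains("\"blinkStrength\"")) {
                    System.out.println("BLINK detected -> fire IOT-KINECT trigger: " + line);
                }
            }
        }
    }
}
```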





The exhibition





As part of my university’s RUNDGANG exhibition, I set up a prototype installation of the system.

I used the IoT app Blynk and two ESP8266 microcontrollers to build two Wi-Fi-enabled lamps.
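From the outside, switching such a Blynk-connected lamp boils down to a single HTTP request against the Blynk cloud. The sketch below uses the legacy Blynk HTTP API with a made-up auth token and virtual pin, so take it as an illustration of the idea rather than a drop-in snippet:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BlynkToggle {
    public static void main(String[] args) throws Exception {
        // Hypothetical auth token and virtual pin; the lamp's ESP8266 firmware
        // listens on that pin and switches its light accordingly.
        String token = "YOUR_BLYNK_AUTH_TOKEN";
        String pin   = "V1";
        String value = "1"; // 1 = on, 0 = off

        // The (legacy) Blynk cloud exposed a simple HTTP endpoint for writing pins.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://blynk-cloud.com/" + token
                        + "/update/" + pin + "?value=" + value))
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Blynk responded with HTTP " + response.statusCode());
    }
}
```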









If there’s one thing to say about the places I work at, it’s that they are always breathtakingly tidy and organized.









Most important component being the bunch of cables.

Once the two lamps were fully functional, I modified IOT-KINECT again to communicate with the Blynk Cloud, allowing it to toggle those two lamps on and off. I then built a little exhibition room using black cloth and a few tables, beautifully hid the Kinect’s cables, and voilà:









Fantastic.

This project was by far the most interesting of my IoT endeavours to this day. I learned a lot about brain waves, real-space interaction and gesturing. However, it is far from perfect. Right now, because all positions are measured relative to the single Kinect, the user has to be in roughly the same spot every time they gesture at actuators. The larger the actuators are defined in IOT-KINECT, the larger the tolerance for that spot becomes; however, the likelihood of false triggers grows too. By adding more Kinects to the space and defining actuators as actual 3D objects in virtual space, the interaction could work regardless of the user’s physical location.
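If actuators were modelled as boxes in 3D space, the trigger check would turn into a ray-versus-box test along the extended-arm direction instead of a lookup on a single wall. A rough sketch of that test (the classic slab method, with invented coordinates):

```java
// Sketch of how the trigger check could look with actuators as real 3D boxes:
// cast a ray from the elbow along the elbow->wrist direction and test it
// against an axis-aligned box (slab method). All coordinates are invented.
public class RayBoxDemo {

    // Returns true if the ray origin + t*dir (t >= 0) hits the box.
    static boolean hitsBox(float[] origin, float[] dir, float[] boxMin, float[] boxMax) {
        float tMin = 0f;                       // the ray starts at the origin point
        float tMax = Float.POSITIVE_INFINITY;  // and extends forward indefinitely
        for (int axis = 0; axis < 3; axis++) {
            if (Math.abs(dir[axis]) < 1e-6f) {
                // Ray is parallel to these slabs; it misses unless it starts between them.
                if (origin[axis] < boxMin[axis] || origin[axis] > boxMax[axis]) return false;
            } else {
                float t1 = (boxMin[axis] - origin[axis]) / dir[axis];
                float t2 = (boxMax[axis] - origin[axis]) / dir[axis];
                tMin = Math.max(tMin, Math.min(t1, t2));
                tMax = Math.min(tMax, Math.max(t1, t2));
                if (tMin > tMax) return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        float[] elbow = { 0.1f, 1.2f, 2.0f };
        float[] wrist = { 0.25f, 1.3f, 1.75f };
        float[] dir = { wrist[0] - elbow[0], wrist[1] - elbow[1], wrist[2] - elbow[2] };

        // A lamp modelled as a 3D box somewhere in the room (metres).
        float[] boxMin = { 0.5f, 1.4f, 0.8f };
        float[] boxMax = { 0.9f, 1.8f, 1.2f };

        System.out.println("Pointing at the lamp: " + hitsBox(elbow, dir, boxMin, boxMax));
    }
}
```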

In addition, by applying machine learning to higher-resolution brain interface data, along with individual training for a given user, the number of possible hands-free commands could be increased substantially. So-called “mental commands” are certain patterns in one’s brain activity that an algorithm has been trained to recognize as a given command. When you think of the color red, a similar pattern can be read each time; a model trained to recognize that pattern, along with the ones for other colors, could for example allow you to point at an LED strip in your home and tint it with your mind alone.

Either way, I think that in the future, after the IoT bubble has expanded to a point where every pen you hold sports 5 GHz Wi-Fi connectivity, the industry is likely to focus on interaction instead of volume. And that’s when things are gonna get real interesting.