The video was created by projection-mapping generative visuals built in Quartz Composer, using a Kinect camera for depth sensing combined with a DSLR for video capture to mask between layers. The event sequencing was arranged in Ableton Live to control the compositions running in VDMX, and an Arduino board running TinkerKit bridged the software and the lighting control. Brian provides more insight with a flow chart, description and additional photos below.

“I used a Kinect in conjunction with a slightly modified version of the 1024_architecture MAD_kinectMasker QC patch to create a mask layer in VDMX. Then, in VDMX, on the "background" layer group (which was actually the top-most layer in VDMX), I applied an alpha mask effect using the image from the Kinect as the image mask, which cut out the "foreground" layer group (the images projected onto Steven) to make it look like that layer was on top of the other one. For the MIDI-to-OSC conversion within Ableton, I used LiveGrabber.”
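The depth-masking trick can be sketched in a few lines: threshold the Kinect depth image into a binary mask, then use that mask to pick between two layers. This is only an illustration of the idea, not the actual Quartz Composer patch; the image sizes, depth values, and threshold band here are invented.

```python
# Sketch of depth-based layer masking (illustrative only, not the QC patch).
# Images are tiny grayscale grids (lists of lists) to keep the example small.

def depth_to_mask(depth, near, far):
    """1 where the depth reading falls inside the near/far band, else 0."""
    return [[1 if near <= d <= far else 0 for d in row] for row in depth]

def composite(fg, bg, mask):
    """Pick the foreground pixel wherever the mask is 1, else the background."""
    return [[f if m else b for f, b, m in zip(fr, br, mr)]
            for fr, br, mr in zip(fg, bg, mask)]

# A "performer" at roughly 1.2 m in front of a wall at roughly 3.0 m:
depth = [[3.0, 1.2, 3.0],
         [1.2, 1.2, 3.0]]
fg = [[255, 255, 255],
      [255, 255, 255]]   # the layer projected onto the performer
bg = [[0, 0, 0],
      [0, 0, 0]]         # the background layer

mask = depth_to_mask(depth, near=0.5, far=2.0)
print(composite(fg, bg, mask))  # → [[0, 255, 0], [255, 255, 0]]
```

In the real setup the mask is applied as an alpha channel on the top layer group rather than composited per pixel in software, but the selection logic is the same.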
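LiveGrabber handles the MIDI-to-OSC conversion inside Ableton. As a rough sketch of what such a bridge puts on the wire, here is a minimal OSC 1.0 message encoder using only the standard library; the address and the CC rescaling are made-up examples, not LiveGrabber's actual output.

```python
# Minimal OSC 1.0 message encoding (stdlib only). The address "/track1/fader"
# and the 0..1 rescaling of a MIDI CC value are illustrative assumptions.
import struct

def osc_string(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, value):
    """Encode a single-float OSC message: address, ',f' type tag, big-endian float32."""
    return osc_string(address) + osc_string(",f") + struct.pack(">f", value)

# e.g. a MIDI CC value of 64 rescaled to 0..1 for an imagined VDMX fader address:
msg = osc_message("/track1/fader", 64 / 127)
```

A datagram like `msg` would then be sent over UDP to the receiving application's OSC port.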