Although it’s much beloved by hackers and artists, the Kinect hasn’t really caught on as a tool for photographers and filmmakers. (Or gamers, alas.) That’s sort of odd when you think about it, because at heart the Kinect is a camera, and more than that, a camera with some huge, transformative possibilities when it comes to the way films are made. We’re already using the Kinect to augment reality. Why not let it augment filmmaking too?

NextStage is a bit of software that begins to tap the Kinect’s possibilities as a filmmaking device. It essentially hacks a Kinect so that it can be used to easily insert 3D footage into real scenes, separate live-action subjects from the background without a green screen, quickly rotoscope actors and objects, and more.

It’s not that filmmakers can’t already do these things, of course. But they’re time consuming. Take something we see in Hollywood movies all the time: a CGI character or background interacting with a real person or scene. In real life, that takes laborious, expensive frame-by-frame post-processing to get right, but NextStage can make it possible for amateurs on a shoestring budget, just by tethering their Kinect to the camera. Even green screens, a staple of every small-town television station, can be made better with the aid of a Kinect, by using NextStage as an edge-refining garbage matte.
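The core idea behind green-screen-free separation is depth keying: because the Kinect reports a distance for every pixel, you can keep only the pixels that fall within the range where your subject is standing, no colored backdrop required. Here’s a minimal sketch of that idea in Python with NumPy; the `depth_key` helper and its thresholds are hypothetical illustrations, not NextStage’s actual code.

```python
import numpy as np

def depth_key(frame, depth_mm, near=500, far=1500):
    """Zero out background pixels using a depth map.

    frame:    H x W x 3 uint8 color image
    depth_mm: H x W per-pixel depth in millimeters (Kinect-style)
    near/far: keep pixels whose depth falls inside [near, far)
    """
    # Foreground = everything within the chosen depth band.
    mask = (depth_mm >= near) & (depth_mm < far)
    # Broadcast the 2D mask across the color channels; background goes black.
    return frame * mask[:, :, np.newaxis].astype(np.uint8)

# Toy 2x2 frame: one "actor" pixel at 1000 mm, three background pixels at 3000 mm.
frame = np.full((2, 2, 3), 200, dtype=np.uint8)
depth = np.array([[1000, 3000],
                  [3000, 3000]])
keyed = depth_key(frame, depth)
# keyed[0, 0] keeps its color; the other three pixels are zeroed.
```

A real implementation would also refine the mask edges (the “garbage matte” role mentioned above), since raw Kinect depth is noisy at silhouette boundaries.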

NextStage isn’t necessarily going to revolutionize filmmaking in its own right. Watching the videos, you can see some rough edges around the NextStage app’s effects. But for amateurs trying to make effects-laden films on a budget, the $80 app seems like it has a lot of potential.

Ultimately, though, what’s interesting to me is how it highlights the possible future of filmmaking: one in which our film cameras record not just light, but depth as well. We might not be there quite yet, but the day is coming when Kinect-like technology could very well let us combine physical and virtual actors, essentially in-camera.