What’s better than Street Fighter II? Motion-capture Street Fighter II. Nine months after actually making this demo, it’s finally up on the tubes.

2 player Street Fighter II with no controllers: just two Kinects. (Also a bonus for Mortal Kombat fans)

A heap of fun to make - both the demo and the vid. A few of you might be interested in the technical stuff below.

Features:

You move, Ryu moves. Left/right/jump/duck all work great.

High/med/low punch and kick (takes a little practice to differentiate between, say, med and low punch)

Special moves for Ryu/Ken and Guile: fireball (haduken!), dragon punch, and the spinning kick thing for Ryu/Ken; sonic boom and backflip kick for Guile. Could easily add more, but haven’t yet… Some need practice, but I guess the same goes for learning them on the keyboard.

2 players!

The Tech:

Nothing too crazy here…

Two Kinect cameras (actually, a Primesense PSDK (or whatever it’s called) and a Kinect) connected to the same PC.

OpenNI framework with Primesense’s NITE library to perform the skeleton tracking

Some “carefully chosen heuristics” (read: hacks… see below) to detect moves

Events are sent to the game when moves are detected

Runs at full 30fps on a low-end i5.
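To show the shape of that pipeline, here’s a tiny sketch: skeleton frames come in, a detector turns them into move events, and the events get translated into synthetic key presses for the game. Everything here (names, event types, key bindings) is hypothetical - it’s not the demo’s actual code.

```python
# Hypothetical sketch of the event pipeline: skeleton frames -> move
# detection -> key events for the game. Names and key bindings are
# made up; the real demo used OpenNI/NITE for the skeleton input.

from dataclasses import dataclass
from typing import List

@dataclass
class MoveEvent:
    player: int   # 1 or 2
    move: str     # e.g. "jump", "haduken"

# Made-up key bindings per player, as an emulator might expect them.
KEYMAP = {
    1: {"left": ["a"], "right": ["d"], "jump": ["w"],
        "haduken": ["s", "d", "j"]},            # down, forward, punch
    2: {"left": ["left"], "right": ["right"], "jump": ["up"],
        "haduken": ["down", "right", "kp1"]},
}

def to_keystrokes(event: MoveEvent) -> List[str]:
    """Translate a detected move into the key sequence to inject."""
    return KEYMAP[event.player][event.move]
```

The nice thing about this split is that the game never knows about cameras at all - it just sees key presses.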

Gameplay:

It feels great to play!

It’s quite responsive - the small lag isn’t really noticeable - e.g. when you jump, your character jumps well before you hit the ground.

Some moves take a little practice - it’s frustrating when they don’t work, and you look pretty stupid too - but hey, even the Karate Kid spent weeks waxing cars before he was any good. Sometimes it just won’t pick up what you’re doing, or it’ll think you’re doing something you’re not. Tuning would fix a lot of this, but some of it can’t be avoided. See below.

Because it’s my code, it’s really hard to tell what’s genuinely easy to do and what I’ve just trained myself to do. But Chris picked up the basics in no time at all, which was awesome. Haduken works great, but Guile’s moves aren’t as intuitive because you need to hold a position for 2 secs - they become two-part moves with a pause in between.
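That two-part structure is basically a tiny state machine: hold the charge pose, let a timer run, and only once it’s been held long enough does the follow-up motion fire the move. A minimal sketch (all names and thresholds here are invented, not the demo’s code):

```python
# Hypothetical sketch of a "charge" move detector (Guile-style):
# hold a pose for ~2 seconds, then a second motion triggers the move.

HOLD_SECONDS = 2.0  # threshold taken from the write-up's description

class ChargeMoveDetector:
    def __init__(self, hold_seconds=HOLD_SECONDS):
        self.hold_seconds = hold_seconds
        self.charge_start = None  # when the charge pose began

    def update(self, t, in_charge_pose, did_trigger_motion):
        """Feed one frame; returns True when the move should fire.

        t: timestamp in seconds
        in_charge_pose: is the player holding the charge pose?
        did_trigger_motion: did the follow-up motion happen this frame?
        """
        if in_charge_pose:
            if self.charge_start is None:
                self.charge_start = t
            return False
        charged = (self.charge_start is not None
                   and t - self.charge_start >= self.hold_seconds)
        self.charge_start = None  # pose released, charge is spent
        return charged and did_trigger_motion
```

The awkward pause players feel is right there in the code: nothing can fire until the timer has run its course.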

Some moves are physically impossible (looking at you, Chun Li), so a big part of the job was choosing a gesture which a) matched the character and b) was detectable by the Kinect. I also wondered about the best way to do left and right movement - detecting ‘leans’ instead of moving the whole body might feel nicer but would have more false positives.

Move Detection:

Lots of issues here…

First, the cameras (and the skeleton tracking library) aren’t perfect - pose estimation (especially with strange/occluded poses like you get here) ain’t easy. Microsoft’s Kinect algorithms are better - would be interesting to try these out with the new SDK.

Then, there’s the pose matching algo. Ideally we’d hire some lackeys to make a library of a few hundred thousand examples, put on our fancy AI hats and train our pose/gesture classifier - that’s how Microsoft made their skeleton tracking so good - but let’s leave that sort of thing to the multinationals.

It turns out that the simplest solution works pretty well - good ol’ heuristics. Simple low-pass filters and hand-crafted rules. “When in standing state, if both hands are close to the body and then quickly move towards opponent then send haduken command”. More time nudging here and there would make it even better.
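As a concrete example of that style of rule - and to be clear, this is an invented sketch with made-up thresholds, not the demo’s actual code - smooth the joint positions with a simple exponential low-pass filter, then fire when the hands go from near the torso to moving quickly toward the opponent:

```python
# Hypothetical sketch of the heuristic style described above:
# an exponential low-pass filter on joint positions, plus a
# hand-crafted rule for a haduken. All thresholds are made up.

NEAR_BODY = 0.15   # metres: "hands close to the body"
FAST = 1.0         # metres/second toward the opponent

def lowpass(prev, new, alpha=0.5):
    """Simple exponential smoothing to tame skeleton jitter."""
    return alpha * new + (1 - alpha) * prev

def is_haduken(was_near_body, hand_speed_toward_opponent):
    """Fire when the hands WERE near the torso and are NOW
    thrusting forward quickly."""
    return was_near_body and hand_speed_toward_opponent > FAST

def detect(frames):
    """frames: list of (hand_dist_from_torso, speed_toward_opponent).
    Returns the move events fired over the sequence."""
    events = []
    smoothed = frames[0][0]
    was_near = smoothed < NEAR_BODY
    for dist, speed in frames[1:]:
        smoothed = lowpass(smoothed, dist)
        if is_haduken(was_near, speed):
            events.append("haduken")
        was_near = smoothed < NEAR_BODY
    return events
```

The filter matters: raw Kinect joints jitter enough that a speed threshold on unsmoothed data would fire constantly.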

It’d be fun to implement a simple Hidden Markov Model and see if that can improve results without needing ridiculous resources. Perhaps some educated guesses on state transition there could be successful… but that’s pretty akin to optimising the heuristics we’re using in the first place.
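For what it’s worth, that HMM idea might look something like this sketch (all probabilities here are invented, purely for illustration): the hidden states are gesture phases, the observations are coarse pose labels, and Viterbi decoding picks the most likely phase sequence.

```python
# Hypothetical HMM sketch for the idea above: hidden gesture phases,
# observed coarse pose labels, Viterbi decoding. Every probability
# here is an invented "educated guess", not tuned on real data.

STATES = ["idle", "windup", "thrust"]

# Guessed transition probabilities (each row sums to 1).
TRANS = {
    "idle":   {"idle": 0.8, "windup": 0.2, "thrust": 0.0},
    "windup": {"idle": 0.2, "windup": 0.5, "thrust": 0.3},
    "thrust": {"idle": 0.7, "windup": 0.0, "thrust": 0.3},
}
# Emission probabilities: P(observed pose label | hidden phase).
EMIT = {
    "idle":   {"neutral": 0.8, "hands_in": 0.1, "hands_out": 0.1},
    "windup": {"neutral": 0.1, "hands_in": 0.8, "hands_out": 0.1},
    "thrust": {"neutral": 0.1, "hands_in": 0.1, "hands_out": 0.8},
}
START = {"idle": 1.0, "windup": 0.0, "thrust": 0.0}

def viterbi(observations):
    """Most likely sequence of gesture phases for the observations."""
    probs = {s: START[s] * EMIT[s][observations[0]] for s in STATES}
    paths = {s: [s] for s in STATES}
    for obs in observations[1:]:
        new_probs, new_paths = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: probs[p] * TRANS[p][s])
            new_probs[s] = probs[prev] * TRANS[prev][s] * EMIT[s][obs]
            new_paths[s] = paths[prev] + [s]
        probs, paths = new_probs, new_paths
    best = max(STATES, key=lambda s: probs[s])
    return paths[best]
```

A “haduken” would then just be the detector seeing the phase sequence pass through windup into thrust - which, as noted, isn’t far from what the hand-written rules already encode.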

Can you do it with one Kinect?

Probably, yes. I didn’t, because:

my poor computer couldn’t handle 2 skeletons in the one image at 30fps (two separate processes with one skeleton each worked much better)

the room wasn’t deep enough to fit two people in the one frame without them punching each other’s lights out

I had 2 Kinects, and wanted to try them out!

These aren’t big obstacles. It seems that OpenNI doesn’t perform so well with two simultaneous skeletons, but I only did a single quick-and-dirty test of this - tweaking OpenNI might well fix that. Also, switching over to the Microsoft Kinect SDK (released since I made the demo) might solve this too…
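The one-skeleton-per-process setup is just the usual worker pattern: one worker per camera, each pushing detected move events onto a shared queue that the game loop drains. Sketched here with threads purely to keep it self-contained (the real thing was two separate OS processes, and every name below is made up):

```python
# Hypothetical sketch of the per-camera worker pattern: one worker
# per Kinect, each pushing (player, move) events onto a shared queue.
# The demo used separate OS processes; threads keep this sketch small.

import queue
import threading

events = queue.Queue()

def camera_worker(player, detected_moves):
    """Stand-in for one camera's capture + skeleton + detection loop.
    Here it just replays a canned list of moves."""
    for move in detected_moves:
        events.put((player, move))

# One worker per camera/player.
t1 = threading.Thread(target=camera_worker, args=(1, ["jump", "haduken"]))
t2 = threading.Thread(target=camera_worker, args=(2, ["left"]))
t1.start(); t2.start()
t1.join(); t2.join()

# The game loop would drain the queue each frame.
drained = []
while not events.empty():
    drained.append(events.get())
```

With real processes you’d swap the queue for a pipe or socket, but the shape is the same: the game loop never blocks on a camera.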

Thoughts on Hardware:

Getting two Kinects to play nicely with OpenNI was a bit of fun, but once the .xml rubbish was thrown out I was on the right track. Hopefully OpenNI will find a way to better enumerate the devices (I wonder if they have unique identifiers for each device yet?)

Once working, it all trundles along pretty well - my i5 processor takes quite a beating but it still runs at the full 30fps. In its current state it couldn’t handle running a modern game (say, SF4) on top of it all but I’m sure there’s plenty to be optimised.

Can I Play??

This is really raw software… Not really ready for public consumption.

No plans to release it yet - anything that is released probably won’t be very user-friendly or well tested. Not for the faint of heart.

But hey, if you’re keen then give me a bell. Or come over to my place and I’ll give you a demo.

Like this? See the Kinect-controlled robot arm which delivers you chocolate or the 5 classic arcade games remade with Kinect (Pacman to Guitar Hero)