IBM Watson, the artificial intelligence platform designed to understand natural language, today launched voice-command support for Star Trek: Bridge Crew (2017) across PSVR, Oculus Rift, and HTC Vive.

Before the service launched today, lone players could control the ship's other posts (Engineering, Tactical, Helm) only by clicking a few boxes to issue orders. Now a solo captain (or one commanding a mixed crew of humans and AI) can complete whole missions by issuing commands directly to the AI-controlled characters using natural language.

Voice commands are enabled by IBM's VR Speech Sandbox program, which is available on GitHub for developers to integrate speech controls into their own VR applications. The Sandbox, released in May, combines IBM's Watson Unity SDK with two services: Watson Speech to Text and Watson Conversation.
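At a high level, the pipeline works in three steps: spoken audio is transcribed to text, the text is mapped to an intent, and the game dispatches the matched order to a bridge station. The sketch below illustrates that flow only; every function and name here is a hypothetical stand-in, not the real Watson Unity SDK or its services.

```python
# Illustrative sketch of a speech-command pipeline like the one the
# Speech Sandbox wires together. All names are hypothetical stand-ins:
# transcribe() stands in for Speech to Text, classify_intent() for
# Conversation's intent matching, dispatch() for the game's order routing.

def transcribe(audio_bytes: bytes) -> str:
    """Stand-in for the Speech to Text service."""
    return "fire phasers"  # pretend this is what the captain said

def classify_intent(utterance: str) -> str:
    """Stand-in for the Conversation service's intent matching."""
    intents = {
        "fire_phasers": ["fire phasers", "open fire", "shoot"],
        "engage_warp": ["engage", "warp", "punch it"],
    }
    for intent, examples in intents.items():
        if any(phrase in utterance.lower() for phrase in examples):
            return intent
    return "unrecognized"

def dispatch(intent: str) -> str:
    """Route a recognized intent to the right bridge station."""
    stations = {"fire_phasers": "Tactical", "engage_warp": "Helm"}
    return stations.get(intent, "no station")

utterance = transcribe(b"")
intent = classify_intent(utterance)
print(intent, "->", dispatch(intent))
```

In the real Sandbox, the transcription and intent steps are network calls to IBM's cloud services, and the dispatch step is whatever the game binds each intent to.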

We had a chance to go hands-on at E3 2017 with Star Trek: Bridge Crew with Watson-powered voice recognition embedded, a feature that's initiated during gameplay with a single button press. While talking directly to your digital crew does provide some of those iconic moments ("Engage!" and "Fire phasers!"), and most orders went through without a hitch, Watson still has trouble parsing some pretty basic things. For example, Watson doesn't understand the names of ships, so "scan the Polaris" just doesn't register. It also didn't pick up on a few things that would seem pretty easy at face value: commands like "fire on the target," "fire on the enemy," and "come on, let's warp already!" fell on deaf digital ears.

IBM says its VR speech controls aren't "keyword driven exchanges," but are built around recognition of natural language and the intent behind what's being said. Watson also has the capacity to improve its understanding over time, so commands like "Let's get the hell out of here, you stupid robots!" may actually register one day.
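The distinction IBM is drawing can be shown with a toy contrast: a keyword matcher only fires on an exact phrase, while an intent-style matcher scores an utterance against example phrasings and can catch paraphrases. This is purely an illustration of the idea, not how Watson Conversation actually computes intents.

```python
# Toy contrast between keyword matching and intent-style matching.
# The training phrases and the word-overlap score are made up for
# illustration; Watson Conversation's internals are not public here.

TRAINING = {
    "engage_warp": ["engage warp drive", "go to warp", "get us out of here"],
}

def keyword_match(utterance: str, keyword: str = "engage warp") -> bool:
    """Fires only when the exact keyword phrase appears."""
    return keyword in utterance.lower()

def intent_match(utterance: str, threshold: float = 0.5):
    """Scores the utterance against example phrasings by word overlap."""
    words = set(utterance.lower().split())
    best, score = None, 0.0
    for intent, examples in TRAINING.items():
        for example in examples:
            example_words = set(example.split())
            overlap = len(words & example_words) / len(example_words)
            if overlap > score:
                best, score = intent, overlap
    return best if score >= threshold else None

# A paraphrase the keyword matcher misses but the intent matcher catches:
print(keyword_match("let's get out of here already"))  # False
print(intent_match("let's get out of here already"))
```

Even a crude overlap score catches "let's get out of here already" because it shares most of its words with the training phrase "get us out of here," while the keyword matcher needs the literal string.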

This, however, doesn't prevent a pretty weird logical disconnect that occurs when talking to a bot-controlled NPC, and it stems from the fact that I was at first imbuing the NPCs with actual intelligence. When talking directly to them, I instinctively relied on them to help me do my job, to have eyes and ears and to understand not only the intent of my speech but also the intent of the mission. A human tactical officer would have seen that we were getting fired on, and I wouldn't have had to issue the order to keep the Bird of Prey within phaser range. I wouldn't even have had to select the target, because the helmsman would have done it for me. IBM isn't claiming its cognitive computing platform can do any of that, but the frustration of figuring out what Watson can and can't do is a stark reality, especially when you're getting your tail-end blasted out of the final frontier.

In the end, Watson-supported voice commands may not be perfect—because when the Red Shirts are dropping like flies and consoles are exploding all over the place, the last thing you want to do is take the time to repeat an important order—but the fact that you can talk to an NPC in VR and get a pretty reliable response is amazing to say the least.