Tim Tuttle intends to give Siri and Google Now a run for their money with MindMeld, an iPad app slated to launch in the App Store this fall, with an iPhone app soon to follow. While both existing services have quickly become essential features on new smartphones, MindMeld will take the idea of a digital personal assistant a step further by making it almost prescient.

Though it will launch in the App Store as a video conferencing tool, a capability it does provide natively, MindMeld is actually an information-driven application that listens in on your conversation and attempts to understand what's being said. Once it figures out what you're talking about, MindMeld builds a model of the conversation's context, and from that it attempts to locate and display relevant information from many different sources. "We're listening to the last ten minutes to predict what you need in the next ten seconds," Tuttle told Ars. "We're trying to make it so you never have to explicitly search for something you've already talked about."
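Tuttle's "last ten minutes" framing suggests a rolling buffer of recent speech. MindMeld's internals aren't public, but the idea can be sketched in a few lines of Python; the class, names, and ten-minute cutoff here are illustrative assumptions, not Expect Labs' actual code:

```python
import time
from collections import deque
from dataclasses import dataclass

WINDOW_SECONDS = 600  # "the last ten minutes" (assumed cutoff)

@dataclass
class Utterance:
    timestamp: float
    text: str

class ConversationWindow:
    """Keep only the utterances heard in the most recent ten minutes."""

    def __init__(self):
        self.buffer = deque()

    def add(self, text, now=None):
        now = time.time() if now is None else now
        self.buffer.append(Utterance(now, text))
        # Evict anything that has aged out of the window.
        while self.buffer and now - self.buffer[0].timestamp > WINDOW_SECONDS:
            self.buffer.popleft()

    def recent_text(self):
        """The transcript the engine would actually model against."""
        return " ".join(u.text for u in self.buffer)
```

Anything older than the window simply falls out of the model, which matches the pitch: the app predicts from recent context rather than from the whole call.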

Tuttle and his team began working on MindMeld with the intention of creating technology that would be considered essential for meetings, phone calls, and any other collaborative spoken setting. Unlike Apple's Siri and Android's Google Now, you don't have to address the MindMeld agent to get results. Instead, it listens constantly during a call so that it can hear everything and deliver the right information based on what you're talking about. It's like having a friendly robot eavesdrop on your conversation and look things up for you.

Before it presents anything, the MindMeld friendly robot—more properly known as the "anticipatory computing engine"—will extract information from search engines, news articles, videos, the user's social networking profiles, and even locally stored documents. It will then attempt to correlate all of that data and rank it by its relevance to the conversation. For instance, if someone mentions that Becky is coming to the Bay Area and she'd like to check out wine country, MindMeld will recognize those keywords and begin to display links to wine tours and a map of the Napa Valley in real time. The idea is for the app to show you what you're about to look for before you even start looking for it, hence "anticipatory computing."
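How the engine actually ranks results is proprietary, but the Becky example hints at the simplest possible version: score each candidate result by how many conversation keywords it shares. Here's a toy sketch of that idea in Python; the stopword list, tokenizer, and scoring rule are all invented for illustration and are far cruder than anything a real engine would use:

```python
import re
from collections import Counter

# A tiny, illustrative stopword list.
STOPWORDS = {"the", "a", "is", "to", "and", "she", "that", "of", "in"}

def keywords(text):
    """Crude keyword extraction: lowercase word tokens minus stopwords."""
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

def rank_results(conversation, candidates):
    """Order candidate results by keyword overlap with the conversation,
    giving extra weight to words that came up more than once."""
    weights = Counter(keywords(conversation))

    def score(candidate):
        return sum(weights[w] for w in set(keywords(candidate["title"])))

    return sorted(candidates, key=score, reverse=True)
```

With the article's example conversation, a result titled "Becky's wine country itinerary" would outrank a Bay Bridge traffic report because it shares more of the conversation's keywords, which is the behavior the article describes, minus all the hard parts (speech recognition, disambiguation, and source-specific relevance signals).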

MindMeld’s engine has been designed to do three things: it can decipher a multi-party conversation and pick out vital keywords from concurrent streams of dialogue in real time; it can do continuous, predictive modeling, which essentially means it listens to the conversation to understand what has already been said and what might be talked about next; and it can perform proactive information discovery based on what it's hearing so that it’s constantly finding and retrieving things for you.

Though the idea of a robot constantly listening in may raise "Big Brother is watching" concerns, Tuttle explains that the application only listens when all parties have explicitly allowed it. MindMeld will only process a conversation if the other user has the app installed on their iOS device—it will not listen in to telephone conversations or decipher the speech of anyone who has not given it specific permission.

Tuttle does foresee a few issues still looming for the product's launch. Most notably, he wants to ensure the technology works flawlessly on the backend and that the team can scale it out as the user base starts to grow. "I expect we're going to be working very hard to make sure [the launch] goes smoothly," he said.

This isn’t an entirely new concept, and if MindMeld is a success, it could help shift the focus of apps like Siri and Google Now in a new direction. There’s also some speculation that this technology could be married with Google Glass, since Google Ventures has reportedly backed the company behind MindMeld with $2.4 million. Regardless, it seems to be only a matter of time until technology like this becomes mainstream. As Tuttle says, if your device understands everything that you see and hear, it can use that information to better help you.