As I sit here tapping away at my keyboard, I'm flanked by one computer screen showing news feeds from the world of technology, another frantically tumbling away with Twitter updates, and another still telling me that yet another email has landed in my inbox. While I sometimes feel like I'm drowning in data, my woes are nothing compared to those of air traffic controllers, network administrators, operators in emergency response control rooms, and busy stock traders. bRight, from SRI International – the Californian research institute that originally developed the Siri virtual assistant – is designed to make life a little easier for people who need to make snap decisions in time-critical situations but face an overwhelming amount of information flowing in all at once. In addition to offering task automation and data filtering, the system can predict the actions, behavior and needs of a user or group based on previous activity.

In 2007, Silicon Valley's SRI International formed Siri Inc. to commercialize a virtual personal assistant technology born out of the institute's DARPA-funded CALO (Cognitive Assistant that Learns and Organizes) artificial intelligence project. A free app for the iOS platform was subsequently launched as a public beta in early February 2010, and just a couple of months later, Apple acquired the company. Spin forward to October 2011, and a conversational search assistant called Siri was launched as a new feature for the iPhone 4S.

A little while later, Google premiered its own digital PA in Android 4.1 (Jelly Bean). In addition to providing Siri-like search and assistance using natural language, Google Now delivered information and suggestions based on actions or decisions the user had previously taken. SRI's latest project, bRight, progresses beyond both systems as an answer to what's been dubbed cognitive overload, where the tidal wave of information flooding in during an emergency proves too much to deal with effectively and, perhaps more importantly, rapidly.

The research prototype bRight workstation uses face recognition, gaze-tracking, proximity and touch sensing to gather information about its user, and provide what's needed for rapid task completion.

The research prototype uses face recognition (though more secure biometrics, such as iris scans, will likely be implemented in the future) and gaze monitoring, along with proximity, gesture and touch sensors, to build detailed user profiles. Much as a modern processor gains performance by predicting which path a program will take before the outcome is certain, bRight's AI software uses this information to anticipate what might be needed next, so that only data relevant to the job at hand is presented to the user, necessary tools can be literally placed at a user's fingertips, and repetitive tasks can be fully or partly automated.

For example, at a fairly simple level, if a user highlights a word in a document, the system can guess which menu items might be needed next and present the user with likely choices. Or if someone's writing a specific kind of email, such as a staff newsletter or performance bulletin, bRight may be able to determine its recipients based on previous activity, and pre-populate the Send To field. It might also detect potential errors or breaches of standard protocol.
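As a toy illustration of that idea (none of this is SRI's actual code), a predictor of this kind can be as simple as counting which actions have historically followed a given context, then surfacing the most frequent ones:

```python
from collections import Counter, defaultdict

class ActionPredictor:
    """Minimal sketch of bRight-style next-action suggestion:
    remember which actions followed a context, suggest the likeliest.
    All names and events here are illustrative assumptions."""

    def __init__(self):
        self.history = defaultdict(Counter)

    def observe(self, context, action):
        # Record that `action` followed `context` (e.g. "text-selected").
        self.history[context][action] += 1

    def suggest(self, context, k=3):
        # Return up to k of the most frequent follow-up actions.
        return [a for a, _ in self.history[context].most_common(k)]

predictor = ActionPredictor()
for action in ["copy", "copy", "bold", "copy", "define"]:
    predictor.observe("text-selected", action)

print(predictor.suggest("text-selected", k=2))  # ['copy', 'bold']
```

A real system would of course weigh far richer context than a single event label, but the principle of conditioning suggestions on observed history is the same.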

"If bRight recognizes a user's action to be of a certain class, then it could provide corrective action," explains Dr. Grit Denker of SRI's Computer Science Laboratory. "Say I am writing an email about new bRight ideas and I am sending it to a bunch of people. bRight could recognize that I usually first send this to an internal team, before sending it to outside folks. Thus, if I am about to send such an email without having first sent it to my team, bRight could notify me whether this is on purpose."

In a more time-critical setting, such as a malicious attack on a computer network, bRight could apply aggressive contextual filtering to display information relevant to the immediate needs of the administrator, leaving non-urgent messages to be dealt with later. It could also predict what tools are likely to be needed in such situations, and use the tracking sensors to place them within easy reach.
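A minimal sketch of that kind of contextual filtering, using an invented alert format, might partition incoming alerts by whether they share a tag with the active incident:

```python
def contextual_filter(alerts, incident_tags):
    """Sketch of aggressive contextual filtering: during an incident,
    surface only alerts relevant to it and queue the rest for later.
    The alert dictionaries and tag sets are illustrative assumptions."""
    urgent, deferred = [], []
    for alert in alerts:
        # `&` is set intersection: any shared tag makes it relevant.
        if alert["tags"] & incident_tags:
            urgent.append(alert)
        else:
            deferred.append(alert)
    return urgent, deferred

alerts = [
    {"msg": "Unusual outbound traffic", "tags": {"network", "security"}},
    {"msg": "Printer low on toner",     "tags": {"facilities"}},
    {"msg": "Failed root logins",       "tags": {"security"}},
]
urgent, deferred = contextual_filter(alerts, {"security"})
print(len(urgent), len(deferred))  # 2 1
```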

Additionally, making clever use of shared information could help save valuable reaction time. A network administrator in Washington, for instance, might detect a threat to the system and send an email to his colleague in California for prompt action, but forget to include vital information necessary to complete the task. Having monitored the administrator's actions before the email was sent, bRight could supply the missing details to the copy his Californian colleague receives, filling in all the important blanks.

"bRight combines semantic markup in the application layer with sensors at the observation layer (e.g., touch, gaze, gesture, etc.)," says Denker. "This combination provides higher precision for prediction, especially in an environment where you do not necessarily have days or months of training data. In order to be useful, it has to have high accuracy. This can only be achieved if the cognitive models we intend to build are tuned to the applications. We are currently working on developing a cognitive model of users in the cyber domain using our tools. We are very interested in finding partners who would work with us to instantiate bRight for domains that meet at least two of the following criteria: information overload, rapid decision making and execution, and the need for collaboration."

What a cyberwarfare bRight workstation of the future might look like

The bRight framework is being implemented across different platforms and in different devices, from a Lite version that can run on tablets, right up to the huge multitouch prototype shown in the gallery. The latter consists of a standard HD television, off-the-shelf webcams that point toward the user, an IR gaze-tracking system, and an angled, table-height touch panel interface called the bRight workstation. Underneath the touch display, there are a number of IR sensors for capturing multiple touches on the panel, for proximity detection and to register gestures. A desktop PC runs the software side of things.

"In the future, we foresee that many of the sensor systems (gaze, touch, proximity) will be available on smaller form factors as well, and some of them are already becoming available," Denker speculates. "Then it will be easier to transition the core AI algorithms to other platforms such as tablets or smartphones."

"Since this is an early stage prototype, I believe that government funding to apply in a domain like cybersecurity would be appropriate to bring it to a readiness level where one can think about transition," she adds. "With adequate funding, I would expect a TRL 5/6 within about 3 years. With many of these technologies, they do trickle down to consumers eventually. There are some aspects of bRight that I would consider candidates for commercial funding that might enable transition into consumer markets early on."

Gizmag asked whether bRight's core AI program would likely be cloud-based and accessed via satellite devices such as tablets or smartphones, or if the system would be self-contained and installed onsite as a whole unit.

"The application domains we are currently focusing on (cybersec, emergency response, Air Traffic Control, Infrastructure and Network Management), and the fact that we are mostly looking for partners to develop bRight in a certain domain, means that I would expect bRight to be installed onsite as a whole unit where we can collect data to analyze the domain and workflows," says Denker. "The advantage of this approach is also that we have access to the various sensor systems we would like to use in order to get as much information about the user as possible. Learning and improving and continuous adaptation to a user's needs will be central to bRight."

For a short glimpse of what bRight has to offer, have a look at the video below.

Source: bRight