Ever stop to wonder how you’re able to immediately link something you see with something you do? Turns out, we do many things without thinking about it because our brains are hardwired to react, and scientists may have found the pathway in the brain that makes that connection.

Take, for example, a simple game of basketball. When several players jump for the ball at the same time, each one must track visual information about the ball’s whereabouts as well as his own hand, while ignoring the other players' hands, the cheering fans, and other distractions -- all within seconds. How do we do it?

A new study suggests it’s all thanks to our own dedicated information superhighway -- our brain’s specialized mechanism for spatial self-awareness that combines visual cues (seeing) with body motion (reaching).

We take in information through our eyes and process it in our brains all the time. But standard visual processing is prone to distractions, which is why it can be so hard for us to pay attention to one thing while filtering out others. According to Alexandra Reichenbach from University College London, our brains have a separate hardwired system that visually tracks our own bodies, even when we’re not paying attention. In fact, this network triggers reactions even before the conscious brain has time to process them.

Reichenbach and colleagues call it the dedicated ‘visuomotor binding’ mechanism, and they recruited 52 healthy adults to test it. In all of the experiments, participants used robotic arms to control cursors on a 2-D computer monitor, where the motion of each cursor was directly linked to their hand movements. The goal was to guide each cursor (a circle) to a corresponding target (a square) at the top of the screen -- while keeping their eyes fixed on a mark (+) in the middle of the screen.

In the first experiment, participants controlled two separate cursors with their left and right hands. Occasionally the cursor or target on one side would jump, forcing the participant to take corrective action. Each of these two types of jumps was signaled ahead of time with a flash of light on one side -- but the light wouldn’t always correspond to the side that was about to change.

As expected, people reacted faster to target jumps when their attention was drawn to the ‘correct’ side by the light cue. Surprisingly, reactions to cursor jumps were fast regardless of cueing. This suggests that a separate mechanism, independent of attention, is responsible for tracking our own movements. (Remember, the cursors represented the participants’ own hand movements.)

“We react very quickly to changes relating to objects directly under our own control, even when we are not paying attention to them,” Reichenbach explains in a press release. “This provides strong evidence for a dedicated neural pathway linking motor control to visual information, independently of the standard visual systems that are dependent on attention.”

In other experiments, the brightness would change or dummy targets and cursors would pop up while the participants were moving their cursors. Reactions to cursor jumps slowed down only when four distractors appeared, showing that the system tolerates distraction better than standard visual processing does, though it can still be affected.

It’s not clear exactly why we evolved a separate specialized mechanism. The need to react rapidly to different visual cues about ourselves and the environment may have been enough to demand a dedicated pathway, Reichenbach speculates.

This mechanism could explain why some schizophrenia patients suffer from delusions of control, or why some people with prosthetics don’t feel that their devices are extensions of their bodies. “If someone does not automatically link corresponding visual cues with body motion, then they might have the feeling that they are not controlling their movements,” Reichenbach explains. “If the observed movement of the fingers is not exactly what you would expect, then it will not feel like you are in direct control.”

The work was published in Current Biology this week.

Images: UCL News (top) & visual display from A. Reichenbach et al., 2014 Elsevier Inc. (bottom)