Fred Brooks (Photo by Marco Grob)

Since joining IBM in 1956, Fred Brooks has been part of many of the most significant breakthroughs in computer technology, working on some of the earliest computers: from the first generation of electromechanical machines, to vacuum tubes, and finally to modern transistors. Along the way he became one of the most influential computer designers of the 20th century. Brooks has received numerous awards, including the National Medal of Technology in 1985 and the Turing Award in 1999.

Brooks is best known for his book The Mythical Man-Month, which coined and popularized ‘Brooks’s Law’: adding manpower to a late software project makes it later. As project manager of the IBM System/360 series, Brooks changed the design from a 6-bit to an 8-bit byte, thus enabling the use of lower-case letters. “The change propagated everywhere.”

The pitfalls when designing for Virtual Reality

What are some of the challenges of designing for virtual reality?

FB: In 1965, Ivan Sutherland gave a phenomenal speech [in] which he proposed virtual reality (VR). The key concept is “don’t look at this thing as a screen. Look at this thing as a window through which one looks into a virtual world. The research challenge in virtual worlds is [making] the picture in the window look real, move real, sound real, and even feel real.” And so for the last 50 [years], we’ve been trying to do that.

In our laboratory at the University of North Carolina at Chapel Hill, we started making head-mounted displays (HMDs) in the 1970s. Working with the technology we had at the time, we used small television displays mounted vertically over the eyes, with mirrors at 45 degrees reflecting the image into the eyes.

Some of the hard problems are generating images at the necessary speeds, tracking where the eyes are, and developing the graphics capability to display high-resolution color images. Graphics processing has to run at a minimum of 30 frames per second (FPS); today the technology can manage 72 to 120 FPS.
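As a rough illustration (my own arithmetic, not figures from the interview), the frame rates mentioned above translate directly into per-frame time budgets:

```python
def frame_budget_ms(fps: float) -> float:
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / fps

# The 30 FPS minimum and the 72-120 FPS range Brooks mentions:
for fps in (30, 72, 120):
    print(f"{fps:>3} FPS -> {frame_budget_ms(fps):.1f} ms per frame")
```

At 120 FPS the renderer has only about 8.3 ms per frame, which is why high refresh rates and low latency are so tightly coupled.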

The hardest technical problem with immersive environments is latency (the delay between the receipt of a stimulus by a sensory nerve and the response to it, in this case the screen).

How long does it take from the time you move your head until the image on the screen changes? The early systems had [tremendous] latencies. I’ve seen them as bad as 250 milliseconds; some of the best early ones were 120 milliseconds, and that’s still bad.

A friend of mine was telling me how fighter pilots (in simulators) notice at 55 milliseconds that something is wrong; however, at 50 milliseconds they don’t notice. Now this is true specifically for flight simulators, where the image is distant, so you don’t have swift motion in front of your eyes. The same person told me 20 years later, “We’ve discovered now that if we’re doing aerial refueling, where there’s another plane and a boom close by, we have to have latencies down to 25 milliseconds.” Similarly, in VR, it turns out that we need to get latencies below 20 milliseconds, preferably below 12 milliseconds.
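To make those thresholds concrete, here is a minimal sketch of a motion-to-photon latency budget. The stage names and timings below are illustrative assumptions of mine, not measurements from any real system; only the 20 ms and 12 ms limits come from the interview:

```python
# Hypothetical pipeline stage timings in milliseconds
# (illustrative assumptions, not measured data).
stages = {
    "head tracking": 2.0,
    "rendering": 8.0,
    "display scan-out": 4.0,
}

total = sum(stages.values())
NOTICEABLE_MS = 20.0  # limit Brooks cites for VR
PREFERRED_MS = 12.0   # preferred target he mentions

print(f"motion-to-photon latency: {total:.1f} ms")
print("under 20 ms limit:", total < NOTICEABLE_MS)
print("under 12 ms target:", total < PREFERRED_MS)
```

Even with these optimistic numbers the budget sits between the two thresholds, which shows how little room each pipeline stage is allowed.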

So this was a flight simulator for military applications?

FB: Yes, this friend was with Lockheed Martin in St. Louis. They had six big simulators installed. If I recall correctly, they used these six simulators to train fighter pilots for air-to-air combat, three-on-three.

British Airways Flight Simulator (Image courtesy of British Airways)

Now, flight simulators still create the best virtual reality experiences in the world. I had a chance to fly a Boeing 747 around London in one of British Airways’ flight simulators: a $13-million machine in a three-story space, on a motion platform that threw the whole cabin around at the necessary speeds. The best VR experience I ever had.

EMTs, IEDs, and VR

Looking forward, what challenge can you offer to new developers and designers in the field of VR?

FB: An early milestone would be a system to train four soldiers clearing a room (a military application), or four EMTs at the scene of an accident. They must be able to talk to each other, to share real tools and patient dummies, and to virtually see the crowd around them, creating all the disturbance and difficulty that makes life hard at a working scene.

So we’ve got to track eight eyes simultaneously; we’ve got to generate eight different images in real time; we’ve got to make it possible for them to see, hear, and feel each other, use the real tools, and see the virtual tools. We have to display the eight images separately to the eight eyes. We have to track hands and feet as well. So this is kind of an augmented reality problem, and in some ways a mixed reality problem, and it is very challenging. The hardest part today is knowing where the eight eyes are at millisecond latencies and delivering the separate images.

What do you think will happen to augmented reality? Will virtual and augmented reality stay separate from each other?

FB: Well, the real question is: what are the driving applications?

The difference between education and (military and first-responder) training markets is that in training you are paying the trainee, so the cost per hour of what you’re doing is important: the big cost is the trainee’s salary, not the equipment or the applications. Education [already] has a cost-per-seat problem. And so military and industrial training, that’s where I say it will take off.