I started regretting uploading my “Embedding 2D Desktops into VR” video, and the post describing it, pretty much right after I did it, because there was such an obvious thing to do that I simply didn’t think of.

In the old video, I pointed out that any desktop application can be run through the VNC client. Clearly, then, the right thing to do would have been to run the very same 3D application that was already running in the Rift, in desktop mode through VNC, for that extra Inception feel. So, to correct that mistake, here is take two:

I’m using VR ProtoShop as the example application here because it never gets much of the spotlight, and because it’s another great example of the kind of interactive manipulation that is possible with 6-DOF input devices like the Razer Hydra (my go-to application for that, of course, is the Nanotech Construction Kit).

In the “main environment,” I’m using two 6-DOF controllers to move the protein model as a whole, and to select and drag individual components (alpha helices, beta strands or amorphous coil regions). Through VNC, I’m showing Vrui’s desktop user interface at the same time, which uses a mouse with a fairly standard virtual trackball to move the protein model as a whole, and 2D interactions on a 3D “drag box” to move protein parts.
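For readers who haven’t run into the technique, a virtual trackball maps 2D mouse motion onto 3D rotations by projecting the mouse position onto an imaginary sphere around the model, and rotating from one projected point to the next. Here is a minimal sketch of the classic version (sphere projection with a hyperbolic fallback near the rim); this illustrates the general technique, and is not Vrui’s actual implementation:

```python
import math

def project_to_sphere(x, y, radius=1.0):
    """Project a normalized 2D screen point onto a virtual sphere,
    falling back to a hyperbolic sheet near the sphere's rim so the
    mapping stays continuous as the mouse leaves the sphere."""
    d = math.hypot(x, y)
    if d < radius / math.sqrt(2.0):
        z = math.sqrt(radius * radius - d * d)   # point is on the sphere
    else:
        z = (radius * radius / 2.0) / d          # point is on the hyperbola
    return (x, y, z)

def trackball_rotation(p0, p1):
    """Axis-angle rotation taking projected point p0 to p1:
    axis = p0 x p1, angle = angle between the two vectors."""
    axis = (p0[1] * p1[2] - p0[2] * p1[1],
            p0[2] * p1[0] - p0[0] * p1[2],
            p0[0] * p1[1] - p0[1] * p1[0])
    dot = sum(a * b for a, b in zip(p0, p1))
    n0 = math.sqrt(sum(a * a for a in p0))
    n1 = math.sqrt(sum(a * a for a in p1))
    angle = math.acos(max(-1.0, min(1.0, dot / (n0 * n1))))
    return axis, angle
```

A small horizontal mouse drag starting at the screen center then yields a small rotation about the vertical axis, which matches the intuitive “spinning a ball under the cursor” feel.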

Unlike with 6-DOF devices in the 3D world, dragging the drag box with a mouse is rather tedious. One can move the box in a plane by picking any of its six faces, rotate it around one of the three main axes by picking any of its twelve edges, or rotate it freely around the box’s pivot by picking one of its eight corners (although the virtual trackball supporting that last one is somewhat busted, so it ends up never being used).
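The face/edge/corner selection behind such a drag box can be sketched roughly as follows, assuming an axis-aligned box in its own local coordinate frame: count how many of the pick point’s coordinates lie on a face plane. The function name and tolerance here are my own, not ProtoShop’s code:

```python
def classify_box_pick(p, half, tol=0.05):
    """Classify a pick point p (in box-local coordinates) on an
    axis-aligned box with half-extents `half`: a point on three face
    planes is a corner, on two an edge, on one a face.
    Hypothetical sketch, not ProtoShop's actual picking code."""
    on_plane = [abs(abs(p[i]) - half[i]) <= tol for i in range(3)]
    hits = sum(on_plane)
    if hits == 3:
        return "corner"   # free rotation around the box pivot
    if hits == 2:
        return "edge"     # rotation around the remaining main axis
    if hits == 1:
        return "face"     # translation in the face's plane
    return "none"         # pick missed the box surface
```

The appeal of this scheme is that one widget exposes all six degrees of freedom to a two-degree-of-freedom device; the tedium comes from having to decompose every 3D motion into a sequence of such constrained drags.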

But still, the 2D user interface works. Well enough, in fact, that researchers from Lawrence Berkeley National Laboratory and UC Berkeley used it to create hundreds (thousands?) of candidate protein structures, which were then automatically optimized on parallel supercomputers for protein structure prediction competitions, starting with CASP5 (2002).

And here’s the kicker: there is absolutely zero code in VR ProtoShop that depends on input devices. There are no code paths “if you have a mouse, do this… if you have a Hydra, do this… etc.” All of that stuff is handled completely transparently, at the toolkit level. That’s the beauty of developing on Vrui.
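The general pattern behind that kind of device transparency is that the application registers a single callback with a toolkit-provided tool, and per-device adapters translate whatever the hardware delivers into the same abstract transformation events. Here is a toy sketch of the idea, with names and event shapes of my own invention, not Vrui’s actual classes:

```python
class DraggingTool:
    """Delivers drag transformations to the application, regardless of
    which physical device produced them. The application only ever sees
    this interface."""
    def __init__(self, on_drag):
        self.on_drag = on_drag   # application callback: transform -> None

    def device_moved(self, transform):
        self.on_drag(transform)

class MouseAdapter:
    """Turns 2D mouse motion into a constrained 3D transform (here a
    stand-in translation in the screen plane, as a drag box would do)."""
    def __init__(self, tool):
        self.tool = tool

    def motion(self, dx, dy):
        self.tool.device_moved(("translate", (dx, dy, 0.0)))

class SixDofAdapter:
    """Passes a tracked device's full pose straight through to the tool."""
    def __init__(self, tool):
        self.tool = tool

    def pose(self, position, orientation):
        self.tool.device_moved(("pose", position, orientation))
```

In this arrangement the “if mouse… if Hydra…” branching lives entirely in the adapters, which belong to the toolkit; the application’s drag callback is written once and works with both, which is the property the paragraph above describes.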