A Brief Introduction to VirtualGL

VirtualGL is an open source toolkit that gives any Linux or Unix remote display software the ability to run OpenGL applications with full hardware acceleration. Some remote display software cannot be used with OpenGL applications at all. Other remote display software forces OpenGL applications to use a slow, software-only renderer, to the detriment of performance as well as compatibility. The traditional method of displaying OpenGL applications to an X server on a different machine (indirect rendering) supports hardware acceleration, but it requires that all of the OpenGL commands and 3D data be sent over the network to be rendered. That is not a tenable proposition unless the 3D data is relatively small and static, the network is very fast, and the OpenGL application is specifically tuned for a remote X environment.

With VirtualGL, the OpenGL commands and 3D data are instead redirected to a GPU in the application server, and only the rendered frames are sent over the network. VirtualGL thus virtualizes GPU hardware, allowing it to be co-located in the "cold room" with compute and storage resources. VirtualGL also allows GPUs to be shared among multiple users, and it provides "workstation-like" levels of performance on 100-megabit and faster networks. This makes it possible for large, noisy, hot 3D workstations to be replaced with laptops or even thinner clients. More importantly, however, VirtualGL eliminates the workstation and the network as barriers to data size. Users can now visualize huge amounts of data in real time without needing to copy any of the data over the network or sit in front of the machine that is rendering the data.

Normally, a Un*x OpenGL application sends all of its graphics rendering commands and data, both 2D and 3D, to an X server, which may be located across the network from the application server. VirtualGL employs a technique called "split rendering" to redirect the 3D commands and data from the OpenGL application to a GPU in the application server. VGL accomplishes this by preloading a dynamic shared object (DSO), the VirtualGL Faker, into the OpenGL application at run time. The VirtualGL Faker intercepts and modifies a handful of GLX, OpenGL, X11, and XCB function calls in order to divert OpenGL rendering from the 3D application's windows into corresponding pixel buffers ("Pbuffers"), which VGL creates in GPU memory on the application server. When the 3D application swaps the OpenGL drawing buffers or flushes the OpenGL command buffer to indicate that it has finished rendering a frame, VirtualGL reads back the rendered frame from the Pbuffer and transports it, which normally involves delivering the frame to the "2D X server" and compositing it into the 3D application's window.
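
To make the interposition technique concrete, here is a minimal sketch of an LD_PRELOAD interposer in the spirit of the VirtualGL Faker. This is not VirtualGL's actual code; it simply intercepts glXSwapBuffers(), the point at which a double-buffered application signals that a frame is complete, logs the event, and passes the call through to the real GLX library. A real faker would read back and transport the frame at that point.

    /* interpose.c -- a minimal sketch of GLX call interposition via
       LD_PRELOAD.  Not VirtualGL's actual implementation. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <dlfcn.h>
    #include <GL/glx.h>

    /* Our definition of glXSwapBuffers() shadows the real one because
       the interposer DSO is loaded ahead of libGL. */
    void glXSwapBuffers(Display *dpy, GLXDrawable drawable)
    {
        /* Look up the real glXSwapBuffers() on first use. */
        static void (*realSwap)(Display *, GLXDrawable) = NULL;
        if (!realSwap)
            realSwap = (void (*)(Display *, GLXDrawable))
                dlsym(RTLD_NEXT, "glXSwapBuffers");

        /* This is where an interposer such as the VirtualGL Faker would
           read back the rendered frame and transport it to the 2D X
           server.  Here we merely log the end of the frame. */
        fprintf(stderr, "frame finished for drawable 0x%lx\n",
                (unsigned long)drawable);

        realSwap(dpy, drawable);
    }

Building this with "gcc -shared -fPIC interpose.c -o interpose.so -ldl" and running an OpenGL application with "LD_PRELOAD=$PWD/interpose.so glxgears" would print one line per frame. (Applications that obtain GLX entry points through glXGetProcAddress() require additional interception, which VirtualGL also performs.)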

The beauty of this approach is its non-intrusiveness. VirtualGL monitors a few X11 commands and events to determine when windows have been resized, etc., but it does not interfere in any way with the delivery of X11 2D drawing commands to the X server. For the most part, VGL does not interfere with the delivery of OpenGL commands to the GPU, either. VGL merely forces the OpenGL commands to be delivered to a GPU that is attached to a different X server (the "3D X server") than the X server to which the 2D drawing commands are delivered (the "2D X server"). Once the OpenGL rendering has been redirected to a Pbuffer, everything (including esoteric OpenGL extensions, fragment/vertex shaders, etc.) should "just work." If an OpenGL application runs correctly when accessing a 3D server/workstation locally, then the same application should run correctly with VirtualGL when accessing the same machine remotely.
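
The following sketch illustrates the 3D X server/Pbuffer arrangement described above. It connects to a dedicated X display (assumed here to be :0, the typical 3D X server in a VirtualGL setup) and creates an off-screen Pbuffer in GPU memory, which is all that is needed to render with full hardware acceleration without ever mapping a window. The display name, dimensions, and attribute choices are illustrative, not VirtualGL's own.

    /* pbuffer.c -- a minimal sketch of off-screen, GPU-accelerated
       rendering on a dedicated 3D X server.  Illustrative only. */
    #include <stdio.h>
    #include <GL/glx.h>

    int main(void)
    {
        /* Connect to the 3D X server (the display with the GPU
           attached); ":0" is an assumption about the local setup. */
        Display *dpy = XOpenDisplay(":0");
        if (!dpy) { fprintf(stderr, "cannot open display :0\n"); return 1; }

        /* Choose a double-buffered RGBA framebuffer config that
           supports Pbuffers. */
        int fbAttribs[] = {
            GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
            GLX_RENDER_TYPE, GLX_RGBA_BIT,
            GLX_DOUBLEBUFFER, True,
            None
        };
        int n = 0;
        GLXFBConfig *configs =
            glXChooseFBConfig(dpy, DefaultScreen(dpy), fbAttribs, &n);
        if (!configs || n < 1) { fprintf(stderr, "no FB config\n"); return 1; }

        /* Create a Pbuffer in GPU memory and an OpenGL context to go
           with it. */
        int pbAttribs[] = {
            GLX_PBUFFER_WIDTH, 1024, GLX_PBUFFER_HEIGHT, 768, None
        };
        GLXPbuffer pb = glXCreatePbuffer(dpy, configs[0], pbAttribs);
        GLXContext ctx =
            glXCreateNewContext(dpy, configs[0], GLX_RGBA_TYPE, NULL, True);
        glXMakeContextCurrent(dpy, pb, pb, ctx);

        /* OpenGL commands now render into GPU memory on this machine.
           A finished frame could be retrieved with glReadPixels() and
           delivered to the 2D X server as an ordinary image. */
        printf("Rendering via: %s\n", (const char *)glGetString(GL_RENDERER));

        glXMakeContextCurrent(dpy, None, None, NULL);
        glXDestroyContext(dpy, ctx);
        glXDestroyPbuffer(dpy, pb);
        XFree(configs);
        XCloseDisplay(dpy);
        return 0;
    }

VirtualGL performs this kind of Pbuffer management automatically and transparently on the application's behalf; the point of the sketch is simply that, once rendering targets a Pbuffer on the 3D X server, the GPU's full OpenGL feature set is available regardless of which X server ultimately displays the result.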