To my surprise and delight, I recently found out that Valve has been releasing Linux versions of most of their SteamVR/OpenVR run-time/SDK for a while now (OpenVR just hit version 1.0.0, go get it while it’s fresh). This is great news: it will allow me to port Vrui and all Vrui applications to the Vive headset and its tracked controllers in one fell swoop.

But before diving into developing a Lighthouse tracking driver plug-in for Vrui’s input device abstraction layer, I decided to cobble together a small testing utility to get a feel for OpenVR’s internal driver interface, and for the Lighthouse tracking system’s overall tracking quality.

The neat thing about OpenVR’s driver architecture is that it uses a well-defined — and published! — internal interface to talk to different tracking hardware, and that I can use that interface to read tracking data directly from Valve’s Lighthouse driver code without having to mess with client/server architectures. As an additional benefit, this also gives me access to all tracking data, not just the parts that Valve’s own vrserver sees fit for application consumption.

In a nutshell, my testing utility directly loads Valve’s driver_lighthouse.so dynamic library, which contains the sensor fusion and tracking code, and then calls a single exported function to retrieve a driver object. That driver object in turn offers several callbacks, such as one receiving button events, and, most importantly, one receiving tracking data updates as the tracking code calculates them from raw device sensor measurements. And that’s great.
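To make this concrete, here is a minimal sketch of that direct loading, based on the factory mechanism declared in openvr_driver.h around version 1.0.0. The interface version string changes between SDK releases, so treat the one used below as a placeholder:

```c++
// Minimal sketch: load Valve's Lighthouse driver directly and retrieve its
// tracked device provider via the single exported factory function. The
// interface version string must match the SDK the driver was built against.
#include <dlfcn.h>
#include <cstdio>

int main(void)
{
    /* Load the tracking driver, which contains the sensor fusion code: */
    void* handle = dlopen("driver_lighthouse.so", RTLD_NOW);
    if (handle == nullptr) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the driver's single exported factory function: */
    typedef void* (*HmdDriverFactoryFn)(const char* interfaceName, int* returnCode);
    HmdDriverFactoryFn factory =
        reinterpret_cast<HmdDriverFactoryFn>(dlsym(handle, "HmdDriverFactory"));
    if (factory == nullptr) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        return 1;
    }

    /* Retrieve the server-side tracked device provider from the factory: */
    int returnCode = 0;
    void* provider = factory("IServerTrackedDeviceProvider_004", &returnCode);
    if (provider == nullptr) {
        std::fprintf(stderr, "factory returned error code %d\n", returnCode);
        return 1;
    }

    /* From here, one calls the provider's Init() with an object implementing
       the driver host interface, whose pose update callback then receives
       tracking data as the sensor fusion code produces it. */
    std::printf("Got tracked device provider at %p\n", provider);

    dlclose(handle);
    return 0;
}
```

Everything beyond the factory call amounts to implementing the host-side callback interfaces from openvr_driver.h, which is exactly where the button and pose update callbacks mentioned above come in.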

So, let’s have a close look at Lighthouse tracking, specifically its implementation in the Vive Pre development kit, which is the one I happen to have.


Update Rate and Latency

Not surprisingly, there is a lot of wild speculation and misinformation about Lighthouse’s update rate floating around. What is known is the operating principle of the laser base stations that form the foundation of Lighthouse’s drift-free positional tracking. A quick refresher: each Lighthouse base station contains two lasers (see Figure 2). One laser projects a horizontal line of light (in the base station’s coordinate system) that sweeps through the tracking volume in front of the base station from bottom to top (when looking at the base station’s front); the other laser projects a vertical line of light that sweeps through the tracking volume from left to right (also when looking at the base station’s front).

Figure 2: Slow-motion video showing a Lighthouse base station in action (source).

Both lasers rotate around their respective axes at 3,600 rpm, or sixty revolutions per second. As there can (currently) only be one laser sweeping the tracking volume at any time, the two lasers inside one base station, and the four lasers of two linked base stations (A and B), are interleaved: First base station A’s vertical laser sweeps left to right; then, half a revolution or 8.333ms later, A’s horizontal laser sweeps bottom to top; then, another 8.333ms later, A’s lasers turn off and B’s vertical laser sweeps left to right; then after another 8.333ms B’s horizontal laser sweeps bottom to top; and finally B’s lasers turn off, A’s turn on, and the entire dance repeats from the beginning. To allow two (or potentially more) base stations to synchronize with each other, and to give tracked devices a way to measure their relative angles to each base station, each base station also contains an LED array that flashes a wide-angle synchronization pulse at the beginning of each 8.333ms laser sweep period.
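As an aside, this fixed rotation rate is what turns raw timing into geometry: the delay between a base station’s sync flash and a laser hit on a photodiode translates directly into a sweep angle. A minimal sketch of the conversion (the sensor clock rate below is a made-up placeholder; the 60 revolutions per second are the rotor rate given above):

```c++
// Convert the delay between a base station's sync pulse and a laser hit on a
// photodiode into a sweep angle. The rotor spins at 60 rev/s, so the angle
// grows linearly with time at 60 * 2*pi radians per second.
#include <cmath>

const double rotorFrequency = 60.0;        // rotor revolutions per second
const double ticksPerSecond = 48000000.0;  // hypothetical photodiode clock rate

double sweepAngleRadians(double ticksSinceSync)
{
    double seconds = ticksSinceSync / ticksPerSecond;
    return seconds * rotorFrequency * 2.0 * M_PI;  // angle swept since the sync pulse
}
```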

This means that any tracked object (headset or controller) inside the tracking volume is hit by a laser every 8.333ms, or at a rate of 120Hz (potential occlusion notwithstanding). This might lead one to assume that Lighthouse’s tracking update rate is 120Hz, and that the worst-case tracking latency is 8.333ms (if an application happened to query a tracking pose just before one laser sweep hits a tracked object, the application would receive data that’s stale by a tad less than 8.333ms).

But this assumption is not borne out by measurement. When polled in a tight loop, SteamVR’s vrserver tracking server spits out a new pose (position and orientation) for each controller at a rate of 250Hz, and a new pose for the headset at a rate of 225Hz (that’s a strange number, and Alan Yates himself could not confirm it, but it’s what I measured). Digging even deeper, by talking directly to the sensor fusion code as I explained above, the update rate is even higher: a new pose is calculated for the headset at a rate of 1,006Hz (again, weird but measured), and for each controller at a rate of 366Hz (also measured). As an aside, unlike with Oculus’ Rift DK2, poses are not batched in sets of two, but delivered independently, more or less 0.994ms and 2.732ms apart, respectively.
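For the record, here is roughly how the vrserver-side rates can be measured: poll in a tight loop and count how often the reported pose actually changes. This sketch uses the public OpenVR client API (SteamVR must already be running); detecting a “new” pose by comparing matrix contents is my own simplification:

```c++
// Rough sketch: estimate vrserver's pose update rate for the headset by
// polling in a tight loop and counting how often the reported pose changes.
#include <openvr.h>
#include <chrono>
#include <cstdio>
#include <cstring>

int main(void)
{
    vr::EVRInitError error = vr::VRInitError_None;
    vr::IVRSystem* system = vr::VR_Init(&error, vr::VRApplication_Background);
    if (error != vr::VRInitError_None)
        return 1;

    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    vr::HmdMatrix34_t lastPose;
    std::memset(&lastPose, 0, sizeof(lastPose));
    unsigned int numUpdates = 0;

    auto start = std::chrono::steady_clock::now();
    while (std::chrono::steady_clock::now() - start < std::chrono::seconds(10)) {
        system->GetDeviceToAbsoluteTrackingPose(vr::TrackingUniverseStanding, 0.0f,
                                                poses, vr::k_unMaxTrackedDeviceCount);
        /* Count a new headset pose whenever the reported matrix changed: */
        const vr::TrackedDevicePose_t& hmd = poses[vr::k_unTrackedDeviceIndex_Hmd];
        if (hmd.bPoseIsValid &&
            std::memcmp(&hmd.mDeviceToAbsoluteTracking, &lastPose, sizeof(lastPose)) != 0) {
            lastPose = hmd.mDeviceToAbsoluteTracking;
            ++numUpdates;
        }
    }
    std::printf("Headset pose update rate: %.1f Hz\n", numUpdates / 10.0);

    vr::VR_Shutdown();
    return 0;
}
```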

My explanation is that these are the rates at which raw sensor samples, i.e., linear accelerations and angular velocities from each tracked object’s built-in inertial measurement unit, and timing deltas for laser hits on each photodiode, arrive at the host, either via USB (for the headset) or wirelessly (for the controllers). The sensor fusion code inside driver_lighthouse.so then integrates these raw samples into a best guess for the current position and orientation of each device, and hands them off to the driver host via the aforementioned callback function. Apparently, vrserver subsequently bundles those measurements and sends them to VR applications over its client/server interface at a reduced rate, probably to avoid overwhelming the interface or applications with too much (and mostly redundant) data.

This means that at OpenVR’s internal interface, worst-case latency for head tracking data is about 1ms, and worst-case latency for controller tracking data is about 2.7ms, assuming that wire(-less) transmission and pose calculation add negligible latency. At vrserver’s client/server interface, on the other hand, worst-case latency is 4.444ms and 4ms, respectively (assuming my 225Hz and 250Hz measurements are correct).

Residual Noise or Tracking Jitter

The next important spec of a positional tracking system is residual noise, or the amount by which the reported position of a tracked device jitters when the device is actually fixed in space. I measured noise by placing the headset (see Figure 3) and one controller on a chair in the middle of my tracked space, which has two Lighthouse base stations about 2.4m above the floor, and about 4m apart.

Figure 3: Measuring residual headset tracking noise.

As can be seen in Figure 3, residual noise with two base stations is isotropic, and has a range of about 0.3mm. With only a single base station, the noise distribution becomes highly anisotropic, with 0.3mm lateral to the remaining base station, and 2.1mm in the distance direction. The noise magnitude and distribution for controllers is very similar. This anisotropy is to be expected: Lighthouse-based pose estimation boils down to the same perspective-n-point reconstruction problem encountered in camera-based tracking, where lateral-to-camera error grows linearly with distance, and distance-to-camera error grows quadratically with distance.
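For reference, the jitter numbers above are simple per-axis standard deviations over a log of positions reported for a physically resting device; a minimal sketch of that computation:

```c++
// Minimal sketch: per-axis residual noise (standard deviation) from a log
// of positions reported for a device that is physically at rest.
#include <vector>
#include <array>
#include <cmath>

std::array<double, 3> positionJitter(const std::vector<std::array<double, 3>>& samples)
{
    std::array<double, 3> mean = {0.0, 0.0, 0.0};
    for (const auto& p : samples)
        for (int i = 0; i < 3; ++i)
            mean[i] += p[i];
    for (int i = 0; i < 3; ++i)
        mean[i] /= double(samples.size());

    std::array<double, 3> variance = {0.0, 0.0, 0.0};
    for (const auto& p : samples)
        for (int i = 0; i < 3; ++i)
            variance[i] += (p[i] - mean[i]) * (p[i] - mean[i]);

    std::array<double, 3> sigma;
    for (int i = 0; i < 3; ++i)
        sigma[i] = std::sqrt(variance[i] / double(samples.size()));
    return sigma;  // anisotropy shows up as unequal sigmas per axis
}
```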

Inertial Dead Reckoning and Drift Correction

As can already be deduced from the high tracking update rates described above, Lighthouse does not simply update a tracked object’s position and orientation when it is hit by a laser sweep. Instead, the current tracking estimate is primarily advanced by integrating linear acceleration and angular velocity measurements from each device’s built-in inertial measurement unit (IMU) via dead reckoning, and is updated at the rate at which samples arrive from said IMUs. The Lighthouse base stations merely control the build-up of positional and orientational drift that is inherent in integrating noisy and biased measurements, as explained in this video. Figure 4 shows the precise effect of drift build-up and drift correction on headset tracking data, and Figure 5 shows the same for controller tracking data.
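To make the dead-reckoning part concrete, here is a sketch of a single integration step, written as a plain first-order integrator. Valve’s actual filter is certainly more sophisticated; the Eigen-based types and the Y-up gravity convention are my choices:

```c++
// Sketch of one inertial dead-reckoning step: integrate angular velocity into
// orientation, and linear acceleration into velocity and position. This is a
// simple first-order integrator, not Valve's actual filter.
#include <Eigen/Geometry>  // assumed dependency for quaternion/vector math

struct State {
    Eigen::Quaterniond orientation;  // device-to-world rotation
    Eigen::Vector3d position;        // meters, world frame
    Eigen::Vector3d velocity;        // meters/second, world frame
};

void deadReckon(State& s,
                const Eigen::Vector3d& angularVelocity,    // rad/s, device frame
                const Eigen::Vector3d& linearAcceleration, // m/s^2, device frame
                double dt)                                 // ~1ms for the headset
{
    /* Integrate orientation: rotate by |omega|*dt about the current axis;
       right-multiplication because omega is measured in the device frame: */
    Eigen::Vector3d rotVec = angularVelocity * dt;
    double angle = rotVec.norm();
    if (angle > 0.0)
        s.orientation = s.orientation *
                        Eigen::Quaterniond(Eigen::AngleAxisd(angle, rotVec / angle));

    /* Transform acceleration into the world frame and remove gravity (Y-up): */
    Eigen::Vector3d worldAccel = s.orientation * linearAcceleration
                                 - Eigen::Vector3d(0.0, 9.81, 0.0);

    /* Integrate acceleration into velocity, and velocity into position: */
    s.position += s.velocity * dt + 0.5 * worldAccel * dt * dt;
    s.velocity += worldAccel * dt;
}
```

Run on its own, this integrator accumulates error without bound; the laser sweeps provide exactly the external measurements needed to rein that error in.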

Figure 4: Inertial dead reckoning, drift build-up, and drift correction in headset tracking.

Figure 5: Inertial dead reckoning, drift build-up, and drift correction in controller tracking.

Partial Interleaved Drift Correction

Because each base station uses two lasers that sweep the tracking volume alternately left-to-right and bottom-to-top, Lighthouse adds an extra wrinkle to drift correction (see Figure 5). Unlike camera-based tracking systems such as Oculus’ Constellation, which measure the camera-relative X and Y position of all tracked markers at the same point in time, Lighthouse only measures a tracked object’s photodiodes’ X positions during a left-to-right sweep, and their Y positions during a bottom-to-top sweep, 8.333ms offset from each other.

This has two major effects. For one, it adds complexity to the sensor fusion algorithm. Instead of constraining the pose estimate of a tracked object to the full pose computed from a single camera image taken at a single point in time (or over a very brief interval), the sensor fusion code has to constrain the tracking solution in two independent steps, one for X and one for Y. Looking even closer, it turns out that there isn’t even a partial (in X or Y) Lighthouse-derived pose at any time, as a device’s photodiodes are hit at different times while the laser sweeps through space. The device’s current (estimated) motion has to be taken into account to accumulate a full pose estimate over at least one laser sweep period. Fortunately, widely-used sensor fusion algorithms such as the Kalman filter are in principle flexible enough to support such partial state updates.
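Concretely, in an (extended) Kalman filter each individual photodiode hit can be folded in as a scalar measurement update, with a measurement function that predicts the single swept angle from the full current state. The following sketch shows the textbook update equations; the state size and the measurement Jacobian are placeholders, not Valve’s actual formulation:

```c++
// Sketch of a partial (scalar) Kalman measurement update: a single laser hit
// on one photodiode constrains only one sweep angle, so the measurement model
// h(x) maps the full state to a single predicted angle. Standard EKF update;
// the state layout and measurement function are placeholders.
#include <Eigen/Dense>

const int N = 9;  // hypothetical state size: position, velocity, orientation error

void scalarKalmanUpdate(Eigen::Matrix<double, N, 1>& x,        // state estimate
                        Eigen::Matrix<double, N, N>& P,        // state covariance
                        double measuredAngle,                  // from sweep timing
                        double predictedAngle,                 // h(x)
                        const Eigen::Matrix<double, 1, N>& H,  // Jacobian of h at x
                        double R)                              // measurement noise variance
{
    double innovation = measuredAngle - predictedAngle;
    double S = (H * P * H.transpose())(0, 0) + R;          // innovation variance
    Eigen::Matrix<double, N, 1> K = P * H.transpose() / S; // Kalman gain
    x += K * innovation;
    P = (Eigen::Matrix<double, N, N>::Identity() - K * H) * P;
}
```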

A second effect is that the total amount of information provided by the Lighthouse system to the sensor fusion code is only half of what a camera-based system would provide at the same frame rate. Specifically, this means that, even though Lighthouse sweeps the tracking volume in intervals of 8.333ms, or at a rate of 120Hz, it only provides the same total amount of information as a camera-based system with a capture frame rate of 60Hz, as the camera delivers X and Y positions of all tracked markers for each frame. In other words, a dead-reckoning tracking system with Lighthouse drift correction running at 120Hz is not automatically twice as “good” as a dead-reckoning tracking system with camera-based drift correction running at 60Hz. To compare two such systems, one has to look in detail at actual tracking performance data (which I hope to do in a future post).

Precision and Accuracy

The final two parameters to investigate are precision, or how close multiple subsequent measurements of the same point in space are to each other, and accuracy, or how close a measurement of a point in space is to the actual position of that point in space. Both are important data points for 6-DOF tracking systems.

To measure them, I placed a 36″ long ruler onto the floor in the center of my tracked space, and measured the 3D position of each 1″ mark using a small probe tip I attached to one of the tracked controllers (the probe tip’s position in the controller’s local coordinate frame, which is essential for repeatable point measurements, was derived from a simple calibration procedure). I then generated an “ideal” point set by computing each mark’s theoretical position in some arbitrary coordinate system, and compared the measured points against it by running a non-linear point set alignment algorithm (see Figure 6). The RMS distance between the two point sets, after alignment, was 1.9mm, an estimate of Lighthouse’s tracking accuracy.
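For reference, here is a sketch of the alignment-and-RMS computation, using the closed-form Kabsch method as a stand-in for the non-linear alignment algorithm mentioned above (Eigen types are again my choice):

```c++
// Sketch: rigidly align a measured point set to its ideal counterpart and
// report the RMS residual distance. Uses the closed-form Kabsch method as a
// stand-in for a non-linear alignment; points are assumed paired by index.
#include <Eigen/Dense>
#include <vector>
#include <cmath>

double alignedRms(const std::vector<Eigen::Vector3d>& measured,
                  const std::vector<Eigen::Vector3d>& ideal)
{
    /* Center both point sets on their centroids: */
    Eigen::Vector3d cm = Eigen::Vector3d::Zero(), ci = Eigen::Vector3d::Zero();
    for (size_t k = 0; k < measured.size(); ++k) { cm += measured[k]; ci += ideal[k]; }
    cm /= double(measured.size());
    ci /= double(ideal.size());

    /* Accumulate the cross-covariance matrix of the centered point pairs: */
    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (size_t k = 0; k < measured.size(); ++k)
        cov += (measured[k] - cm) * (ideal[k] - ci).transpose();

    /* Optimal rotation via SVD (Kabsch), guarding against reflection: */
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(cov, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    if (R.determinant() < 0.0) {
        Eigen::Matrix3d D = Eigen::Matrix3d::Identity();
        D(2, 2) = -1.0;
        R = svd.matrixV() * D * svd.matrixU().transpose();
    }

    /* RMS distance between aligned measured points and ideal points: */
    double sum = 0.0;
    for (size_t k = 0; k < measured.size(); ++k)
        sum += (R * (measured[k] - cm) + ci - ideal[k]).squaredNorm();
    return std::sqrt(sum / double(measured.size()));
}
```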

Figure 6 shows some non-linear distortion in the measured point set, but overall accuracy is very good. To judge precision, I compared the measured point set against a second measurement of the same points, yielding a slightly smaller RMS distance of 1.5mm.

As a practical result, it is therefore possible to use a Lighthouse controller with an attached and calibrated probe tip as a large-area 3D digitizer, with an expected accuracy of about 2mm.