
Australian startup Immersive Robotics are poised to deliver what they claim is a truly universal wireless solution for PC VR headsets: tether-free virtual reality with minimal compromises in quality and extremely low latency.

I’ve always found it fascinating to observe how the advent of a new technology can accelerate the development of another. The push for rapid advances in smartphone specifications, for example, accelerated the development of mobile, high resolution displays and low-cost IMU components, without which today’s first generation consumer VR headsets could simply not have existed. Likewise, now that consumer VR is finally here, the demand for a solution to those ever more anachronistic, presence-sapping cables is driving innovation and rapid advancement in wireless video systems.

We’ve seen an explosion of stories centering on companies looking to take the (until now) slowly evolving sphere of wireless video broadcasting and give it a good shot in the arm. Most recently we’ve seen HTC partner with TPCast to deliver a wireless add-on solution for their SteamVR powered Vive VR system. But prior to that we’d already heard how Valve was investing a “significant amount” in wireless video streaming for VR by way of Nitero, a specialist in the field, with Quark VR and Serious Simulations on the scene earlier still. However, when it comes to pushing the boundaries of cutting edge technology, you can never have too many people racing to the finish line.

Immersive Robotics (IMR) are an Australian startup who have developed a wireless VR streaming and inline compression system designed from the very start to be used with PC VR headsets, offering claimed latency of under 2-3ms, minimal compromises to image quality, and operation over existing WiFi standards you’re very likely to have in your home right now. IMR call their system the Mach-2K and, from what we’ve seen so far, it shows considerable promise. In truth, IMR’s project is far from new, as the founders have been developing their technology since 2015, with a working proof of concept running first on an early OSVR headset before securing a government grant to fund further development.

IMR was co-founded by Tim Lucas and Dr Daniel Fitzgerald. Lucas has a background in unmanned vehicle design, having worked on multiple “prominent” UAV designs, but has also worked with VR and LiDAR-powered photogrammetry, having built what he describes as “the first Virtual Reality simulation of a 3D scanned environment from an aircraft”. Lucas’ co-founder Fitzgerald hails from aerospace avionics engineering, with a PhD focusing on the then-emerging unmanned drone industry. Fitzgerald has built auto-piloting software for said drones, an occupation which honed his talent for algorithmic software development.

With the virtual reality industry now growing rapidly, the duo have set about designing a system built around proprietary software algorithms that delivers imagery to VR headsets wirelessly. “Basically from an early point in modern VR history, my business partner Dr Daniel Fitzgerald and I decided to tackle the problem of making a HMD wireless,” Lucas tells us, “Our original area of expertise was in designing high-end drones and we initially envisioned it as an interface for that area.” The team quickly realised that with the advent of consumer-level cost, room-scale VR, there were some significant opportunities to capitalise on. “Soon after looking into it, we realized that logically pretty soon everyone using tethered HMDs would probably just want to get rid of the wires anyway and that the potential in this growing market was significant,” Lucas tells us, “We designed a video compression algorithm from the ground up that could compress data down to acceptable rates for current wireless technology but at the same time eliminating the flaws of current compression technology that make it unsuitable for VR such as high added latency.”

“What we ended up with was a compression and decompression algorithm running on individual boards, which is able to plug into the HTC Vive and compress its data down by around 95% with less than 1ms additional latency. Most of all there is no visible degradation to what the user normally sees with the cables.”

That system is called the Mach-2K and comprises a battery-powered receiver, small enough to be worn on a belt by the player, which attaches to the headset’s USB, audio and HDMI leads, plus a transmission device attached to the PC which beams native resolution 2160 x 1200 images at 90Hz to the target VR headset, currently an HTC Vive. IMR have developed hand-crafted algorithms capable of achieving up to 95% compression on those images while adding under 2-3ms of motion-to-photon latency, all delivered over a vanilla WiFi connection.
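Some back-of-the-envelope arithmetic shows why compression on that order is essential. The figures below are rough assumptions (24 bits per pixel, nominal link rates), not IMR-provided specifications, but they illustrate the gap between a raw headset video feed and what 802.11ac can realistically carry:

```python
# Rough sketch: raw bandwidth of a 2160x1200 @ 90Hz feed vs. 802.11ac.
# Assumes 24 bits per pixel; link-rate figures are approximations.

width, height, fps, bits_per_pixel = 2160, 1200, 90, 24

raw_bps = width * height * bits_per_pixel * fps
print(f"Uncompressed: {raw_bps / 1e9:.2f} Gbps")  # ~5.60 Gbps

# A 95% reduction leaves 5% of the original data rate.
compressed_mbps = raw_bps * 0.05 / 1e6
print(f"After 95% compression: {compressed_mbps:.0f} Mbps")  # ~280 Mbps

# Real-world 802.11ac throughput typically tops out at a few hundred
# Mbps to around 1 Gbps, so the compressed stream fits; the raw one doesn't.
```

The uncompressed feed needs several times what a consumer 5GHz WiFi link can sustain, which is why the alternative approaches mentioned above either compress aggressively or move to 60GHz bands with far more raw capacity.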

As if that weren’t enough, the two devices, each working in tandem to compress and then decompress imagery at source and destination respectively, were originally conceived to handle 4K per-eye resolutions at up to 120Hz, ready for the next generation of high-spec VR devices. “At the moment we have actually scaled it back for HTC Vive support,” says Lucas, “it will support 4K per eye which we believe to be a near future requirement,” so there’s room here for IMR’s technology to evolve alongside advances in VR headsets.

Mach-2K Specs:

Current resolution fully supported: 2160 x 1200

Current frame-rate fully supported: 90Hz

Planned near-future resolution: 4K per eye

Planned near-future frame-rate: 120Hz

Main CPU: FPGA

I/O: HDMI, USB 2.0, 12V out

Eye tracking input

Supply power: 5V DC

Current frequencies: 802.11ac Wi-Fi (5GHz)

Future supported frequencies: up to 60GHz WiGig

On-board software: B.A.I.T. “Biologically Augmented Image Transmission” algorithm

OEM and SDK options available, allowing third parties to create application-specific modules for the algorithm

User-selectable compression schemes


Skeptical? So was I. So I asked IMR for some example images which demonstrate the before and after image quality of the Mach-2K system. The images below represent IMR’s development progress as they’ve tuned and iterated upon that compression algorithm. Each image grid compares fidelity against an original, raw image after passing through the company’s older V2 algorithm and their current V3 iteration. Click on each to load the full size image.

Lossy (not a negative term, merely indicating data is discarded) video compression aims to shrink data sizes by discarding some of the data used to describe a scene. As the same image still needs to be described in each frame using less data, naturally there are compromises which must be made. A telltale sign of a compressed scene is loss of subtle colour fidelity, seen most glaringly as banding or posterisation on smooth colour gradients. You can see IMR’s older V2 algorithm struggling to reproduce the gradients as accurately as the raw image, but their current method improves significantly, with minimal extra banding introduced.
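To make the banding effect concrete, here is a minimal illustrative sketch (not IMR’s algorithm): coarsely quantizing a smooth 8-bit gradient collapses its 256 intensity levels into a handful of flat bands, which is exactly what shows up as posterisation on screen:

```python
# Illustrative only: coarse quantization of a smooth gradient shows
# why discarding colour precision produces visible banding.

def quantize(value, bits):
    """Reduce an 8-bit channel value to `bits` of precision."""
    step = 256 // (1 << bits)      # width of each quantization band
    return (value // step) * step

# A smooth 8-bit gradient: 256 distinct intensity levels.
gradient = list(range(256))

# The same gradient reduced to 5 bits per channel.
banded = [quantize(v, 5) for v in gradient]

print(len(set(gradient)))  # 256 distinct levels -> smooth ramp
print(len(set(banded)))    # 32 distinct levels -> visible steps
```

Instead of a continuous ramp, the quantized version jumps in steps of 8 intensity values at a time; on a large, slowly varying surface like a sky, each jump reads as a distinct band.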

If that level of image quality is indeed representative of the real-time, 90Hz experience, I can very easily see many users unable to distinguish between wired and wireless versions of the experience, except perhaps in especially challenging VR scenes.

Let me be clear here, however: we have not yet seen this system in action for ourselves, so even assuming that compression quality is indeed representative, we are still unable to judge added latency, the area of performance which will of course make or break the system.

Ultimately, IMR see their technology being used as a way to provide universal wireless solutions for all VR headsets. “Our algorithm is designed to be as agnostic as possible with wireless equipment,” says Lucas, “we have demonstrated it to some of the world’s leading WiFi and VR experts and the general consensus is that our latency is the best anyone has seen, allowing various options for integration which obviously would come with their own overheads.”