DKesserich: I hardly think that a three year old datapoint from a time when the majority of the software available for Oculus was written with the DK1 SDK is relevant to the average HMD owning user today, when positional tracking is an integral part of three of the most popular VR devices on the market. I also find it a little disturbing that, per your response in the 2.8 dev thread, you’ve been doing the majority of your testing on drastically out of date hardware (DK2), or hardware that is owned by a fraction of a fraction of a percent of the market (the Deepoon E2? Seriously?). I work in a VR company, and I can say, unequivocally, that as far as true VR content creation is concerned positional tracking is crucial. The lack of it was immediately apparent (right after the FOV, convergence, and framerate problems), especially as there doesn’t even appear to be any sort of neck model on your rotation, which non-positional devices like the GearVR and Daydream implement. And if the OpenHMD devs are working on Pimax and LG 360 VR support, they need to stop wasting their time. Nobody owns those, and nobody is developing software for them.

So, I have been working with VR since 2013 and consult quite a bit for bigger companies here and there, but I have never had someone disregard such a large portion of users.

Devices like the Pimax have really high sales numbers (both inside and outside of Asia), and I have seen them used as primary devices within companies in their workflow.

The same goes for devices like the GearVR, which is commonly used by companies working with 3D content and games, despite not having positional tracking.

The reason for testing a lot with the DK2 is that a lot of Blender users own one, and both the device and its implementation are really stable.

Most people we asked to test our branch over the last half year had one, so it was a logical choice to use that, though I also tested with the PSVR (Linux, Windows) and the Vive on Linux.

Again, I am not arguing that we don’t need positional tracking; I am arguing that it is not as crucial as some people make it sound.

Positional tracking is in development for multiple devices (Rift, Vive, PSVR, and unannounced devices) and will land sooner or later. Being able to merge the current support for release will make it easier to add this to Blender down the line and to work on getting more things ready for VR.

Fixes for FOV and convergence will be pushed today or tomorrow, including a new lens calibration for the Vive.

sebastian_k: I didn’t have time to test the OpenHMD build yet, but I do agree that VR in Blender without positional tracking is not so very useful. Sure, in our studio we do produce mobile VR apps with only rotation tracking as well, but that’s a different kind of use-case. For passive applications, such as entertainment, info or educational VR apps, positional tracking is not always necessary.

But for a productive, creative application such as Blender positional tracking is essential. Tilt Brush with only rotation? Nah…

We had an intern in our studio who was investigating openVR with Blender and the HTC Vive, using the addon that was mentioned in one of the posts above. The addon itself works totally fine. And he even managed to get the controllers working, kind of. So you could grab and move things with the controllers, walk around your models etc. Of course only in a very hackish kind of way. But still, openVR seems to be working fine.

Anyway, I hope that positional tracking for at least the Vive and Rift can be implemented soon.

So, the problem with frameworks like OpenVR is that they are still limited mostly to Windows, with Linux currently in an early beta (and no macOS in the pipeline, it seems). It also disregards a huge portion of HMD owners (with the PSVR being one of the best-selling VR devices at the moment) and has terrible support for other devices.

I get that for Tilt Brush type applications positional tracking is required, but to have Blender ready for this kind of workflow, a lot has to be done within Blender itself to make it function properly (allowing multiple active objects, a proper VR scaling model, more UI adaptations to allow for position-aware UI, better handling of 3D positioned cursors, etc.), which will take time on its own.

But in the meantime, there are a lot of use cases that can be served right now, such as visual aid for 3D video renders (which was the original implementation case), architectural work and feedback in 3D (as reported by https://developer.blender.org/D2133#60494), content previewing for sit-down type games (still a large portion of the market), height indication for 3D printing, etc.

I also want to add that adding positional tracking for some devices will be impossible due to the dependency on libusb, which we cannot take on at this point. So positional tracking for the Rift will be out of the question for the near future in any case.

Using a Python addon has been tested in the past as well, but the performance was drastically lower, and tracking issues would appear on heavier scenes due to the lack of proper threading in Python.
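To illustrate the threading point (this is a minimal, hypothetical sketch, not Blender or OpenHMD code; `read_pose` is a stand-in for a real sensor query): an HMD reports orientation at a far higher rate than a heavy scene renders, so polling the tracker only once per rendered frame starves it of updates. Polling on a background thread keeps the latest pose fresh regardless of render speed:

```python
# Sketch: decouple tracker polling from a slow render loop.
# `read_pose` is illustrative; a real implementation would query the HMD.
import threading
import time

latest_pose = {"sample": 0}
stop = threading.Event()

def read_pose(n):
    # Placeholder for an actual sensor read.
    return {"sample": n}

def poll_tracker():
    # Runs at roughly the sensor rate, independent of rendering.
    n = 0
    while not stop.is_set():
        n += 1
        latest_pose.update(read_pose(n))
        time.sleep(0.001)

t = threading.Thread(target=poll_tracker, daemon=True)
t.start()

render_samples = []
for frame in range(10):       # simulate a slow ~50 FPS render loop
    time.sleep(0.02)
    render_samples.append(latest_pose["sample"])

stop.set()
t.join()

# The tracker produced far more samples than the renderer consumed,
# so each rendered frame still saw a recent pose.
print(render_samples)
```

In a pure Python addon, the GIL and per-frame scripting overhead make it hard to keep this polling loop serviced reliably under load, which is consistent with the tracking dropouts seen on heavier scenes.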

In comparison, using the addon with the Vive on one of the test systems (with a GTX 650 Ti) I got a lousy 15–20 FPS on one of the Caminandes scenes (which was one of the targets for the initial implementation), with rotational tracking being very inaccurate (due to not getting enough update frames) and positional data jumping around, which was unusable.

On the same system, the native C implementation got nearly 50 FPS on the same scene (close to regular viewport FPS), with tracking remaining accurate even with dropped frames.

Of course you could still use the addon if your system is high end and you choose to do so, but due to the lack of platform and device support, and the poor performance, making it part of core Blender is out of the question.