SceneVR has the concept of “The Grid”. The Grid is modelled on Burning Man: an endless flat expanse that starts at the origin (0, 0) and heads out to the north, east, south and west.

This tree signifies the origin (0,0,0) of the grid

Each position on the grid is represented by a coordinate, e.g. (0, 0) or (3, -2). The grid is made up of A-Frame scenes, which are HTML markup that gets converted into 3D shapes. A scene on the grid is exactly 32 metres by 32 metres in size, and 64 metres high. Currently, all scenes on the grid sit on the same ground plane.
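As a minimal sketch of how a position in the world might map to a grid coordinate (the function name and the assumption that scene (x, z) occupies the 32 m square whose south-west corner sits at (32x, 32z) are mine, not SceneVR’s actual code):

```typescript
// Hypothetical helper: convert a world-space position (in metres) into
// a Grid coordinate, assuming 32 m x 32 m cells anchored at the origin.
const SCENE_SIZE = 32;

interface GridCoord {
  x: number;
  z: number;
}

function worldToGrid(worldX: number, worldZ: number): GridCoord {
  // Math.floor handles negative positions correctly, so cells to the
  // south and west of the origin get negative coordinates.
  return {
    x: Math.floor(worldX / SCENE_SIZE),
    z: Math.floor(worldZ / SCENE_SIZE),
  };
}

// A visitor standing 100 m east and 70 m south of the origin:
// worldToGrid(100, -70) → { x: 3, z: -3 }
```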

An overview of the grid, using Leaflet for mapping

As you move around the grid, SceneVR loads the nine nearest scenes and renders them for you: the scene you are in, plus the scenes to the north, north-east, east, south-east, and so on. Walk further to the east and more scenes will load, while the scenes you walked away from will unload. This is done for performance, so that your computer doesn’t have to render the tiny details of a scene that is hundreds of metres away from you.
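The load/unload step described above can be sketched as a set difference: take the 3×3 neighbourhood around the visitor’s current cell, then compare it with what is already resident. The function names and the string-keyed scene set are illustrative assumptions, not SceneVR’s actual implementation:

```typescript
// Hypothetical sketch of the nine-nearest-scenes logic.
type Cell = [number, number];

// The 3x3 block of grid coordinates centred on (x, z).
function neighbourhood(x: number, z: number): Cell[] {
  const cells: Cell[] = [];
  for (let dx = -1; dx <= 1; dx++) {
    for (let dz = -1; dz <= 1; dz++) {
      cells.push([x + dx, z + dz]);
    }
  }
  return cells;
}

// Diff the currently loaded scenes against the wanted neighbourhood
// to decide which scenes to fetch and which to unload.
function diffScenes(
  loaded: Set<string>,
  current: Cell,
): { load: string[]; unload: string[] } {
  const wanted = new Set(
    neighbourhood(current[0], current[1]).map(([x, z]) => `${x},${z}`),
  );
  return {
    load: [...wanted].filter((key) => !loaded.has(key)),
    unload: [...loaded].filter((key) => !wanted.has(key)),
  };
}
```

Walking one cell east means only a single column of three scenes needs loading, and the column three cells behind you can be released.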

Approximations

However, having things on the grid pop out of existence as you walk away from them is jarring. In the real world you can’t make out detail, but you can still see a tree from several hundred metres away. What SceneVR needed was a way of rendering an approximation of a scene from far away, so that instead of things popping in and out of existence, you see a distant scene, and when you get close to it, the real scene loads and you can see the fine detail.

In this screenshot, the teapot and the manatees are real 3D models

But when you walk back another 30 metres, the teapot gets replaced with a 2D representation

We solve this problem for The Grid by photographing each scene from four elevations when it is uploaded: north, south, east and west. Each photo is an orthographic rendering of the scene on a transparent background. Although it is only an approximation of how the scene looks, it does a pretty good job of showing you what the scene will look like when you get closer to it.
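One way to use the four captures is to pick whichever one faces the viewer, based on the horizontal bearing from the scene to the camera. This is a sketch under my own assumptions (north is the +z direction, and the quadrant boundaries sit at the diagonals); SceneVR may choose the image differently:

```typescript
// Hypothetical sketch: choose which of the four elevation captures to
// display for a distant scene, given the viewer's position.
type Elevation = "north" | "east" | "south" | "west";

function captureFor(
  sceneX: number,
  sceneZ: number,
  viewerX: number,
  viewerZ: number,
): Elevation {
  // Bearing from the scene's centre to the viewer, in degrees:
  // 0 = north (+z), 90 = east (+x), 180 = south, 270 = west.
  const deg =
    (Math.atan2(viewerX - sceneX, viewerZ - sceneZ) * 180) / Math.PI;
  const bearing = (deg + 360) % 360;
  if (bearing >= 315 || bearing < 45) return "north";
  if (bearing < 135) return "east";
  if (bearing < 225) return "south";
  return "west";
}
```

A viewer approaching from the east sees the east-facing capture, so the flat stand-in roughly matches what the real scene will look like from that direction.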

Instead of having to render a 4,000-face manatee, your GPU only has to render a 512-pixel-wide picture of a manatee. This is similar to how games on low-end systems render two perpendicular planes to represent a distant tree, except instead of a tree, we’re rendering far-off scenes.

We’re still working on the exact implementation, and there is more development to do around smoothly transitioning from the approximation to the real scene. For example, we should wait until the scene has fully loaded before fading out the approximation, to stop things popping in and out, and we could use fog to fade out things in the distance. But the experiments so far are looking good.
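The “wait until loaded, then fade” idea above can be reduced to a tiny pure function: the approximation stays fully opaque until the real scene reports it has loaded, then fades linearly over a fixed window. The function name and the 500 ms default are illustrative assumptions:

```typescript
// Hypothetical sketch: opacity of the flat approximation as the real
// scene loads in. Keeps the stand-in fully visible until the scene is
// ready, then fades it out over fadeMs milliseconds.
function approximationOpacity(
  sceneLoaded: boolean,
  msSinceLoaded: number,
  fadeMs = 500,
): number {
  if (!sceneLoaded) return 1; // never fade before the scene is ready
  return Math.max(0, 1 - msSinceLoaded / fadeMs); // linear fade to 0
}
```

Driving a per-frame opacity from this keeps the hand-off smooth: nothing disappears until its replacement is actually on screen.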

Onward to the metaverse!