Artificial Neural Networks and Virtual Reality

A vision came to me through the 802.11 in the form of a dream-catcher.

I had a collection of panospheres I scraped, with permission, from good John and got inspired to create.



I spun up some Linux machines and began to make something. I'll let the pictures do most of the talking.

```bash
#!/bin/bash
# cut the strip and turn it into a cube-map
i=`ls | grep jpg`
convert "$i" -crop 256x256 out_%02d.jpg
adjoin -m V out_04.jpg out_01.jpg out_05.jpg test.jpg
adjoin -m H -g center out_00.jpg test.jpg test1.jpg
adjoin -m H -g center test1.jpg out_02.jpg out_03.jpg "$i"
mogrify -rotate 90 "$i"
rm {test,test1}.jpg
rm out_*
```

Convolutional neural networks are mathematically modelled after biological vision and give the machine a deep understanding of the context of information, most often images. I trained neural-style on works from a variety of artists throughout history, such as Picasso, de Chirico, Monet, Dalí, and Klimt, as well as stained-glass nativity scenes, video games, and other experiments.
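For a feel of what neural-style is optimizing: following Gatys et al., it captures "style" as the Gram matrix of a convolutional layer's feature maps, i.e. the correlations between filter responses, and drives the generated image's Gram matrices toward the style image's. A minimal sketch with toy feature values (the numbers here are made up for illustration, not real network activations):

```python
import numpy as np

def gram_matrix(features):
    """features: array of shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten each feature map
    return f @ f.T / (c * h * w)     # (c, c) matrix of filter correlations

# Toy "feature maps" for a style image and the generated image.
rng = np.random.default_rng(0)
style_feats = rng.random((8, 4, 4))
generated_feats = rng.random((8, 4, 4))

# The per-layer style loss: squared difference of the two Gram matrices.
style_loss = np.sum((gram_matrix(style_feats) - gram_matrix(generated_feats)) ** 2)
```

Matching these correlation statistics, rather than the pixels themselves, is what lets the texture and brushwork of a Klimt or a Dalí transfer onto an entirely different scene.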

Frames are saved between iterations, offering a window into the mind of the program.

The cross is a portal into a world that is rendered by unfolding the cross into a skybox.
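The unfolding step amounts to slicing face-sized tiles out of the cross image. A small sketch, with the image modelled as a plain 2D list of pixels; note the cell-to-face assignment below is an assumption for illustration (a 3-wide, 4-tall vertical cross), and should be adjusted to match how the cross was actually assembled:

```python
# (tile_row, tile_col) of each face within a 3x4 vertical-cross layout.
# This mapping is assumed for illustration, not taken from the script above.
FACE_CELLS = {
    "up":    (0, 1),
    "left":  (1, 0),
    "front": (1, 1),
    "right": (1, 2),
    "down":  (2, 1),
    "back":  (3, 1),
}

def extract_faces(cross, tile):
    """cross: (4*tile) x (3*tile) 2D pixel list -> dict of tile x tile faces."""
    faces = {}
    for name, (r, c) in FACE_CELLS.items():
        faces[name] = [row[c * tile:(c + 1) * tile]
                       for row in cross[r * tile:(r + 1) * tile]]
    return faces

# Toy 8x6 "image" with tile size 2; each pixel is labelled by its tile cell.
tile = 2
cross = [[(r // tile, c // tile) for c in range(3 * tile)]
         for r in range(4 * tile)]
faces = extract_faces(cross, tile)
```

In practice the bash script above does the equivalent cropping with ImageMagick; this just makes the geometry explicit.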



After chopping it up into cubemaps, I made a simple FireBoxRoom to render the skybox:

```html
<html>
  <head>
    <title>Sublime</title>
  </head>
  <body>
    <FireBoxRoom>
      <Assets>
        <AssetImage id="sky_down" src="09.jpg" tex_clamp="true" />
        <AssetImage id="sky_right" src="06.jpg" tex_clamp="true" />
        <AssetImage id="sky_front" src="05.jpg" tex_clamp="true" />
        <AssetImage id="sky_back" src="07.jpg" tex_clamp="true" />
        <AssetImage id="sky_up" src="01.jpg" tex_clamp="true" />
        <AssetImage id="sky_left" src="04.jpg" tex_clamp="true" />
      </Assets>
      <Room use_local_asset="plane" visible="false" pos="0.000000 0.000000 0.000000" xdir="-1.000000 0.000000 -0.000000" ydir="0.000000 1.000000 0.000000" zdir="0.000000 0.000000 -1.000000" col="" skybox_down_id="sky_down" skybox_right_id="sky_right" skybox_front_id="sky_front" skybox_back_id="sky_back" skybox_up_id="sky_up" skybox_left_id="sky_left" default_sounds="false" cursor_visible="true">
      </Room>
    </FireBoxRoom>
  </body>
</html>
```

There were some obvious defects:

One can see the outline of the skybox, caused by the edges interfering with one another when the cross is processed:

A GTX 960 can only manage the default 512 output resolution before running out of CUDA memory:

```
/usr/local/bin/luajit: /usr/local/share/lua/5.1/cudnn/SpatialConvolution.lua:96: cuda runtime error (2): out of memory at /home/alu/repo/cutorch/lib/THC/THCStorage.cu:44
```

I proceeded to create one final piece before ending the experiment, combining multiple neural networks: waifu2x, deepdream, and neural-style:

You can watch the video of the transformation here:

The final piece after many layers:



Part 2: Minerva Outdoor Gallery

What if, instead of looking at art in a virtual gallery, you could go inside the art and be in the painting?

This time, I chose to preprocess the cubemaps into equirectangular images first so that the edges could be blended seamlessly into a single image. This format is also best for video. You can take an equirectangular snapshot in Janus with [p] or ctrl-f8.
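The reason the seams disappear: every pixel of an equirectangular image is a direction on the sphere, so neighboring cube-face pixels end up adjacent in one continuous image and can be blended across. A hedged sketch of the core lookup (dedicated tools do the full resampling; this only shows which cube face a given equirectangular pixel samples, under a conventional axis/face naming that may differ from your cubemap's):

```python
import math

def equirect_to_face(u, v):
    """u, v in [0,1): normalized equirectangular coords -> cube face name."""
    lon = (u - 0.5) * 2 * math.pi   # longitude: -pi .. pi
    lat = (0.5 - v) * math.pi       # latitude:  pi/2 .. -pi/2
    # Direction on the unit sphere for this pixel.
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    # The dominant axis of the ray picks the cube face.
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "right" if x > 0 else "left"
    if ay >= ax and ay >= az:
        return "up" if y > 0 else "down"
    return "front" if z > 0 else "back"
```

Running the style transfer on the equirectangular image means the network never sees the artificial face boundaries at all.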

It takes about 2 minutes to process a single frame with 400 iterations at 512 resolution output. HD video will have to wait until I can upgrade the setup.
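To make that concrete, here is the rough arithmetic behind "HD video will have to wait," using the 2-minutes-per-frame figure above (the 30 fps frame rate and one-minute clip length are assumptions for the estimate):

```python
# Back-of-the-envelope cost of styling video at ~2 min/frame
# (400 iterations, 512 output, GTX 960).
minutes_per_frame = 2
fps = 30           # assumed target frame rate
clip_seconds = 60  # a one-minute clip

frames = fps * clip_seconds                    # total frames to style
total_hours = frames * minutes_per_frame / 60  # wall-clock hours of GPU time
```

At these numbers, a single minute of video costs about 60 hours of GPU time, before any attempt at higher resolution.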

View with Cardboard:



http://alpha.vrchive.com/image/PN

http://alpha.vrchive.com/image/P5

http://alpha.vrchive.com/image/PO

http://alpha.vrchive.com/image/ot

http://alpha.vrchive.com/image/Pa







The beginning is near

