Since my last post, I've shared my neural-network-inspired creations with others, and about a week ago the idea of building a VR art gallery came up. I'd like to dedicate this post to how quickly we've been able to materialize ideas into VR.

The theme of my art has been 'Made with Code': combining the power of the Linux command line with some creative hacking. For me, the art is all about the process used to achieve the end result. Glitch Art is one of my favorite styles, and I highly suggest checking out this excellent short introduction by PBS explaining the movement:

If you've read my previous posts you'll know that my workflow typically involves a combination of FFmpeg and ImageMagick strung together with bash loops to process large sets of images and video at a time. It's smart to keep the ImageMagick documentation open for easy reference when figuring out how to do certain manipulations from the command line.
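The shape of that batch loop is always the same; here's a minimal sketch with the ImageMagick call stubbed out by a plain cp, since the actual command and flags vary per effect (the filenames here are just stand-ins):

```shell
# Stand-in frames so the sketch is self-contained
mkdir -p out
touch frame1.png frame2.png

for f in *.png; do
    # In the real workflow this line is an ImageMagick or FFmpeg call,
    # e.g. convert "$f" -spread 10 "out/$f"
    cp "$f" "out/$f"
done
```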

Databending is the act of deliberately corrupting the data in a file with the intent to create Glitch Art. The results can be described as beautifully broken:

This image is constructed entirely by code.
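At its simplest, databending is just overwriting bytes past a file's header. Here's a minimal sketch; the input is fabricated random bytes so the demo is self-contained, but with a real JPEG the idea is the same: leave the first few hundred bytes alone so the file still opens.

```shell
# Fabricate a stand-in "image" (in practice, use any real JPEG)
head -c 5000 /dev/urandom > input.jpg
cp input.jpg glitched.jpg

# Stomp 16 bytes at offset 2000, deep enough to spare a real header
dd if=/dev/zero of=glitched.jpg bs=1 seek=2000 count=16 conv=notrunc 2>/dev/null

# Confirm the copy now differs from the original
cmp -s input.jpg glitched.jpg || echo "glitched"
```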

I was inspired by one of my favorite artists, Julian Oliver, who creates awesome electronic artworks combining creative hacking and information visualization.

To make my method of sourcing content for artworks less deterministic, I took to the air and extracted images from public, insecure WiFi traffic. This is why you should use HTTPS Everywhere.

I had a packet capture from a cool coffee shop I visited in Colorado that I'd kept as a memento, and I employed a method of extracting images from the captured wireless traffic. The good parts are here:

#!/bin/bash
# Shell script adapted from Julian Oliver
PCAP=$1
# Strip the WiFi encryption from the capture
airdecap-ng "$PCAP"
# airdecap-ng writes <name>-dec.<ext> alongside the original
DCAP=${PCAP%.*}-dec.$(echo "$PCAP" | awk -F . '{print $NF}')
DIR=$(basename "$DCAP" | cut -d '.' -f 1)
mkdir "$DIR" && cd "$DIR"
# Reassemble the TCP streams, then carve files out of them
tcpflow -r "../$DCAP"
mkdir data
foremost -i * -o data

The recovered data gets sorted into separate folders by file format [PNG/GIF/JPG/ZIP]. It's even possible to visualize the image traffic because of the way tcpflow organizes the data:
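A quick way to tally what foremost carved out, assuming its default data/<type>/ layout (the tree below is fabricated just so the one-liner has something to chew on):

```shell
# Fabricate a small foremost-style output tree for the demo
mkdir -p data/jpg data/png data/zip
touch data/jpg/00001.jpg data/jpg/00002.jpg data/png/00003.png data/zip/00004.zip

# Count recovered files per format
find data -type f | awk -F/ '{print $2}' | sort | uniq -c
```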

The glitchy look of the pictures is an effect of the forensics utility foremost, which reassembles image data from the incomplete TCP streams reflowed by tcpflow. If one were to use the better-known driftnet, the images tend to come off the network cleanly. One idea I'd like to implement later is to assemble all the half-broken images into a giant "stream" by continually adjoining them into one long image file. I could then map that onto a rotating object in VR to create the effect of a flowing river of WiFi traffic :)

Next, I utilized a ruby script a friend on IRC made that glitches images together. You can read more about the script here. Feeding it groups of images creates interesting collages from whatever inputs you give it. The output is saved as glitch.png. I wrote a nifty script that makes 100 collages at a time by shuffling and glitching 10 random images per pass:

#!/bin/bash
set -e
for i in {001..100}
do
    ls -1 *.jpg | shuf -n 10 | xargs ./glitch.rb 25
    cp glitch.png 1/out_$i.png
done

## Can also be made into an alias (single quotes, so $i expands when the alias runs rather than when it's defined)
alias kek='for i in {001..100}; do ls | grep jpg | sort -R | tail -n 10 | xargs ./glitch.rb 25 && cp glitch.png 1/out_$i.png; done'

The trick to making this script work is making sure the directory is clear of corrupt files that would cause the script to crash.
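One cheap pre-flight check is to quarantine anything that doesn't start with the JPEG SOI marker (FF D8); ImageMagick's identify would be a more thorough validator. The two demo files below are fabricated so the sketch is self-contained:

```shell
mkdir -p broken
# Fabricate one plausible and one corrupt "JPEG"
printf '\xff\xd8\xff\xe0 rest of image' > good.jpg
printf 'not an image' > bad.jpg

for f in *.jpg; do
    # Read the first two bytes as hex and compare against the SOI marker
    sig=$(head -c 2 "$f" | od -An -tx1 | tr -d ' \n')
    [ "$sig" = "ffd8" ] || mv "$f" broken/
done
```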

Boom: a batch of 100 different collages that I can then go through, evolving the algorithm while picking the best ones. These were taken from a recent packet capture:

To randomize further, one could also pre-process the images to all be the same size with something like mogrify -resize 512x512! *.png, because the ruby script takes the largest image as the base, which is super awesome as demonstrated in this example:

A 360 photosphere with glitched WiFi traffic sprinkled in around the viewer.

My other inspiration came from the anniversary of 9/11, the day that changed the world. I call this piece "The Patriot Act", a reference to the PATRIOT Act that passed 45 days after the attack, enabling so much of the surveillance state to grow to where it is today.

I've taken my hobby interest in deep learning to the next level to see if I can make a creative AI that's able to make its own original art inspired by the world around it.



About a month ago, a paper was published describing an algorithm that is able to represent and transfer artistic style: http://arxiv.org/abs/1508.06576

Soon after the paper was published, the open source community went to work and released neuralart. Entire rooms of the art gallery will be dedicated to art made with artificial neural networks.

As for progress on the art gallery, Aussie has been documenting this project from the start. This is how much we've been able to accomplish in 6 days:

Test 2:

Test 3:

Test 4:



The plan is to have the gallery complete by Thursday, at which point I'll update you on plans for the art show and give a walk-through of the gallery itself!