Image Dark Matter

There comes a time in everyone’s life when they decide to grow up and do a thing. I recently decided to do a thing that had always seemed a bit too far in the aesthetic realm to be any concern of mine. That thing was image manipulation using canvas. As it turned out, the attempt became much more about data than anything else.

ImageDarkMatter • Usage

The core idea behind this program is to take a handful of source images and combine their pixel data into a new image that represents a blend of their characteristics. The term Dark Matter represents the unobservable majority of matter in the universe. Its existence is only inferred by other observations and aside from that, it is intangible.

This program is called Image Dark Matter because in my imagination I would like to believe there is substance that transcends given images taken at different points in time. I took a handful of images at a property my parents recently purchased in Cross Plains, WI (more on that here) and used some of them as source data for this program.

My goal was to combine these images together and abstractly represent my thoughts towards the property, my family, and nature in general at that point in time. That representation is very much Dark Matter in that it is only abstract, inferable, and intangible.

Here are the four images I used as source data:

Enough of the mumbo jumbo, lets get to it.

Canvas and getImageData

For those who are reading this with zero reference for canvas, this section is for you. Essentially an html5 canvas is a pixel playground you can create with JavaScript.

```js
var canvas = document.createElement("canvas");
canvas.width = 100;
canvas.height = 100;
document.body.appendChild(canvas);
var context = canvas.getContext("2d");
```

That’s the barebones way to create a canvas with JavaScript. canvas is the literal <canvas> html element. context is how we access the canvas. If we wanted to draw an image to the canvas, we would do the following on the context:

```js
var image = new Image();
image.src = "path/to/image.jpg";
image.onload = function() {
  context.drawImage(image, 0, 0); // (image, x, y)
};
```

This would draw an image to our canvas’ context. The best way to think about canvas is like the “flatten layers” command in Photoshop. Every time you do something on a canvas, it is just pixels. There is no “layer” in canvas; you simply make more canvases if you really need to layer things.

Now that we have a flat image on our canvas, we can use getImageData() to grab a rectangle of image data at x, y coordinates. The parameters of getImageData() are x, y, width, height. If I wanted the top-left 50×50 square, I would do the following on the context:

```js
var top_left_50_by_50_square = context.getImageData(0, 0, 50, 50);
```

Now, this data is represented by a JavaScript object with three properties. Our top_left_50_by_50_square looks something like the following:

```js
top_left_50_by_50_square = {
  width: 50,
  height: 50,
  data: [
    10, 76, 210, 255,
    46, 19, 12, 255
    // a bunch more stuff...
  ]
}
```

The above object is probably fairly easy to understand, except the data bit. You may be able to guess based on the 255 that I put in there. data is an array of rgba values. rgba is red, green, blue, alpha. alpha is a 0-255 representation of transparency, with 255 being fully opaque and 0 being fully transparent. This is different from CSS rgba(), where alpha runs from 0 to 1.

Think of data as a flattened sequence of per-pixel rgba values:

```js
[
  R1, G1, B1, A1, // pixel 1
  R2, G2, B2, A2, // pixel 2
  // ... and so on for each pixel
]
```

Reading this data could look something like this:

```js
var data = top_left_50_by_50_square.data;
var colors = [];
for (var i = 0, loop = data.length; i < loop; i += 4) {
  var color = {
    r: data[i],
    g: data[i + 1],
    b: data[i + 2],
    a: data[i + 3]
  };
  colors.push(color);
}
```
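Once you can read pixel data, you can also write it back. Here is a minimal sketch, not from the actual program, that inverts every pixel in a raw rgba array (invertPixels is a name I made up for illustration); in the browser, a modified array could then be drawn back onto the canvas via context.putImageData():

```js
// Invert every pixel in a flat rgba array. Works on a plain array or a
// Uint8ClampedArray from getImageData(); alpha is left untouched.
function invertPixels(data) {
  var out = data.slice();
  for (var i = 0; i < out.length; i += 4) {
    out[i]     = 255 - out[i];     // red
    out[i + 1] = 255 - out[i + 1]; // green
    out[i + 2] = 255 - out[i + 2]; // blue
    // out[i + 3] is alpha — leave it alone
  }
  return out;
}
```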

The point here is, that’s all it takes to get and read pixel data from an image. You don’t need to be a wizard to know the rgba value of every pixel in an image.

Combining Image Data

Now that we can grab all the data from the source images, we need to convert it into a format that is a lot easier to manipulate. The format I chose is hsl. hsl stands for hue, saturation, and lightness, and it is much more programmatically descriptive than rgb. Quick, tell me how light rgb(100, 20, 65) is. Exactly.

To do the conversion from rgb to hsl , I actually did what most good programmers do and borrowed it from someone else. This StackOverflow question led me to these Axon Flux rgb-hsl and hsl-rgb methods. Voila. Done.
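For reference, here is a sketch of the classic rgb-to-hsl conversion — the same standard math those borrowed methods implement, not a copy of them. Inputs are 0-255 channels; outputs are 0-1 values:

```js
// Convert rgb (0-255 per channel) to hsl (each value 0-1).
function rgbToHsl(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  var max = Math.max(r, g, b), min = Math.min(r, g, b);
  var h, s, l = (max + min) / 2; // lightness is the midpoint of max and min
  if (max === min) {
    h = s = 0; // achromatic: grays have no hue or saturation
  } else {
    var d = max - min;
    s = l > 0.5 ? d / (2 - max - min) : d / (max + min);
    switch (max) { // hue depends on which channel dominates
      case r: h = (g - b) / d + (g < b ? 6 : 0); break;
      case g: h = (b - r) / d + 2; break;
      case b: h = (r - g) / d + 4; break;
    }
    h /= 6;
  }
  return [h, s, l];
}
```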

#programming-secrets.

To avoid choking the browser (the thing still runs as slow as molasses and would apparently be sped up significantly by WebGL), I parse the image in 50px-square blocks. I loop over the blocks of pixels, and within each block I loop over the pixels. Inside that pixel loop, there are four methods I created to manipulate a pixel: average, middle, select, and random.
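To make the block walk concrete, here is a rough sketch of how that 50px grid could be generated — blockCoords and BLOCK_SIZE are names I invented for illustration, not from the actual program:

```js
// Produce the top-left coordinate of each 50px-square block
// covering a canvas of the given dimensions.
var BLOCK_SIZE = 50;
function blockCoords(width, height) {
  var coords = [];
  for (var y = 0; y < height; y += BLOCK_SIZE) {
    for (var x = 0; x < width; x += BLOCK_SIZE) {
      coords.push({ x: x, y: y });
    }
  }
  return coords;
}
```

Each coordinate pair would then be handed to getImageData(x, y, 50, 50) for one block's pixels.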

Average

This averages the hsl values per pixel. For each pixel, you take the hsl value from each source image and average their h, s, and l values together.

```js
var pixel_hue = average_hue_for_this_pixel;
var pixel_sat = average_sat_for_this_pixel;
var pixel_lit = average_lit_for_this_pixel;
```
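Filled in as a runnable sketch — the function name and the input shape are my own assumptions, where hsls holds one [h, s, l] triple per source image for the current pixel:

```js
// Average each channel across all source images for one pixel.
function averagePixel(hsls) {
  var sum = hsls.reduce(function(acc, hsl) {
    return [acc[0] + hsl[0], acc[1] + hsl[1], acc[2] + hsl[2]];
  }, [0, 0, 0]);
  return sum.map(function(total) {
    return total / hsls.length; // mean of each channel
  });
}
```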

Middle

This gets the midpoint hsl values per pixel. For each pixel, you take the hsl value from each source image and, per channel, get the value directly between the highest and the lowest. Note that the greens are more blue in this approach and that the lightness is blobbier than in “average”, as evidenced by the grayscale. With this source data, the result is fairly close to the “average” algorithm’s output.

```js
var pixel_hue = middle_hue_for_this_pixel;
var pixel_sat = middle_sat_for_this_pixel;
var pixel_lit = middle_lit_for_this_pixel;
```
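As a runnable sketch under the same assumed hsls shape as before (one [h, s, l] triple per source image; the function name is mine):

```js
// For each channel, take the point halfway between the highest
// and lowest value across the source images.
function middlePixel(hsls) {
  return [0, 1, 2].map(function(channel) {
    var values = hsls.map(function(hsl) { return hsl[channel]; });
    return (Math.max.apply(null, values) + Math.min.apply(null, values)) / 2;
  });
}
```

Note this is the range midpoint, not the mean — with three or more images, the in-between values are ignored entirely, which is why the output can diverge from “average”.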

Select

This randomly selects one source image’s block of pixels for the given coordinate. There is no hsl manipulation other than grayscaling it if that option is true.

```js
var random_image = images[Math.floor(Math.random() * images.length)];
var pixel_hue = random_image.hue_for_this_pixel;
var pixel_sat = random_image.sat_for_this_pixel;
var pixel_lit = random_image.lit_for_this_pixel;
```

Random

This randomly selects hsl values per pixel. To do this, when each pixel is determined, you randomly choose an image for each h , s , and l value. You could grab hue from image 1, saturation from image 2, and lightness from image 4. The result is a pixel color that is most abstractly based on the source images.

```js
var random_image_1 = images[Math.floor(Math.random() * images.length)];
var random_image_2 = images[Math.floor(Math.random() * images.length)];
var random_image_3 = images[Math.floor(Math.random() * images.length)];
var pixel_hue = random_image_1.hue_for_this_pixel;
var pixel_sat = random_image_2.sat_for_this_pixel;
var pixel_lit = random_image_3.lit_for_this_pixel;
```
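The same idea as a compact runnable sketch (again assuming the hypothetical hsls shape of one [h, s, l] triple per source image):

```js
// Pull each channel from an independently chosen source image.
function randomPixel(hsls) {
  return [0, 1, 2].map(function(channel) {
    var pick = hsls[Math.floor(Math.random() * hsls.length)];
    return pick[channel];
  });
}
```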

Try it out

A great thing about CodePen is that I can just give you a link to a new Pen from a Template and you can make this yourself with your own images. Just click this link and customize params to your liking.

Why do it?

I would be remiss if I didn’t say something about what I was pondering the whole time I made this.

Everything is terrifying. Whether it’s comprehending your loops, conserving memory, fearing you aren’t building something the “right way”, or just the general scariness of an idea you have no clue how to turn into a reality, it is always terrifying when you aren’t doing it. For that reason alone, I decided to make this thing. It does not produce something that you cannot produce with Photoshop. It does not necessarily make things that look good. It isn’t that efficient. Were those to be the reasons with which I justified this endeavor, I would have stopped 5 minutes in simply because it would not have been worth it.

I made this because I did not know how to. I did not even necessarily want to. None of this code applies to anything I am currently working on, and yet there is still value in challenging yourself to do things simply because it is a challenge to do them.

This is why I love to code, this is why CodePen is great. You should make things.

I am Jake Albaugh and am going to write this bio in first person. These days, I write on CodePen because I care more about it and you than I do about my personal site. Read more of my CodePen Posts. View my work on my CodePen profile. Or if you’re a hip millennial, “get at me” on my twitter @jake_albaugh.