This will be a look under the hood of Pixling World, an artificial life/evolution simulator/god simulator I’m building. As a player you take the role of an old-school deity, who gets to create his/her own world, put some creatures (“Pixlings”) into it and then breathe life into them. You can’t control the behavior of the creatures directly, as it’s evolved through real darwinian evolution over time, but as a god you can give them abilities to interact with each other and their environment. The goal is to create worlds that fascinate you and others.

When I’ve been posting about it online (post on /r/javascript and post on /r/proceduralgeneration) people have asked me “how does it work”, so I figured I’d give an overview in this blog post. First I’ll give a brief overview of the game, then I’ll talk about how neural network evolution works and finally I’ll get into the technical implementation of all this.

The game

For anyone not familiar with the game, here’s what it looks like:

Playing around with water dynamics in Pixling World

A couple of things to note:

A Pixling is a single life form that occupies one square in the world grid.

Each Pixling has a neural network that controls their behavior.

I can currently run over one million Pixlings at 60 steps per second on my GTX 1070 laptop (even though it’s web based).

It’s real darwinian evolution. There are no “goals” for the Pixlings, whoever spreads more spreads more.

The reason I’m excited about this project is because I don’t really know what the limits of the Pixling behaviors are right now. Can they evolve to collaborate? Communicate? Make plans? Create complex civilizations? I don’t really know. There are already a couple of cool evolved behaviors among worlds published by players, such as predators and prey, foraging and nesting, and using smell to find objects (I created that one).

Example of Pixlings that have evolved to find “apples” based on the green “smell” they emit

As a player you get to build your own world and configure exactly what kind of universe you want your Pixlings to live in. You define environments, set up rules and give your Pixlings abilities. Here’s what it looks like to build a world as a player:

Editing your world in Pixling World

Once you have a first version of your world, you can start putting Pixlings into it and see what behaviors they evolve.

How does neural network evolution work?

As I mentioned before, each Pixling has a unique neural network which controls its behavior. A neural network, for those not familiar, is a function that takes some input, runs it through a matrix of numbers, and produces an output. In this case the inputs to a Pixling’s neural network, or its “brain” if you like, are information about its environment and other Pixlings around it. The output is a decision: which action should the Pixling take next.

Neural networks have what’s called “parameters” or “weights”, which control exactly how they process the input. These parameters are just a big array of numbers, so for instance a Pixling with a brain that looks like this: [5, 3, 9, 2] may produce the decision “Eat” in a certain situation, whereas another one with parameters [2, 9, 3, 8] may produce the decision “Sleep” in the same situation. It’s all up to their parameters.
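To make this concrete, here’s a minimal sketch (not the actual Pixling World code — the action names and the single linear layer are made up for illustration) of how two brains with different parameters can pick different actions in the same situation:

```typescript
// Hypothetical two-action brain: the parameters are one flat array of
// weights, one weight per (action, input) pair, stored row-major.
const ACTIONS = ["Eat", "Sleep"] as const;

function decide(params: number[], inputs: number[]): string {
  // score each action as a weighted sum of the inputs
  const scores = ACTIONS.map((_, a) =>
    inputs.reduce((sum, x, i) => sum + x * params[a * inputs.length + i], 0)
  );
  // argmax: the action with the highest score wins
  let best = 0;
  for (let a = 1; a < scores.length; a++) if (scores[a] > scores[best]) best = a;
  return ACTIONS[best];
}

const situation = [1, 0.5]; // some observation of the environment
decide([9, 3, 2, 5], situation); // this brain picks one action…
decide([2, 9, 3, 8], situation); // …this one picks another
```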

In the beginning the parameters of the Pixlings are completely random, and therefore their behaviors are random. However, some of them usually have a behavior that at least enables them to eat and reproduce. And this is the seed of evolution. Once you have a Pixling that can reproduce, its children will inherit the parameters of the parent and thus have the same behavior.

With one twist: the parameters of the child will be slightly randomly tweaked. For instance a parent with parameters [4, 9, 2, 1] may produce a child with parameters [4, 8, 2, 1]. This child will behave slightly differently from its parent. If this new behavior turns out to be beneficial in the environment, this child will reproduce more than its siblings, and thus this behavior will spread. Then a grandchild of it may have a random change that makes it even better adapted to survive in the world, and so on. This is the basis of darwinian evolution.
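A sketch of that inheritance step could look like this (the function name, mutation rate and nudge size are assumptions for illustration, not the real implementation):

```typescript
// Copy the parent's parameters, occasionally nudging one of them.
// With probability `rate` a weight is shifted by -1, 0 or +1; otherwise
// it is copied unchanged.
function mutate(parent: number[], rate = 0.25, rng: () => number = Math.random): number[] {
  return parent.map(w => (rng() < rate ? w + Math.round(rng() * 2 - 1) : w));
}

const parent = [4, 9, 2, 1];
const child = mutate(parent); // mostly the parent, slightly different, e.g. [4, 8, 2, 1]
```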

Eventually you end up with Pixlings that can have really complex and “meaningful” behaviors, all starting from random ones. Neural networks are what’s called “universal approximators”, which means that, in theory, there’s nothing they can’t do as long as they have enough parameters. (That said, the networks in Pixling World are fairly small, so don’t worry, they won’t become sentient AIs just yet :)

So how is this implemented?

At a high level, the game consists of a “game state” which stores everything about the game, a simulation engine which takes the game state and advances it one step, and a renderer which renders the game state to the screen. On top of this sits a UI (built in React) that is used to configure the simulation and game state.

The most important thing to know is that the entire simulation runs on the GPU. This is the main reason why it can quickly simulate large numbers of Pixlings at the same time; GPUs are built for processing massive amounts of information in parallel.

Since it’s not a very traditional game I opted to write everything from scratch. And since I wanted to run it all on the GPU, I figured WebGL would work about as well as any native alternative (I’m actually using WebGL2, because WebGL1 was a bit too limiting). The codebase is in TypeScript.

Game state

The game state looks something like this:

The entire state consists of three textures: Environments, Pixling state and Pixling neural network parameters. Environments are defined by the user, one slice per environment. The Pixling state is a combination of predefined properties such as Alive, Species and LastAbility, and whatever properties the user defines for the Pixlings, such as Energy, DaysSinceBirthday, NumberOfFishInPocket etc. The Pixling neural network parameters are, as discussed above, evolved.

Each texture is stored as an R32F 2D array texture in WebGL, i.e. a texture with a width, height and depth and a single floating-point value per texel. So for instance, the Magma environment at cell (x: 15, y: 23) has a value of 19.3. Or the Pixling state texture at (x: 4, y: 19) of the Alive slice (index 0) has a value of 1, which means the Pixling at cell (x: 4, y: 19) is considered alive.
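As a rough CPU-side analog (hypothetical code, just to illustrate the layout — the real data lives in an R32F TEXTURE_2D_ARRAY on the GPU), such a texture behaves like a flat array of floats addressed by (x, y, slice):

```typescript
// width × height × depth floats in one flat buffer, one value per texel
class StateTexture {
  data: Float32Array;
  width: number;
  height: number;
  depth: number;

  constructor(width: number, height: number, depth: number) {
    this.width = width;
    this.height = height;
    this.depth = depth;
    this.data = new Float32Array(width * height * depth);
  }

  // address a texel: slice picks the layer, then row-major (x, y) within it
  get(x: number, y: number, slice: number): number {
    return this.data[slice * this.width * this.height + y * this.width + x];
  }

  set(x: number, y: number, slice: number, v: number): void {
    this.data[slice * this.width * this.height + y * this.width + x] = v;
  }
}

const pixlingState = new StateTexture(64, 64, 8);
pixlingState.set(4, 19, 0, 1); // slice 0 = Alive: the Pixling at (4, 19) is alive
```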

Simulation

The overall simulation loop looks something like this:

Each step:

- Run all user defined environment computations

- Build inputs to neural networks

- Run forward pass on neural networks

- Run all user defined rules and abilities

- Handle movement and reproduction

All of those steps involve what I call Computations. A Computation takes a number of textures and variables as inputs, runs a function over them and writes the result to one or more textures. Then another Computation uses that output as its inputs, and so on; the simulation is basically just a big graph of Computations feeding into Computations. The functions are implemented as GLSL fragment shaders (functions that run per pixel on the GPU), so the whole Computation runs asynchronously on the GPU.
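As a sketch, a Computation and the graph they form could be modeled like this (the shape and names here are assumptions for illustration, not the real API):

```typescript
// A Computation reads some textures/variables and writes one or more
// textures by running a fragment shader over every texel.
interface Computation {
  inputs: string[];       // names of textures/variables read
  outputs: string[];      // names of textures written
  fragmentShader: string; // GLSL that computes one output texel
}

// One simulation step is then just an ordered list of Computations whose
// outputs feed later Computations' inputs — a dataflow graph run on the GPU.
const step: Computation[] = [
  { inputs: ["magma", "noise"], outputs: ["magma"], fragmentShader: "/* blur + add noise */" },
  { inputs: ["environments", "cellProperties"], outputs: ["brainInputs"], fragmentShader: "/* gather inputs */" },
  { inputs: ["brainInputs", "brainParams"], outputs: ["brainRes"], fragmentShader: "/* dense layers */" },
];
```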

Many of the Computations use the game state as input and output, but there are also a number of secondary buffers used, for instance to keep the result while performing the forward pass of the neural network or to keep track of how to move Pixlings in the next update. I put an example Computation at the end of this blog post for anyone interested.

One of the core reasons the simulation can be very fast is that at no point does it need to synchronize with the CPU. The Computations are sent to the GPU for processing, but we don’t need to wait for the result; we can just keep sending more Computations. Since scheduling the Computations is very cheap, the game is GPU bound most of the time.

Computations are generated for each environment, rule and ability that the user defines (i.e. it’s generating GLSL code from what you define in the UI). At the beginning of the game loop the computations of the environments are run. These are often things like combining the values of two environments, running a blur filter or adding noise to an environment.

Next the neural networks are run. First inputs are gathered into one big vector. Then each layer of the network is run and finally an argmax is run on the output of the network to decide what the next action of the Pixling will be. I put the code for the dense layer in the appendix for people who are interested.
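Here’s a rough CPU analog of that forward pass (in the game this runs as GLSL shaders, one Computation per layer; the layer shape and ReLU activation here are assumptions for illustration):

```typescript
// input vector → dense layers → argmax over the outputs
type Layer = { weights: number[][]; biases: number[] };

function forward(layers: Layer[], input: number[]): number {
  let v = input;
  for (const layer of layers) {
    // each output node: weighted sum of the inputs plus bias, then ReLU
    v = layer.weights.map((row, i) =>
      Math.max(0, row.reduce((s, w, j) => s + w * v[j], layer.biases[i]))
    );
  }
  // argmax: the index of the chosen ability
  return v.indexOf(Math.max(...v));
}
```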

After this I run all of the rules and abilities, where of course for each Pixling only the ability chosen by the network is invoked.

Finally I move and clone the Pixlings that indicated they wanted to during the ability and rule computations. Since it’s all done in parallel in shaders, there’s a bunch of code to handle movement and reproduction. At a high level it works by first computing what I call “deltas”: a vector for each cell that points at a unique neighboring cell. I also calculate the inverse of this. These can then be used to “move” or “copy” a Pixling from one cell to another, all in parallel. Currently the deltas are all random, but in the future it may be interesting to explore letting the neural network decide its delta. Right now a Pixling can decide where to go only by switching between “don’t walk” and “walk randomly”, which works since it gets information about where it’s about to walk (as seen in the Apple hunters example), but deciding the direction directly could be even faster.
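To give a feel for why unique deltas make parallel movement safe, here’s a heavily simplified 1D CPU sketch (an assumption-laden toy, the real 2D shader version is more involved): cells are paired up so every cell points at exactly one partner, so each cell can decide its next state independently, exactly as a shader would.

```typescript
// Each even cell "points" right, each odd cell left — a self-inverse pairing,
// so no two cells ever compete for the same destination.
function moveStep(occupied: boolean[], wantsToMove: boolean[]): boolean[] {
  const n = occupied.length;
  const partner = (i: number) => (i % 2 === 0 ? i + 1 : i - 1);
  const next: boolean[] = new Array(n).fill(false);
  for (let i = 0; i < n; i++) {
    const p = partner(i);
    // this cell's occupant moves out if it wants to and the partner is free
    const movesOut = occupied[i] && wantsToMove[i] && p < n && !occupied[p];
    // the partner's occupant moves in if it wants to and this cell is free
    const movesIn = p < n && occupied[p] && wantsToMove[p] && !occupied[i];
    next[i] = movesIn || (occupied[i] && !movesOut);
  }
  return next;
}
```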

Rendering

Once the simulation has run a step (or a number of steps if you’re on the Extreme speed setting), I render the results to the screen. Rendering looks something like this:

render:

- draw each environment that is under the Pixlings

- draw the current selection rectangles

- draw Pixlings

- draw each environment that is over the Pixlings

The rendering is more or less just single quads that cover the entire screen (one quad per environment, one for the Pixlings) and most of the work happens in shaders. This means that each pixel on the screen is calculated in much the same way regardless of zoom level or position, so you get a really smooth zoom experience:

Zooming in and out on a fairly large map, on my MacBook Air.

Sampling & Metrics

Another big area of the game is sampling and metrics. Sampling is what I use to keep track of species in the world: every now and then I record the entire game state and figure out, in a web worker, who is a descendant of whom. Metrics are calculated by running a “reduce” Computation which takes a texture and halves it in size, each output pixel being the sum of the four pixels beneath it. That’s repeated until the texture is small enough to be moved fairly quickly to the CPU (this is expensive, so it only happens every 100 steps). This is how I can count the population, for instance.
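The reduce step can be sketched on the CPU like this (the real one is a shader; this just shows the halving-and-summing idea used to count, say, the population):

```typescript
// Halve a square grid once: each output cell is the sum of the four
// cells beneath it.
function reduceOnce(grid: number[][]): number[][] {
  const half = grid.length / 2;
  const out: number[][] = [];
  for (let y = 0; y < half; y++) {
    out.push([]);
    for (let x = 0; x < half; x++) {
      out[y].push(
        grid[2 * y][2 * x] + grid[2 * y][2 * x + 1] +
        grid[2 * y + 1][2 * x] + grid[2 * y + 1][2 * x + 1]
      );
    }
  }
  return out;
}

// Counting population: start from a 0/1 "alive" grid and reduce down to 1×1.
let g = [
  [1, 0, 1, 0],
  [0, 0, 0, 1],
  [1, 1, 0, 0],
  [0, 0, 0, 1],
];
while (g.length > 1) g = reduceOnce(g);
// g[0][0] now holds the total population
```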

That’s all!

There’s a lot more I could write about Pixling World, but perhaps this will be enough for a rough overview. There are also a lot of things I’m excited to try adding to Pixling World: LSTMs (Pixlings with memory would be so cool), a way for Pixlings to “see” more of the world and have better control of their movement, some more game-like things like items and traits, manual training of their neural networks (perhaps take control of one and use that for backprop) and much, much more. If anyone is interested in hearing more about any specific area of Pixling World, or has suggestions for ways to take it forward, I’m all ears; drop a comment here or on the reddit thread.

And finally, if you’re interested in trying the game you can do so at https://pixling.world (alpha stage, so expect some bugs and weirdness).

Thanks for reading!

/Fredrik

(To get news and updates about Pixling World you can follow the project on Twitter and Reddit)

Appendix

A challenge: Pixling Battlegrounds

In preparation for this article I was working on a world which I was hoping would exhibit some interesting behaviors, but rather than finishing it I realized it might be more fun to put it out as a “challenge”. So for anyone inclined, here’s a world you can try to complete: https://pixling.world/4YUmyd5eZUgzPVHW73Uvk6 (fair warning: I have no idea if it’s possible to complete or not)

Example of a Computation

This takes the output of the neural network (brainRes), computes the argmax of it and stores it in cellProperties (the Pixling state). Since it both reads from and writes to cellProperties, it’s double buffered and the result is automatically copied back to the back buffer.

// GLSL code for this computation:
in vec2 texcoord;
out vec4 out_value;

uniform sampler2DArray brainOutputs;
uniform sampler2DArray cellProperties;

void main() {
  float maxv = texture(brainOutputs, vec3(texcoord, 0)).r;
  int maxi = 0;
  for (int i = 1; i < ${abilitySlotsSize}; i++) {
    float val = texture(brainOutputs, vec3(texcoord, i)).r;
    if (val > maxv) {
      maxv = val;
      maxi = i;
    }
  }
  float ability = texture(cellProperties, vec3(texcoord, ${abilititySlotsStart} + maxi)).r;
  out_value = vec4(ability, 0.0, 0.0, 1.0);
}

// Using this computation in the app:
computeCopyBack(
  this.copyMultiLayer,
  this.computations.abilitiesArgmax,
  this.config.worldSize,
  outputs2darray(
    this.state.textures.cellProperties[0],
    FixedCellProperties.InvokingAbility, 1),
  this.state.textures.cellProperties[1],
  {
    texture2darrays: {
      brainOutputs: this.state.textures.brainRes[this.state.brainResRW.read],
      cellProperties: this.state.textures.cellProperties[1]
    }
  });

Dense layer code

This code generates a shader that can compute outputs for multiple nodes in the network at the same time (up to maxColorAttachments).