Intro

I took up a small pet project over summer that I called Roee. It was very loosely based on a paper I’d read by Susan Stepney and Tim Hoverd, Reflecting on Open Ended Evolution.

I also had the opportunity to hear Susan Stepney talk about this paper in person at ECAL 2017 (though I didn't actually go back and read it until this June). The paper is quite short and worth reading in full, but I'll also go over the gist here. Note that I am merely an interested layperson in the area of life simulation, and it is very possible that I have misinterpreted aspects of the paper.

Reflecting on Open Ended Evolution proposes an architecture to facilitate open-endedness in life simulations. Some of the overview is already covered in my original notes from Susan Stepney's talk in 2017, which can be found here.

The gist is that the authors define open-endedness as the production of novelty and potential for increasing complexity in a simulation. They propose using computational reflection to achieve this via an architecture consisting of three main parts making up a bootstrap simulation which then grows on its own:

The original seed application

The observer-reifier-modifier (ORM) intentionaliser

The virtual machine (VM)

Out of the three parts above, the only thing that should not be able to modify itself is the VM. Both the original seed application and the ORM intentionaliser should be open to self-modification.


The ORM Intentionaliser

The ORM intentionaliser consists of three main parts, each bootstrapped for the initial run with the expectation of growing on their own from there:

Observers

Observers recognize new emergent structures and behaviours in a simulation.

Reifiers

Reifiers intentionalise the new observations and define new types which are to be implemented in the simulation.

Modifiers

Modifiers modify the simulation to exploit the reified structures.

Key to the overall architecture is the fact that the simulator is reflective, not just at the core agent level, but throughout. Hence a bootstrap observer (for example) can observe not only novel agent patterns, but also novel observation, reification, and modification patterns, which can then be reified and modified appropriately.
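To make the pipeline concrete, here is one way the three roles could be pictured as Go interfaces. This is purely my own sketch; none of the type names come from the paper, and the stand-in implementations are trivial placeholders just to show the observe-reify-modify wiring.

```go
package main

import "fmt"

// Observation is whatever an Observer recognises as novel (hypothetical type).
type Observation struct {
	Description string
}

// Reified is a new type definition produced from an observation (hypothetical type).
type Reified struct {
	TypeName string
}

// The three ORM roles, sketched as interfaces.
type Observer interface {
	Observe() []Observation
}

type Reifier interface {
	Reify(obs Observation) Reified
}

type Modifier interface {
	Modify(r Reified) string // e.g. returns generated code for the new type
}

// Trivial stand-ins to show how the pieces connect.
type noopObserver struct{}

func (noopObserver) Observe() []Observation {
	return []Observation{{Description: "cluster of adjacent agents"}}
}

type noopReifier struct{}

func (noopReifier) Reify(obs Observation) Reified {
	return Reified{TypeName: "Cluster"}
}

type noopModifier struct{}

func (noopModifier) Modify(r Reified) string {
	return "type " + r.TypeName + " struct{}"
}

// runPipeline feeds each observation through reification and modification.
func runPipeline(o Observer, r Reifier, m Modifier) []string {
	var out []string
	for _, obs := range o.Observe() {
		out = append(out, m.Modify(r.Reify(obs)))
	}
	return out
}

func main() {
	fmt.Println(runPipeline(noopObserver{}, noopReifier{}, noopModifier{}))
}
```

Because the simulator is reflective throughout, the inputs to Observe could themselves be observers, reifiers, or modifiers rather than plain agents.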


Questions

Even after hearing Stepney talk about the paper in person and then reading it, I had questions, both about the proposed implementation and about the principle behind it. In the talk, this ability to technically facilitate open-endedness was positioned as "the last big leap" in true open-ended life simulation. But is observing emergent behaviour really the last big leap? How can the observations, and the modifications based on them, be truly open-ended if the bootstrapped simulation's behaviour hasn't specifically been designed for open-endedness? Isn't that the first problem: how to implement a simulation that will exhibit an unlimited number of potential emergent behaviours to observe in the first place?

In the talk Susan Stepney mentioned (I am paraphrasing from memory): "If a bird turns into an ant, we need to be able to recognize that and modify the runtime to define an ant on its own." Isn't the first problem building a simulation that would, unplanned, have the potential to turn a bird into an ant for us to observe in the first place?

Anyway, as a layperson in life simulation I had a lot of questions. Usually the best way for me to get a better grasp of something is to try to build it, or some part of it. I didn't go into this with the expectation of implementing the entire proposed architecture, or even any part of it, really. I just went in to play around with some of the concepts mentioned. Specifically, I wanted to focus on recognizing certain patterns in a very simple bootstrapped simulation and then solidifying those emergent patterns as "first class citizens" of the simulation by modifying the code.


Self-modifying simulation in Go?

The paper suggests the following as examples of suitable computationally reflective languages for such an architecture: Lisp, Prolog, Python, Ruby, and JavaScript. I chose none of them. Instead I picked Go: this was a fun project, I didn't expect to implement the full architecture, and my main simulation project (SnailLife) is also written in Go, so I figured I might pick up some reusable learnings. The feedback I got from people with more Go experience than me was that Go was perhaps not the most suitable language for this task. I fully agree, and knew it going in. For a fun vacation project I didn't feel any obligation to pick the right tool for the job.


What are we bootstrapping?

For the main simulation itself I had a small idea for the bootstrapped initial version:

Models (Meta-Models): Thing (Agent), Copy (Instruction), Del (Instruction), SimpleReporter (Reporter), AggregateQualifier (Qualifier), Grid (no meta-model)

Other bootstrapped meta-models: Aggregate (no model provided)

As a recap, the paper dictates that both models and meta-models should be able to change at runtime. First of all, nothing here will change at runtime, since the simulation will be stopped and restarted for each modification (not by me, mind you: the simulation will stop and restart itself).

Second, for this initial "hackaround" (I say "experiment" above, but only in the loosest possible sense; it doesn't quite suit the lack of formality or plan in this project :)) I was aiming to focus on models only. No modifications of meta-models will take place.

For the ORM Intentionaliser the bootstrap was to focus on the Aggregate meta-model and what we can do with it:

Models (Meta-Models): aggregationObserver (Observer), aggregationReifier (Reifier), aggregationModifier (Modifier)

Aggregates seemed like the most straightforward kind of meta-model to start with. It feels a little bit like cheating - is it really emergent, novel, open-ended behaviour if we’re just recognizing patterns between existing agents and grouping them together? I don’t know. But what is life if not just a collection of aggregates of aggregates of aggregates…? Anyway, I went with aggregates.

All models are defined as structs and all meta-models as interfaces.
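A minimal sketch of what that split might look like, with method and field names guessed from the code snippets later in this post (the real definitions may differ):

```go
package main

import "fmt"

// Pos is a position on the grid.
type Pos struct{ X, Y int }

// Meta-models are interfaces.
type Instruction interface {
	Init()
	Do(params ...interface{})
}

type Agent interface {
	Init(ins Instruction)
	SetId(id interface{})
	GetId() interface{}
	SetPos(p Pos)
	GetPos() Pos
}

// Models are structs implementing those interfaces.
// Thing is the one bootstrapped Agent model.
type Thing struct {
	Id           interface{}
	Position     Pos
	Instructions []Instruction
}

func (t *Thing) Init(ins Instruction) { t.Instructions = []Instruction{ins} }
func (t *Thing) SetId(id interface{}) { t.Id = id }
func (t *Thing) GetId() interface{}   { return t.Id }
func (t *Thing) SetPos(p Pos)         { t.Position = p }
func (t *Thing) GetPos() Pos          { return t.Position }

func main() {
	var a Agent = &Thing{} // models are always handled through the meta-model interface
	a.SetId(7)
	a.SetPos(Pos{X: 2, Y: 3})
	fmt.Println(a.GetId(), a.GetPos())
}
```

Handling models only through their meta-model interfaces is what lets the simulation swap in new model types later without touching the callers.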


Vague definitions and bad practices

I defined everything as…vaguely…as possible. The paper mentions embracing “the biologically-inspired ‘messiness’ of deliberately mixing layers of abstraction”.

I regularly use interface{} input and output parameters here: just because a parameter is always a string now doesn't mean it can't evolve into something else as the simulation evolves. I wanted to allow as much flexibility as possible without forcing the simulation to extend itself whenever more flexibility is needed. If some modifier somewhere decides the Uid for a new agent is going to be an int64 rather than a string, whatever takes that uid should already support it.
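For example, a hypothetical helper that formats an agent Uid (not from the actual codebase, just an illustration of the pattern) can accept interface{} and handle whichever representation the simulation has evolved to use:

```go
package main

import "fmt"

// uidString accepts any Uid representation. Today the simulation uses
// strings, but a future self-modification might switch to int64 (or
// anything else), and callers shouldn't have to change.
func uidString(uid interface{}) string {
	switch v := uid.(type) {
	case string:
		return v
	case int64:
		return fmt.Sprintf("agent-%d", v)
	default:
		// Fall back to Go's default formatting for unknown types.
		return fmt.Sprintf("%v", v)
	}
}

func main() {
	fmt.Println(uidString("abc"))
	fmt.Println(uidString(int64(42)))
}
```

The cost, of course, is that the compiler can no longer catch type mismatches for you, which is exactly the trade-off the next paragraph runs into.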

Of course this also caused problems, as I ran into issues I would have avoided had I been stricter with my structure. I also have no real excuse for writing zero tests for this; I decided that for a quick hack project testing would not be my focus.


The World

The world is an X by Y grid populated with N agents, set to run for a certain number of ticks. Each agent starts off with one randomly chosen instruction: Copy or Del. Every tick, agent.Do(params ...interface{}) runs for each agent, at which point we loop through all the instructions the agent has (in the bootstrapped simulation this is just one, but it is stored as a []Instruction in case we ever end up with more).

Populating the world

World population happens like so:

```go
func (g *Grid) Populate(num int) {
	g.Population = make(map[string]Agent)
	for i := 0; i < num; i++ {
		at := getRandomAgentType()
		agent := reflect.New(at).Interface().(Agent)
		it := getRandomInstructionType()
		ins := reflect.New(it).Interface().(Instruction)
		ins.Init()
		agent.Init(ins)
		posX := rand.Intn(g.X)
		posY := rand.Intn(g.Y)
		pos := Pos{X: posX, Y: posY}
		agent.SetId(i)
		agent.SetPos(pos)
		g.AddAgent(agent)
	}
}
```

Note that even though we only have one model implementing the Agent meta-model (Thing), the above accounts for creating any kind of Agent type. This is because we keep a slice holding all the types that are currently valid to instantiate here (which kind of does not sit well with the paper's intentional vs extensional definition objective, but for my purposes that's where we are). The same goes for instructions: we keep a slice of all possible instruction types to choose from.


Instructions

A Copy instruction will create a new instance of the type of the Agent passed to it, set the same ID for it as the original parent Agent, and then set a position adjacent to the parent Agent. It will then give the new agent a random instruction and add the new agent to the world. So really the only “copied” part of this copy is the agent Id.

As you can see here we’re using reflection to create a new agent of the type of the parent agent. We could’ve just copied the struct, but that would mean having each agent implement a Copy method to make sure slices etc. are properly copied (vs for example passing slice headers around with the same backing array). Since I want further self-modification to be as simple as possible and since the copy isn’t actually a real copy aside from the ID, I decided not to go that route.

```go
func (i *Copy) Do(params ...interface{}) {
	agent := params[0].(Agent)
	world := params[1].(*Grid)
	cPos := agent.GetPos()
	direction := direction(rand.Intn(5))
	newAgent := reflect.New(reflect.ValueOf(agent).Elem().Type()).Interface().(Agent)
	newAgent.SetId(agent.GetId())
	switch direction {
	case up:
		newAgent.SetPos(Pos{X: cPos.X, Y: cPos.Y - 1})
	case down:
		newAgent.SetPos(Pos{X: cPos.X, Y: cPos.Y + 1})
	case left:
		newAgent.SetPos(Pos{X: cPos.X - 1, Y: cPos.Y})
	case right:
		newAgent.SetPos(Pos{X: cPos.X + 1, Y: cPos.Y})
	}
	it := getRandomInstructionType()
	ins := reflect.New(it).Interface().(Instruction)
	newAgent.Init(ins)
	world.AddAgent(newAgent)
}
```

A Del instruction will pick one adjacent cell to the agent and delete whatever agent may be residing there, if any.

```go
func (i *Del) Do(params ...interface{}) {
	agent := params[0].(Agent)
	world := params[1].(*Grid)
	cPos := agent.GetPos()
	direction := direction(rand.Intn(5))
	var deletionPos Pos
	switch direction {
	case up:
		deletionPos = Pos{X: cPos.X, Y: cPos.Y - 1}
	case down:
		deletionPos = Pos{X: cPos.X, Y: cPos.Y + 1}
	case left:
		deletionPos = Pos{X: cPos.X - 1, Y: cPos.Y}
	case right:
		deletionPos = Pos{X: cPos.X + 1, Y: cPos.Y}
	}
	world.DeleteAgent(deletionPos)
}
```


Running the world

After population, here’s what happens when we run:

```go
func (g *Grid) Run(ticks int) {
	g.RemainingTicks = ticks
	reporter = NewReporter("")
	g.startTime = time.Now().Unix()
	for i := ticks; i > 0; i-- {
		g.processTick()
	}
	reporter.finalizeReporter()
}

func (g *Grid) processTick() {
	g.Tick++
	fmt.Printf("\nTick: %d", g.Tick)
	g.processAgents()
	g.runQualifiers()
	reporter.reportTick(g)
	g.updateChannels()
	g.RemainingTicks--
}
```

During agent processing we not only run Do() for each agent in the world, but also run it through every existing aggregate’s Qualifier to see if it meets the membership criteria for that aggregate (or if it still meets the membership criteria if it is already part of the aggregate). The agent then gets added to or removed from the aggregate (or neither).

We also run all the qualifiers after processing agents. At the beginning of this simulation we start off with no qualifiers. Right now qualifiers are only used to recognize and instantiate new aggregate types. Since we start off with no aggregate types, we also start off with no qualifiers. But when the simulation recognises an aggregate and modifies itself, a new qualifier is created. These basically set the membership criteria for a type. An aggregate type has an AggregateQualifier .
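The membership check described above might look roughly like this. The Qualifier interface and Check method are my guesses at the shape of the real AggregateQualifier logic, and evenQualifier is a toy criterion just for the sketch:

```go
package main

import "fmt"

type Agent interface{ GetId() int }

type thing struct{ id int }

func (t thing) GetId() int { return t.id }

// Qualifier decides aggregate membership (a guess at the shape of
// the AggregateQualifier from the post).
type Qualifier interface {
	Qualifies(a Agent) bool
}

type Aggregate struct {
	Members   map[int]Agent
	Qualifier Qualifier
}

// Check adds or removes the agent depending on whether it (still)
// meets the membership criteria.
func (ag *Aggregate) Check(a Agent) {
	if ag.Qualifier.Qualifies(a) {
		ag.Members[a.GetId()] = a
	} else {
		delete(ag.Members, a.GetId())
	}
}

// evenQualifier is a toy criterion: agents with even ids belong.
type evenQualifier struct{}

func (evenQualifier) Qualifies(a Agent) bool { return a.GetId()%2 == 0 }

func main() {
	ag := &Aggregate{Members: map[int]Agent{}, Qualifier: evenQualifier{}}
	ag.Check(thing{id: 2}) // qualifies, added
	ag.Check(thing{id: 3}) // does not qualify, ignored
	fmt.Println(len(ag.Members))
}
```

Running every agent through every aggregate's qualifier each tick is what keeps membership current as agents appear, move, and disappear.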

At the end we run SimpleReporter, which generates a JSON file with all the data for the tick. This file includes all the exported fields in the grid (in Go, exported fields are those starting with an upper-case letter):

```go
type Grid struct {
	X              int
	Y              int
	Population     map[string]Agent
	Aggregates     []Aggregate
	Tick           int
	RemainingTicks int
	startTime      int64
	obsChans       []chan Grid
}
```

After the reporter runs, we update the observer channels. Since currently we only have one observer ( AggregateObserver ), that is the one that will be updated.

We make a copy of the grid for the observers, including a fresh copy of the population map (map values are references in Go, so the shallow struct copy alone would share it):

```go
func (g *Grid) updateChannels() {
	dc := *g
	dc.Population = make(map[string]Agent)
	for k, v := range g.Population {
		dc.Population[k] = v
	}
	for _, c := range g.obsChans {
		c <- dc
	}
}
```

I think this post is long enough for now…in the next one I’ll go into my tiny version of the ORM Intentionaliser, and how we actually observe and reify emergent aggregates in the simulation.
