The Neuro-Evolution via Augmenting Topologies (NEAT) algorithm enables users to evolve neural networks without having to worry about esoteric details like hidden layers. Instead, NEAT is clever enough to incorporate all of that into the evolution process itself. You only have to worry about the inputs, outputs, and fitness evaluation.

This is a tutorial on how to use SharpNEAT 2, the second version of a popular C# implementation of the NEAT algorithm written by Colin Green. The source code for this tutorial is available here.

In this tutorial series, we'll be evolving neural networks to play Tic-Tac-Toe. I'm going to assume that you already know the basics of neural networks, evolutionary algorithms, and Tic-Tac-Toe. Part 1 will walk you through the basics of how to set up a new experiment, define a fitness evaluator, run the evolution, and use the resulting network in your application.

Tic-Tac-Toe Game Engine

We first need to create a basic game engine that can simulate games between two players. The TicTacToeGame class defines a simple 3x3 game board:

public class TicTacToeGame
{
    public SquareTypes[,] Board { get; set; }

    public TicTacToeGame()
    {
        Board = new SquareTypes[3, 3];
    }

    ...
}
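The SquareTypes enum itself isn't shown in the tutorial, but based on how it's used (X and O for the two players, and N for an empty square, or no winner, in PlayGameToEnd below), a minimal version might look like this. This is a hypothetical sketch, not the actual source:

```csharp
// Hypothetical sketch of the SquareTypes enum used throughout the tutorial.
// N ("none") doubles as both "empty square" and "no winner / draw".
public enum SquareTypes
{
    N = 0, // empty square, or a drawn game when returned as a winner
    X,
    O
}
```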

Players are required to implement the IPlayer interface and define the GetMove method which takes a game board and returns the move to make:

public interface IPlayer
{
    Move GetMove(SquareTypes[,] board);
}

Two hand-coded players are provided:

RandomPlayer - Randomly picks an available square at each move.

OptimalPlayer - Plays perfect strategy. The best you can do is draw against it.
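For a sense of what these hand-coded players look like, here is one plausible implementation of RandomPlayer; the actual code in the tutorial's source may differ. SquareTypes, Move, and IPlayer are the tutorial's types, stubbed minimally here so the sketch compiles on its own:

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-ins for the tutorial's types, so this sketch is self-contained.
public enum SquareTypes { N = 0, X, O }

public class Move
{
    public int X;
    public int Y;
    public Move(int x, int y) { X = x; Y = y; }
}

public interface IPlayer
{
    Move GetMove(SquareTypes[,] board);
}

// A hypothetical RandomPlayer: uniformly picks one of the empty squares.
public class RandomPlayer : IPlayer
{
    private readonly Random _random = new Random();

    public Move GetMove(SquareTypes[,] board)
    {
        // Collect every empty square, then pick one at random.
        var available = new List<Move>();
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                if (board[i, j] == SquareTypes.N)
                    available.Add(new Move(i, j));

        if (available.Count == 0)
            return null; // board is full; no move to make

        return available[_random.Next(available.Count)];
    }
}
```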

The TicTacToeGame class provides a helper method to compete two players against each other:

/// <summary>
/// A helper method for competing two players against each other.
/// </summary>
/// <param name="xPlayer">The player to act first.</param>
/// <param name="oPlayer">The player to act second.</param>
/// <returns>
/// The square type of the winner, or SquareTypes.N if the game
/// was a draw.
/// </returns>
public static SquareTypes PlayGameToEnd(IPlayer xPlayer, IPlayer oPlayer)

Designing the Experiment

Our networks will have nine inputs and nine outputs, one for each square on the board. The NeatPlayer class takes an IBlackBox object and the player's square type:

/// <summary>
/// Creates a new NEAT player with the specified brain.
/// </summary>
public NeatPlayer(IBlackBox brain, SquareTypes squareType)
{
    Brain = brain;
    SquareType = squareType;
}

The IBlackBox interface is an abstraction for a general model that takes an array of real-valued inputs and produces an array of real-valued outputs. In our case, we're evolving neural networks, but the code is reusable if you ever want to use a different model type.

Each square on the board will be converted into an integer value and fed into the input nodes. If the square belongs to the opponent, it's -1; if it belongs to us, it's 1; and if it's empty, it's 0:

/// <summary>
/// Gets the next move as dictated by the neural network.
/// </summary>
public Move GetMove(SquareTypes[,] board)
{
    ...

    // Convert the game board into an input array for the network
    setInputSignalArray(Brain.InputSignalArray, board);
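The setInputSignalArray helper isn't shown in the tutorial. Here is one hypothetical way it could be written, using a plain double[] in place of SharpNEAT's ISignalArray (which is indexed the same way) and taking the player's square type as an explicit parameter (in the real NeatPlayer it would presumably come from the SquareType property):

```csharp
public enum SquareTypes { N = 0, X, O }

public static class InputEncoding
{
    // Hypothetical sketch of the board-to-inputs encoding described above:
    // our square = 1, opponent's square = -1, empty square = 0.
    public static void SetInputSignalArray(
        double[] inputs, SquareTypes[,] board, SquareTypes mySquareType)
    {
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
            {
                // Flatten (i, j) to index i * 3 + j, matching the
                // output indexing used in GetMove.
                if (board[i, j] == SquareTypes.N)
                    inputs[i * 3 + j] = 0.0;
                else if (board[i, j] == mySquareType)
                    inputs[i * 3 + j] = 1.0;
                else
                    inputs[i * 3 + j] = -1.0;
            }
    }
}
```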

Once the inputs are set, the model is activated. This takes the inputs and runs them through the model, populating the OutputSignalArray of the model. The available square with the highest output node activation is then chosen as the move:

public Move GetMove(SquareTypes[,] board)
{
    ...

    // Find the highest-scoring available move
    Move move = null;
    double max = double.MinValue;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
        {
            // If the square is taken, skip it.
            if (board[i, j] != SquareTypes.N)
                continue;

            // Set the score for this square.
            double score = Brain.OutputSignalArray[i * 3 + j];

            // If this is the first available move we've found,
            // set it to the current best.
            if (move == null)
            {
                move = new Move(i, j);
                max = score;
            }
            // If this square has a higher score than any we've
            // found, set it to the current best.
            else if (max < score)
            {
                move.X = i;
                move.Y = j;
                max = score;
            }
        }

    return move;
}
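The activation step itself is elided by the "..." in GetMove. It plausibly reads something like the sketch below; the IBlackBox member names (ResetState, Activate, the signal arrays) are recalled from SharpNEAT 2, so verify them against the library source. A minimal stand-in interface and a trivial identity model are stubbed so the sketch compiles on its own:

```csharp
using System;

// Stand-in for the parts of SharpNEAT's IBlackBox this sketch relies on.
// Real SharpNEAT uses ISignalArray rather than double[], indexed the same way.
public interface IBlackBoxLike
{
    double[] InputSignalArray { get; }
    double[] OutputSignalArray { get; }
    void ResetState();
    void Activate();
}

// Trivial model for demonstration: outputs are a copy of the inputs.
public class IdentityBox : IBlackBoxLike
{
    public double[] InputSignalArray { get; } = new double[9];
    public double[] OutputSignalArray { get; } = new double[9];
    public void ResetState() { Array.Clear(InputSignalArray, 0, InputSignalArray.Length); }
    public void Activate() { Array.Copy(InputSignalArray, OutputSignalArray, InputSignalArray.Length); }
}

public static class ActivationSketch
{
    public static double[] Evaluate(IBlackBoxLike brain, double[] inputs)
    {
        // Clear any leftover recurrent state from a previous evaluation.
        brain.ResetState();

        // Load the encoded board into the input nodes.
        for (int k = 0; k < inputs.Length; k++)
            brain.InputSignalArray[k] = inputs[k];

        // Run the inputs through the model, populating the outputs.
        brain.Activate();

        return brain.OutputSignalArray;
    }
}
```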

Creating an Experiment

The convention in SharpNEAT is to define a class implementing INeatExperiment. An experiment encapsulates all the basic functionality needed to create, set up, and run the evolutionary algorithm. However, for the vast majority of projects, almost all of the code will be identical. To simplify the tutorial, I've created an abstract class, SimpleNeatExperiment, which hides all the common setup code so we can focus on the experiment-specific components. All you have to define in your TicTacToeExperiment class are four simple properties:

/// <summary>
/// Defines the setup for the Tic-Tac-Toe evolution experiment.
/// </summary>
public class TicTacToeExperiment : SimpleNeatExperiment
{
    /// <summary>
    /// Gets the Tic-Tac-Toe evaluator that scores individuals.
    /// </summary>
    public override IPhenomeEvaluator<IBlackBox> PhenomeEvaluator
    {
        get { return new TicTacToeEvaluator(); }
    }

    /// <summary>
    /// Defines the number of input nodes in the neural network.
    /// The network has one input for each square on the board,
    /// so it has 9 inputs total.
    /// </summary>
    public override int InputCount
    {
        get { return 9; }
    }

    /// <summary>
    /// Defines the number of output nodes in the neural network.
    /// The network has one output for each square on the board,
    /// so it has 9 outputs total.
    /// </summary>
    public override int OutputCount
    {
        get { return 9; }
    }

    /// <summary>
    /// Defines whether all networks should be evaluated every
    /// generation, or only new (child) networks. For Tic-Tac-Toe,
    /// we're evolving against a random player, so it's a good
    /// idea to evaluate individuals again every generation,
    /// to make sure it wasn't just luck.
    /// </summary>
    public override bool EvaluateParents
    {
        get { return true; }
    }
}

Setting XML Parameters

The rest of the parameters are configured via an XML configuration file, "tictactoe.config.xml":

<?xml version="1.0" encoding="utf-8" ?>
<Config>
  <PopulationSize>150</PopulationSize>
  <SpecieCount>10</SpecieCount>
  <Activation>
    <Scheme>FixedIters</Scheme>
    <Iters>2</Iters>
  </Activation>
  <ComplexityRegulationStrategy>Absolute</ComplexityRegulationStrategy>
  <ComplexityThreshold>50</ComplexityThreshold>
  <Description>
    TicTacToe Evolution - The goal is to create an optimal TicTacToe
    player. Individuals are evaluated against a random player for 100
    games and an optimal player for 2 games. Scoring is:
      Loss = 0
      Draw = 1
      Win = 10
    Fitness is the sum of the scores.
  </Description>
</Config>

I'll run through these parameters briefly:

PopulationSize - The number of genomes in the population.

SpecieCount - The number of species to divide the population into.

ActivationScheme - How the neural network will be activated. Since NEAT networks can have recurrent connections, iterating through the network multiple times may give you different answers each time. Here we're fixing the number of iterations at 2. Alternatively, you could set Scheme to Relax, which iterates the network until the change in outputs between iterations is very small. Typically, though, 2 iterations is fine for most problems.

ComplexityRegulationStrategy and ComplexityThreshold - SharpNEAT uses these parameters to determine when to switch from complexification (i.e., adding nodes and connections during mutation) to decomplexification (i.e., removing nodes and connections). A good rule of thumb for the threshold is about 3 * (inputs + outputs).

Description - A text description of our experiment. This is mostly used by the SharpNEAT GUI, which we don't use for this tutorial.

It will probably take some time and tweaking to get a feeling for these parameters. Typically, the tougher the domain, the bigger the population, species count, and complexity threshold.

Creating an IPhenomeEvaluator

The phenome evaluator is where the bulk of the work is done. This is where you put the code that evaluates each neural network based on how it performs in your domain. For our Tic-Tac-Toe evaluator, we're going to play each network for 100 games against a random opponent, half as X and half as O, then play one game as each side against the optimal player. For each game, a win is worth ten points, a tie is worth one point, and a loss is worth zero points.

/// <summary>
/// Evaluate the provided IBlackBox against the random tic-tac-toe player
/// and return its fitness score. Each network plays 100 games against the
/// random player and two games against the expert player. Half of the
/// games are played as circle and half are played as x.
///
/// A win is worth 10 points, a draw is worth 1 point, and a loss is worth
/// 0 points.
/// </summary>
public FitnessInfo Evaluate(IBlackBox box)
{
    double fitness = 0;
    SquareTypes winner;
    OptimalPlayer optimalPlayer = new OptimalPlayer(SquareTypes.O);
    RandomPlayer randomPlayer = new RandomPlayer();
    NeatPlayer neatPlayer = new NeatPlayer(box, SquareTypes.X);

    // Play 50 games as X against a random player
    for (int i = 0; i < 50; i++)
    {
        // Compete the two players against each other.
        winner = TicTacToeGame.PlayGameToEnd(neatPlayer, randomPlayer);

        // Update the fitness score of the network
        fitness += getScore(winner, neatPlayer.SquareType);
    }

    // Play 50 games as O against a random player
    neatPlayer.SquareType = SquareTypes.O;
    for (int i = 0; i < 50; i++)
    {
        // Compete the two players against each other.
        winner = TicTacToeGame.PlayGameToEnd(randomPlayer, neatPlayer);

        // Update the fitness score of the network
        fitness += getScore(winner, neatPlayer.SquareType);
    }

    // Play 1 game as X against an optimal player
    neatPlayer.SquareType = SquareTypes.X;
    optimalPlayer.SquareType = SquareTypes.O;

    // Compete the two players against each other.
    winner = TicTacToeGame.PlayGameToEnd(neatPlayer, optimalPlayer);

    // Update the fitness score of the network
    fitness += getScore(winner, neatPlayer.SquareType);

    // Play 1 game as O against an optimal player
    neatPlayer.SquareType = SquareTypes.O;
    optimalPlayer.SquareType = SquareTypes.X;

    // Compete the two players against each other.
    winner = TicTacToeGame.PlayGameToEnd(optimalPlayer, neatPlayer);

    // Update the fitness score of the network
    fitness += getScore(winner, neatPlayer.SquareType);

    // Update the evaluation counter.
    _evalCount++;

    // If the network plays perfectly, it will beat the random player
    // and draw the optimal player.
    if (fitness >= 1002)
        _stopConditionSatisfied = true;

    // Return the fitness score
    return new FitnessInfo(fitness, fitness);
}
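The getScore helper called throughout Evaluate isn't shown in the tutorial. Based on the scoring rules above (win = 10, draw = 1, loss = 0, with SquareTypes.N meaning a draw), a hypothetical implementation might look like this:

```csharp
public enum SquareTypes { N = 0, X, O }

public static class Scoring
{
    // Hypothetical sketch of getScore: maps the game result to the
    // fitness contribution described in the tutorial.
    public static double GetScore(SquareTypes winner, SquareTypes mySquareType)
    {
        if (winner == mySquareType)
            return 10; // win
        if (winner == SquareTypes.N)
            return 1;  // draw
        return 0;      // loss
    }
}
```

With this scoring, the stop condition of 1002 corresponds to 100 wins against the random player (1000 points) plus two draws against the optimal player (2 points).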

Now that we've defined our parameters and our evaluator, the only thing left is to run the evolution.

Running the Evolutionary Algorithm

The main execution loop for SharpNEAT 2 is pretty simple. We create our experiment, load the XML parameters, create an instance of NeatEvolutionAlgorithm, and tell it to start. We also add an UpdateEvent handler that saves the best network to file periodically.

class Program
{
    static NeatEvolutionAlgorithm<NeatGenome> _ea;
    const string CHAMPION_FILE = "tictactoe_champion.xml";

    static void Main(string[] args)
    {
        // Initialise log4net (log to console).
        XmlConfigurator.Configure(new FileInfo("log4net.properties"));

        // Experiment classes encapsulate much of the nuts
        // and bolts of setting up a NEAT search.
        TicTacToeExperiment experiment = new TicTacToeExperiment();

        // Load config XML.
        XmlDocument xmlConfig = new XmlDocument();
        xmlConfig.Load("tictactoe.config.xml");
        experiment.Initialize("TicTacToe", xmlConfig.DocumentElement);

        // Create evolution algorithm and attach update event.
        _ea = experiment.CreateEvolutionAlgorithm();
        _ea.UpdateEvent += new EventHandler(ea_UpdateEvent);

        // Start algorithm (it will run on a background thread).
        _ea.StartContinue();

        // Hit return to quit.
        Console.ReadLine();
    }

    static void ea_UpdateEvent(object sender, EventArgs e)
    {
        Console.WriteLine(string.Format("gen={0:N0} bestFitness={1:N6}",
            _ea.CurrentGeneration, _ea.Statistics._maxFitness));

        // Save the best genome to file
        var doc = NeatGenomeXmlIO.SaveComplete(
            new List<NeatGenome>() { _ea.CurrentChampGenome }, false);
        doc.Save(CHAMPION_FILE);
    }
}

When you run the program, it will periodically print out the current fitness score of the best network. The way we have the algorithm set up, a perfect score of all wins would be 1020 (100 games vs. random + 2 games vs. optimal, with 10 points for a win). However, the optimal player will never lose, and the random player will sometimes play optimally as well, just by chance. You can go ahead and stop the evolution whenever you're satisfied that fitness has stagnated.

Playing Against Our Neural Network

To play against the neural network we just evolved, run the NeatTicTacToe program. The only code worth showing here is the code to load the network from file:

private void loadNEATPlayerToolStripMenuItem1_Click(object sender, EventArgs e)
{
    // Have the user choose the genome XML file.
    var result = openFileDialog.ShowDialog();
    if (result != System.Windows.Forms.DialogResult.OK)
        return;

    NeatGenome genome = null;

    // Try to load the genome from the XML document.
    try
    {
        using (XmlReader xr = XmlReader.Create(openFileDialog.FileName))
            genome = NeatGenomeXmlIO.ReadCompleteGenomeList(xr, false)[0];
    }
    catch (Exception e1)
    {
        MessageBox.Show("Error loading genome from file!\nLoading aborted.\n"
                        + e1.Message);
        return;
    }

    // Get a genome decoder that can convert genomes to phenomes.
    var genomeDecoder = _experiment.CreateGenomeDecoder();

    // Decode the genome into a phenome (neural network).
    var phenome = genomeDecoder.Decode(genome);

    // Set the NEAT player's brain to the newly loaded neural network.
    _neatPlayer.Brain = phenome;

    // Show the option to select the NEAT player
    if (neatPlayerToolStripMenuItem.Enabled == false)
        neatPlayerToolStripMenuItem.Enabled = true;
}

SharpNEAT saves networks as genomes, which have to be decoded back into their phenotypic neural network form. Fortunately, SimpleNeatExperiment provides a handy helper function to get a genome decoder. The rest of the GUI works similarly to how it did in the evolution program.

Conclusion

That's it! In this tutorial, we saw how to create a basic experiment in SharpNEAT 2, a powerful framework for evolving artificial neural networks. We created an experiment to evolve a Tic-Tac-Toe AI by playing it against hand-coded opponent strategies. By now you know how to set up your experiment, set parameters, evolve the networks, and handle file I/O.

If you played against our AI, you probably noticed it really sucks. That's okay! The next part of this tutorial will explore some of the details hidden by SimpleNeatExperiment and introduce co-evolution. The result will (hopefully) be a stronger AI opponent.

Source code for part 1 of the tutorial is available here.

Update: Part 2 of the tutorial is now available here.
