

"Building Block" Hypothesis:



A GA works by combining short, low-order, highly fit schemata ("building blocks") into fitter, higher-order schemata.
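To pin down the terms in the hypothesis: a schema is a template over {0, 1, *}; its order is the number of fixed positions, and its defining length is the distance between the outermost fixed positions. A tiny sketch (the helper names are mine, not standard library functions):

```python
def order(schema):
    """Number of fixed (non-wildcard) positions in the schema."""
    return sum(c != '*' for c in schema)

def defining_length(schema):
    """Distance between the outermost fixed positions (0 if fewer than two)."""
    fixed = [i for i, c in enumerate(schema) if c != '*']
    return fixed[-1] - fixed[0] if fixed else 0

# '1***0***' has order 2 (two fixed bits) but defining length 4,
# so "short" and "low-order" are two different properties.
print(order('1***0***'), defining_length('1***0***'))  # prints: 2 4
```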



- But how would we recognise one if we saw one?

- Building what?

- How many of them are there?

- Just how are they combined together?

- When is recombination beneficial?

- How does the effect of recombination depend on the fitness landscape (and on other operators/parameters)?



I found a PowerPoint presentation by Chris Stephens with some critiques of the Building Block Hypothesis, and of building blocks themselves. The presentation was given at FOGA 2007. I paid attention to four slides; I guess they speak for themselves. It's so intelligent and funny! :)

Stephens raised interesting questions about the Building Block Hypothesis, listed above.

The good old Building Block Hypothesis! I remember that once, in an evolutionary computation class, I asked the professor how anyone could guarantee that the unknown solution of a problem is reachable through short, low-order, above-average schemata, if the solution itself is unknown. What if the solution is neither short nor of low order? Could a genetic algorithm reach the optimum working through strings with below-average fitness? The professor was quite categorical and said that the Building Block Hypothesis really does explain the way genetic algorithms work.

Well, when the problem involves a decomposable/separable fitness function, it should work fine; introduce some dependencies among the variables, however, and you will see what happens. Even for EDAs, strong dependencies can be a frightening nightmare, since the probability distribution that samples the strings can become almost computationally infeasible to estimate.
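The contrast between separable and dependent fitness functions can be shown with a toy experiment. This is a minimal sketch (all function names and parameter choices are mine, not from Stephens' slides): on separable OneMax, every low-order schema fixing a 1 really is above average, so crossover assembles them nicely; on a concatenated deceptive 4-bit trap, the dependencies inside each block make the apparently fit low-order schemata point away from the optimum.

```python
import random

def onemax(bits):
    # Separable fitness: every bit contributes independently, so
    # low-order schemata such as 1**...* are genuinely above average.
    return sum(bits)

def trap4(bits):
    # Deceptive fitness with dependencies inside each 4-bit block:
    # zeros are rewarded unless the block is all ones, so the
    # low-order schemata that look fit mislead the search.
    total = 0
    for i in range(0, len(bits), 4):
        u = sum(bits[i:i + 4])
        total += 4 if u == 4 else 3 - u
    return total

def run_ga(fitness, n_bits=32, pop_size=60, generations=80, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # Binary tournament selection of two parents.
            a = max(rng.sample(pop, 2), key=fitness)
            b = max(rng.sample(pop, 2), key=fitness)
            # One-point crossover: splices partial solutions together.
            cut = rng.randint(1, n_bits - 1)
            child = a[:cut] + b[cut:]
            # Light bit-flip mutation.
            child = [1 - g if rng.random() < 1.0 / n_bits else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

print(onemax(run_ga(onemax)))  # typically reaches or nears the optimum of 32
print(trap4(run_ga(trap4)))    # often stuck below the optimum of 32
```

With both functions the optimum is the all-ones string of fitness 32, so any gap between the two printed values comes purely from the dependency structure, which is exactly the point of the closing paragraph.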