
A usage example

The following example illustrates one possible usage of Teem and explains some basic ideas.

Introduction

Artificial evolution is a population-based approach widely used to solve optimization problems. It is an active field of research with many variants of the same basic principle: the "best solutions" (according to some fitness criterion) are selected from a population of candidate solutions, which is varied using different methods, most importantly mutation operators. In allusion to the biological counterpart, the members of this population are called "genomes" and the whole population is sometimes referred to as the "gene pool".

Evolutionary algorithms (EAs) can be divided into two classes: generational EAs and steady-state EAs. Generational EAs create, test and select the individuals of one generation without any interaction between individuals. Selection and reproduction only take place after all individuals have been tested and their fitness determined. The introduction of this a priori artificial concept has the advantage of allowing independent and thus very objective evaluation of single individuals, and the fact that selection only takes place after all individuals have been tested creates optimal conditions for fair selection.

Steady-state EAs, on the other hand, do not have separate phases of testing, selection and reproduction; evolution happens much as in a biological system, with individuals interacting and reproducing at any time in an ongoing process. The system is in a "steady state", without a predefined end. In many respects steady-state EAs are more complex than their generational counterpart: chaotic interactions may lead to huge variations in individual conditions, completely distorting objective fitness values; kinship between interacting parents and children may create additional dynamics; and so on. Proponents of this technique argue that evolution as we know it from nature is not a process of optimization, although it is often falsely viewed as one. Rather, nature tends to find many viable solutions that often coexist, forming a dynamic equilibrium to which simple concepts of optimality cannot be applied. In other words, there is no optimal rabbit that always outsmarts the fox; the optimal rabbit strategy depends on the population of foxes. In nature, all equilibria are inevitably dynamic. Some have taken this argument a step further, stating that generation-based evolutionary algorithms as an optimization technique are already, or will very soon be, surpassed by other optimization methods in practically all technical applications.

In our view, both methods have their merits, which is why we have decided to include both of them in Teem.

Experiment

In the following experiment, we will use a generation-based evolutionary algorithm for its simplicity and the advantages listed above. In a typical workflow, a genome (e.g. the control program for a robot) is evaluated in a task (e.g. object collection) and its fitness (e.g. the number of objects collected) recorded. Each genome is usually tested a number of times with varying initial conditions, to prevent evolved solutions from working only from a specific starting point (e.g. the robot is set at a slightly different angle for each of the 10 "trials"). Once all genomes have been tested, the best ones are selected (and the others discarded) and used to create a new generation. At this step, a number of operations (e.g. mutation, crossover, duplication, ...) may be used to create some variation in the new population.
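
This workflow can be summarized in a short skeleton. The following is a minimal C++ sketch of such a generational loop, not Teem's actual API; Genome, evaluateTrial and reproduce are placeholder names introduced for this example, with stub bodies so the sketch compiles on its own.

    #include <cstddef>
    #include <vector>

    using Genome = std::vector<double>;    // e.g. the weights of a neural net

    // Stubs standing in for the steps described above; a real experiment would
    // run the robot in the simulator and apply proper selection operators.
    double evaluateTrial(const Genome& genome, int trial)
    {
        return 0.0;                        // stub: run one trial, return its fitness
    }

    std::vector<Genome> reproduce(const std::vector<Genome>& population,
                                  const std::vector<double>& fitness)
    {
        return population;                 // stub: select the best, mutate, copy
    }

    void runEvolution(std::vector<Genome> population,
                      int generations, int trialsPerGenome)
    {
        for (int gen = 0; gen < generations; ++gen)
        {
            std::vector<double> fitness(population.size(), 0.0);
            // Test every genome several times with varying initial conditions...
            for (std::size_t i = 0; i < population.size(); ++i)
                for (int t = 0; t < trialsPerGenome; ++t)
                    fitness[i] += evaluateTrial(population[i], t);
            // ...and only then select and reproduce: a generational EA.
            population = reproduce(population, fitness);
        }
    }

The key property of the generational scheme is visible in the structure: reproduce is only called once every genome of the current generation has been tested.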

The robot task

We want to evolve a robot capable of pushing an obstacle towards one side of an arena. In our example, the robot has two wheels, some infrared sensors to detect the object, and a 1-pixel camera to see the white arena wall (the "nest") towards which it should push.

Setting up the experiment

We will use an artificial neural net to control the robot. Neural nets have the advantage of being simple to implement, even on real robots with limited hardware capabilities, and usually show graceful degradation: slightly different inputs tend to lead to similar outputs, which allows a solution to be found gradually and also makes differences in hardware and software less important. In our case, the net's inputs will be the values of the infrared sensors and the pixel of the camera, and its outputs the two wheel speeds. Our genome will be the weights of the neural net. Our goal is to find a combination of weights that allows the robot to find the object in the arena and push it towards the nest. At the beginning of the experiment we create a set of random controllers (e.g. 100 genomes composed of random numbers).
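
To make this concrete, here is a minimal sketch of such a controller and of the random initial population. It assumes, purely for the sake of the example, 8 infrared sensors plus the camera pixel as inputs and a single-layer perceptron with one bias per output; Teem's actual controller classes are not shown here.

    #include <cmath>
    #include <random>
    #include <vector>

    // Assumed sensor layout for this example: 8 infrared sensors + 1 camera pixel.
    const int NUM_INPUTS  = 9;
    const int NUM_OUTPUTS = 2;          // left and right wheel speeds

    using Genome = std::vector<double>; // one weight per connection, plus biases

    // Compute the two wheel speeds from the sensor values: a plain single-layer
    // perceptron with tanh activation, all weights taken from the genome.
    std::vector<double> computeWheelSpeeds(const Genome& weights,
                                           const std::vector<double>& inputs)
    {
        std::vector<double> outputs(NUM_OUTPUTS);
        for (int o = 0; o < NUM_OUTPUTS; ++o)
        {
            double sum = weights[o * (NUM_INPUTS + 1) + NUM_INPUTS]; // bias term
            for (int i = 0; i < NUM_INPUTS; ++i)
                sum += weights[o * (NUM_INPUTS + 1) + i] * inputs[i];
            outputs[o] = std::tanh(sum);  // smooth, bounded wheel speed
        }
        return outputs;
    }

    // Create the initial population: 100 genomes of random weights in [-1, 1].
    std::vector<Genome> createRandomPopulation(int size = 100)
    {
        std::mt19937 rng(std::random_device{}());
        std::uniform_real_distribution<double> dist(-1.0, 1.0);
        std::vector<Genome> population(size,
                                       Genome(NUM_OUTPUTS * (NUM_INPUTS + 1)));
        for (Genome& genome : population)
            for (double& w : genome)
                w = dist(rng);
        return population;
    }

With tanh outputs, the wheel speeds vary smoothly with the weights, which fits the graceful-degradation argument above.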

Testing controllers

Each of the controllers is then transferred to the robot and tested for a predefined period of time (e.g. using the Enki simulator to model its infrared sensors, camera, and collisions with walls and the object). At the end of each trial, we evaluate the controller, i.e. judge how well it has performed. To do this, we need to define a fitness function.
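
A single trial can then be pictured as the following loop. The Robot and World types are simplified stand-ins for the corresponding simulator objects (with stub bodies), computeWheelSpeeds refers to the controller sketched above, and the trial duration and time step are arbitrary values chosen for this sketch.

    #include <vector>

    using Genome = std::vector<double>;

    // Simplified stand-ins for the simulator objects; a real trial would use
    // the Enki world and robot instead of these stubs.
    struct Robot
    {
        std::vector<double> readSensors() { return std::vector<double>(9, 0.0); }
        void setWheelSpeeds(double left, double right) {}
    };

    struct World
    {
        Robot robot;
        void step(double dt) {}         // advance the physics simulation by dt
    };

    // The controller sketched in the previous section.
    std::vector<double> computeWheelSpeeds(const Genome& weights,
                                           const std::vector<double>& inputs);

    // Run one trial of fixed duration; afterwards the world holds the final
    // positions needed by the fitness evaluation described below.
    void runTrial(World& world, const Genome& genome,
                  double duration = 60.0, double dt = 0.05)
    {
        for (double t = 0.0; t < duration; t += dt)
        {
            std::vector<double> inputs = world.robot.readSensors();
            std::vector<double> speeds = computeWheelSpeeds(genome, inputs);
            world.robot.setWheelSpeeds(speeds[0], speeds[1]);
            world.step(dt);
        }
    }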

Defining a fitness function

A good fitness function should allow the robot to improve gradually (i.e. score 12 out of 100 points rather than plain "success" or "failure"). Defining a good fitness function is one of the most important parts of setting up an evolutionary robotics experiment. In our case, we will use the distance the obstacle has been moved towards the nest as the fitness. This allows even relatively bad controllers to gain some fitness and increases our chances of evolving suitable behaviours. Another important concept in this context is the fitness landscape, an imaginary surface where each point represents a possible combination of genes and the height of the landscape at that point corresponds to that genome's fitness. The evolution of a genome can then be understood as a gradual walk through this landscape, with a tendency to climb towards the summits ("optima"). Ideally, the fitness landscape should be smooth, i.e. consist of rolling hills rather than steep cliffs and valleys, to allow for a gradual walk upwards.
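
As an illustration, the fitness described above might be computed as follows, assuming, purely for this example, that the white target wall lies along y = 0 in arena coordinates.

    #include <algorithm>

    // Fitness as described above: the distance the object was pushed towards
    // the white wall, assumed (for this example) to lie along y = 0. Clamped
    // at zero so that pushing the object away simply scores nothing.
    double computeFitness(double objectStartY, double objectEndY)
    {
        return std::max(0.0, objectStartY - objectEndY);
    }

Even a controller that only nudges the object a few centimetres in the right direction earns some credit, instead of facing an all-or-nothing cliff in the fitness landscape.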

Multiple trials

After testing the first genome for 10 trials with varying starting conditions, we will sum the fitness of all trials and record it for this genome. We then repeat the same procedure with all other genomes.
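
In code, this amounts to a small loop; runTrialWithAngle is a hypothetical helper standing in for the trial and fitness code sketched above.

    #include <vector>

    using Genome = std::vector<double>;

    // Hypothetical helper: run one complete trial with the given starting
    // angle and return the fitness obtained in that trial.
    double runTrialWithAngle(const Genome& genome, double startAngle);

    // Sum the fitness of one genome over 10 trials with varying initial heading.
    double evaluateGenome(const Genome& genome)
    {
        const int trials = 10;
        const double twoPi = 6.283185307179586;
        double total = 0.0;
        for (int t = 0; t < trials; ++t)
            total += runTrialWithAngle(genome, t * twoPi / trials);
        return total;
    }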

Selection and reproduction

After all controllers have been tested, we choose some of them (say, the best 30%) to produce the next generation. Again, there are different ways to proceed, but in our example we simply pick one of the best genomes at random. This genome is then mutated (i.e. some of its numbers are changed, each with a low probability, e.g. 0.5%) and copied into the new generation. In addition, we may also wish to keep the best controllers ("the elite") without any changes (this is usually called "elitism"). Once we have recreated the same number of genomes (100 in our example), the process of testing starts over for this new generation.
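
Here is one possible implementation of this scheme, fleshing out the reproduce stub from the earlier skeleton. It is a sketch rather than Teem's actual operators; in particular, the Gaussian mutation strength of 0.2 is an assumption made for this example.

    #include <algorithm>
    #include <cstddef>
    #include <numeric>
    #include <random>
    #include <vector>

    using Genome = std::vector<double>;

    // Keep the best 30% as parents, copy the single best genome unchanged
    // (elitism), and fill the rest of the new generation with mutated copies
    // of randomly chosen parents.
    std::vector<Genome> reproduce(const std::vector<Genome>& population,
                                  const std::vector<double>& fitness,
                                  std::mt19937& rng)
    {
        const std::size_t popSize = population.size();
        const std::size_t parents = std::max<std::size_t>(1, popSize * 30 / 100);

        // Rank genome indices from best to worst fitness.
        std::vector<std::size_t> order(popSize);
        std::iota(order.begin(), order.end(), 0);
        std::sort(order.begin(), order.end(),
                  [&](std::size_t a, std::size_t b)
                  { return fitness[a] > fitness[b]; });

        std::uniform_int_distribution<std::size_t> pickParent(0, parents - 1);
        std::uniform_real_distribution<double> chance(0.0, 1.0);
        std::normal_distribution<double> perturbation(0.0, 0.2); // assumed strength

        std::vector<Genome> next;
        next.push_back(population[order[0]]);   // elitism: best genome survives
        while (next.size() < popSize)
        {
            Genome child = population[order[pickParent(rng)]];
            for (double& gene : child)
                if (chance(rng) < 0.005)        // mutate each gene with 0.5% probability
                    gene += perturbation(rng);
            next.push_back(child);
        }
        return next;
    }

A caller would construct a single std::mt19937 once and reuse it across generations, so that runs can be reproduced from a known seed.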

Judging progress

After some generations (20-2000, depending on the complexity of the task), the fitness values of the best and average controllers will stagnate, which usually means that the algorithm has converged on some kind of solution. Unlike many deterministic mathematical optimization methods, evolutionary algorithms are stochastic and will generally not yield the same result when rerun. In most cases it is thus a good idea to repeat experiments a few times.
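
One simple way to detect such stagnation is to record the best fitness of each generation and stop once it has not improved for a fixed number of generations; the 20-generation patience window below is an arbitrary choice for this sketch.

    #include <algorithm>
    #include <vector>

    // Return true when the best fitness has not improved over the last
    // 'patience' generations, i.e. the run has probably converged.
    bool hasConverged(const std::vector<double>& bestPerGeneration,
                      int patience = 20)
    {
        const int n = static_cast<int>(bestPerGeneration.size());
        if (n <= patience)
            return false;
        double recentBest  = *std::max_element(bestPerGeneration.end() - patience,
                                               bestPerGeneration.end());
        double earlierBest = *std::max_element(bestPerGeneration.begin(),
                                               bestPerGeneration.end() - patience);
        return recentBest <= earlierBest;   // nothing new in the last window
    }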

Troubleshooting

In some cases, suitable controllers fail to evolve and the fitness stubbornly remains close to its initial value. This is generally referred to as the "bootstrapping problem" and may have a number of different causes. The most probable one is an ill-defined fitness function (i.e. one that does not allow controllers to gradually approach a solution). If the task is simply too hard and no smooth fitness function can be found, incremental evolution may help: controllers are first evolved on a subset of the task or on an easier variant, and the bar is gradually raised as evolution progresses (see the sketch below). Another possible cause of bootstrapping problems is an initial population too small to cover a large enough proportion of the space of possible solutions.
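
Incremental evolution can be as simple as the following sketch, in which the object's starting distance from the wall (a parameter invented for this example, along with the mastery threshold) is increased whenever the population masters the current difficulty.

    // Incremental evolution, sketched: start with the object close to the
    // target wall and move it further away each time the population masters
    // the current difficulty, so the task gets gradually harder.
    double objectStartDistance = 10.0;        // easy at first: object near the wall

    void maybeRaiseTheBar(double bestFitness)
    {
        const double masteryThreshold = 8.0;  // assumed "good enough" fitness
        const double maxDistance = 80.0;      // full task difficulty
        if (bestFitness > masteryThreshold && objectStartDistance < maxDistance)
            objectStartDistance += 10.0;      // raise the bar for later generations
    }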

Sometimes fitness will vary greatly between trials. Again, the fitness function may be at fault, but some tasks simply require a high number of trials to average out large statistical fluctuations when evaluating the controllers. Large fitness variation between generations, on the other hand, is usually due to a high mutation rate.

Another common problem is fast convergence on a sub-optimal solution (i.e. on a local optimum in the fitness landscape, a minor summit from which every small step leads downwards). Causes may be a small gene pool, or a badly designed fitness function that leads to a rugged fitness landscape.

