The algorithms module is intended to contain some specific algorithms in order to execute very common evolutionary algorithms. The methods used here are more for convenience than reference, as the implementation of every evolutionary algorithm may vary infinitely. Most of the algorithms in this module use operators registered in the toolbox. Generally, the keywords used are mate() for crossover, mutate() for mutation, select() for selection and evaluate() for evaluation.
You are encouraged to write your own algorithms in order to make them do what you really want them to do.
These are complete boxed algorithms that are somewhat limited to the very basic evolutionary computation concepts. All algorithms accept, in addition to their arguments, an initialized Statistics object to maintain stats of the evolution, an initialized HallOfFame to hold the best individual(s) to appear in the population, and a boolean verbose to specify whether to log what is happening during the evolution or not.
This algorithm reproduces the simplest evolutionary algorithm as presented in chapter 7 of [Back2000].
Parameters:
  * population – A list of individuals.
  * toolbox – A Toolbox that contains the evolution operators.
  * cxpb – The probability of mating two individuals.
  * mutpb – The probability of mutating an individual.
  * ngen – The number of generations.
  * stats – A Statistics object that is updated in place, optional.
  * halloffame – A HallOfFame object that will contain the best individuals, optional.
  * verbose – Whether or not to log the statistics.

Returns: The final population and a Logbook with the statistics of the evolution.
The algorithm takes in a population and evolves it in place using the varAnd() method. It returns the optimized population and a Logbook with the statistics of the evolution. The logbook will contain the generation number, the number of evaluations for each generation and the statistics if a Statistics object is given as argument. The cxpb and mutpb arguments are passed to the varAnd() function. The pseudocode goes as follows:
evaluate(population)
for g in range(ngen):
    population = select(population, len(population))
    offspring = varAnd(population, toolbox, cxpb, mutpb)
    evaluate(offspring)
    population = offspring
As stated in the pseudocode above, the algorithm goes as follows. First, it evaluates the individuals with an invalid fitness. Second, it enters the generational loop, where the selection procedure is applied to entirely replace the parental population. The 1:1 replacement ratio of this algorithm requires the selection procedure to be stochastic and able to select the same individual multiple times, for example selTournament() and selRoulette(). Third, it applies the varAnd() function to produce the next generation population. Fourth, it evaluates the new individuals and computes the statistics on this population. Finally, when ngen generations are done, the algorithm returns a tuple with the final population and a Logbook of the evolution.
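The generational loop above can be sketched without the framework. The following is a minimal, self-contained illustration of the same scheme on a OneMax toy problem; all names here (evaluate, sel_tournament, vary) are hypothetical stand-ins, not the deap API, and the variation step is simplified to bit-flip mutation only:

```python
import random

random.seed(42)

def evaluate(ind):
    # OneMax toy problem: fitness is the number of ones
    return sum(ind)

def sel_tournament(pop, k, tournsize=3):
    # Stochastic selection: the same individual may be chosen several times
    return [max(random.sample(pop, tournsize), key=evaluate) for _ in range(k)]

def vary(pop, mutpb):
    # Simplified stand-in for varAnd(): clone, then bit-flip mutation only
    offspring = [list(ind) for ind in pop]
    for ind in offspring:
        if random.random() < mutpb:
            j = random.randrange(len(ind))
            ind[j] = 1 - ind[j]
    return offspring

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for g in range(40):
    population = sel_tournament(population, len(population))  # full replacement
    population = vary(population, mutpb=0.2)
best = max(evaluate(ind) for ind in population)
```

Because the selection is stochastic, good individuals are copied several times and the population steadily climbs toward the all-ones optimum.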
Note
Using a non-stochastic selection method will result in no selection as the operator selects n individuals from a pool of n.
This function expects the toolbox.mate(), toolbox.mutate(), toolbox.select() and toolbox.evaluate() aliases to be registered in the toolbox.
[Back2000] | Back, Fogel and Michalewicz, “Evolutionary Computation 1: Basic Algorithms and Operators”, 2000. |
This is the (mu + lambda) evolutionary algorithm.
Parameters:
  * population – A list of individuals.
  * toolbox – A Toolbox that contains the evolution operators.
  * mu – The number of individuals to select for the next generation.
  * lambda_ – The number of children to produce at each generation.
  * cxpb – The probability that an offspring is produced by crossover.
  * mutpb – The probability that an offspring is produced by mutation.
  * ngen – The number of generations.
  * stats – A Statistics object that is updated in place, optional.
  * halloffame – A HallOfFame object that will contain the best individuals, optional.
  * verbose – Whether or not to log the statistics.

Returns: The final population and a Logbook with the statistics of the evolution.
The algorithm takes in a population and evolves it in place using the varOr() function. It returns the optimized population and a Logbook with the statistics of the evolution. The logbook will contain the generation number, the number of evaluations for each generation and the statistics if a Statistics object is given as argument. The cxpb and mutpb arguments are passed to the varOr() function. The pseudocode goes as follows:
evaluate(population)
for g in range(ngen):
    offspring = varOr(population, toolbox, lambda_, cxpb, mutpb)
    evaluate(offspring)
    population = select(population + offspring, mu)
First, the individuals having an invalid fitness are evaluated. Second, the evolutionary loop begins by producing lambda_ offspring from the population; the offspring are generated by the varOr() function. The offspring are then evaluated and the next generation population is selected from both the offspring and the population. Finally, when ngen generations are done, the algorithm returns a tuple with the final population and a Logbook of the evolution.
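A property worth noting is that plus-selection makes the best fitness monotonically non-decreasing, since the parents compete with their own offspring for survival. This can be illustrated with a minimal, self-contained sketch (hypothetical names, not the deap API; the varOr() step is simplified to a single bit-flip mutation and the survivor selection to truncation):

```python
import random

random.seed(1)

MU, LAMBDA = 8, 16

def evaluate(ind):
    # OneMax toy fitness
    return sum(ind)

def make_offspring(pop, lambda_):
    # Simplified stand-in for varOr(): clone a random parent, flip one bit
    offspring = []
    for _ in range(lambda_):
        ind = list(random.choice(pop))
        j = random.randrange(len(ind))
        ind[j] = 1 - ind[j]
        offspring.append(ind)
    return offspring

population = [[random.randint(0, 1) for _ in range(12)] for _ in range(MU)]
best_hist = []
for g in range(30):
    offspring = make_offspring(population, LAMBDA)
    # Plus-selection: the mu best of parents AND offspring survive,
    # so the best fitness can never decrease from one generation to the next
    population = sorted(population + offspring, key=evaluate, reverse=True)[:MU]
    best_hist.append(evaluate(population[0]))
```

Tracking best_hist shows the elitist behaviour: each entry is at least as large as the previous one.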
This function expects toolbox.mate(), toolbox.mutate(), toolbox.select() and toolbox.evaluate() aliases to be registered in the toolbox. This algorithm uses the varOr() variation.
This is the (mu, lambda) evolutionary algorithm.
Parameters:
  * population – A list of individuals.
  * toolbox – A Toolbox that contains the evolution operators.
  * mu – The number of individuals to select for the next generation.
  * lambda_ – The number of children to produce at each generation.
  * cxpb – The probability that an offspring is produced by crossover.
  * mutpb – The probability that an offspring is produced by mutation.
  * ngen – The number of generations.
  * stats – A Statistics object that is updated in place, optional.
  * halloffame – A HallOfFame object that will contain the best individuals, optional.
  * verbose – Whether or not to log the statistics.

Returns: The final population and a Logbook with the statistics of the evolution.
The algorithm takes in a population and evolves it in place using the varOr() function. It returns the optimized population and a Logbook with the statistics of the evolution. The logbook will contain the generation number, the number of evaluations for each generation and the statistics if a Statistics object is given as argument. The cxpb and mutpb arguments are passed to the varOr() function. The pseudocode goes as follows:
evaluate(population)
for g in range(ngen):
    offspring = varOr(population, toolbox, lambda_, cxpb, mutpb)
    evaluate(offspring)
    population = select(offspring, mu)
First, the individuals having an invalid fitness are evaluated. Second, the evolutionary loop begins by producing lambda_ offspring from the population; the offspring are generated by the varOr() function. The offspring are then evaluated and the next generation population is selected from the offspring only. Finally, when ngen generations are done, the algorithm returns a tuple with the final population and a Logbook of the evolution.
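The only difference from the plus scheme is the survivor pool. A small, hypothetical sketch of just the two selection rules on plain lists (not deap code; fitness is the single number carried by each "individual") makes the contrast explicit:

```python
def select_plus(parents, offspring, mu, key):
    # (mu + lambda): parents compete with their offspring for survival
    return sorted(parents + offspring, key=key, reverse=True)[:mu]

def select_comma(parents, offspring, mu, key):
    # (mu, lambda): parents are discarded entirely; requires lambda >= mu
    assert len(offspring) >= mu
    return sorted(offspring, key=key, reverse=True)[:mu]

parents = [[3], [9]]
offspring = [[5], [1], [7], [4]]
plus = select_plus(parents, offspring, 2, key=lambda ind: ind[0])
comma = select_comma(parents, offspring, 2, key=lambda ind: ind[0])
# plus keeps the strong parent:  [[9], [7]]
# comma forgets it:              [[7], [5]]
```

Comma selection can therefore lose the best individual found so far, which is exactly why a HallOfFame is useful with this algorithm.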
Note
Care must be taken when the lambda:mu ratio is 1 to 1, as a non-stochastic selection will result in no selection at all since the operator selects lambda individuals from a pool of mu.
This function expects toolbox.mate(), toolbox.mutate(), toolbox.select() and toolbox.evaluate() aliases to be registered in the toolbox. This algorithm uses the varOr() variation.
This algorithm implements the ask-tell model proposed in [Colette2010], where ask is called generate and tell is called update.
Parameters:
  * toolbox – A Toolbox that contains the generate and update operators.
  * ngen – The number of generations.
  * stats – A Statistics object that is updated in place, optional.
  * halloffame – A HallOfFame object that will contain the best individuals, optional.
  * verbose – Whether or not to log the statistics.

Returns: The final population and a Logbook with the statistics of the evolution.
The algorithm generates the individuals using the toolbox.generate() function and updates the generation method with the toolbox.update() function. It returns the optimized population and a Logbook with the statistics of the evolution. The logbook will contain the generation number, the number of evaluations for each generation and the statistics if a Statistics object is given as argument. The pseudocode goes as follows:
for g in range(ngen):
    population = toolbox.generate()
    evaluate(population)
    toolbox.update(population)
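The generate-update loop can be sketched with a toy strategy object. The class below is a hypothetical stand-in, not deap's CMA-ES: it samples candidates around a centroid (generate, i.e. ask) and moves the centroid to the best candidate while shrinking the step size (update, i.e. tell); unlike deap, fitness values are passed explicitly to update here:

```python
import random

random.seed(0)

def sphere(x):
    # Toy objective to minimize
    return sum(v * v for v in x)

class GaussianStrategy:
    """Hypothetical ask-tell strategy, not deap's CMA-ES implementation."""
    def __init__(self, centroid, sigma, lambda_):
        self.centroid = list(centroid)
        self.sigma = sigma
        self.lambda_ = lambda_

    def generate(self):
        # "ask": sample lambda_ candidates around the current centroid
        return [[c + random.gauss(0, self.sigma) for c in self.centroid]
                for _ in range(self.lambda_)]

    def update(self, population, fitnesses):
        # "tell": move the centroid only on improvement, shrink the step size
        i = min(range(len(population)), key=lambda k: fitnesses[k])
        if fitnesses[i] < sphere(self.centroid):
            self.centroid = population[i]
        self.sigma *= 0.95

strategy = GaussianStrategy(centroid=[5.0, -3.0], sigma=1.0, lambda_=10)
for g in range(100):
    population = strategy.generate()
    fitnesses = [sphere(ind) for ind in population]
    strategy.update(population, fitnesses)
```

The point of the ask-tell split is that the evaluation stays entirely outside the strategy, so the same loop works for any generate/update pair.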
[Colette2010] | Collette, Y., N. Hansen, G. Pujol, D. Salazar Aponte and R. Le Riche (2010). On Object-Oriented Programming of Optimizers - Examples in Scilab. In P. Breitkopf and R. F. Coelho, eds.: Multidisciplinary Design Optimization in Computational Mechanics, Wiley, pp. 527-565. |
Variations are smaller parts of the algorithms that can be used separately to build more complex algorithms.
Part of an evolutionary algorithm applying only the variation part (crossover and mutation). The modified individuals have their fitness invalidated. The individuals are cloned so the returned population is independent of the input population.
Parameters:
  * population – A list of individuals to vary.
  * toolbox – A Toolbox that contains the evolution operators.
  * cxpb – The probability of mating two individuals.
  * mutpb – The probability of mutating an individual.

Returns: A list of varied individuals that are independent of their parents.
The variation goes as follows. First, the parental population P_p is duplicated using the toolbox.clone() method and the result is put into the offspring population P_o. A first loop over P_o is executed to mate pairs of consecutive individuals. According to the crossover probability cxpb, the individuals x_i and x_{i+1} are mated using the toolbox.mate() method. The resulting children replace their respective parents in P_o. A second loop over the resulting P_o is executed to mutate every individual with a probability mutpb. When an individual is mutated, it replaces its non-mutated version in P_o. The resulting P_o is returned.

This variation is named And because of its propensity to apply both crossover and mutation on the individuals. Note that both operators are not applied systematically; the resulting individuals can be generated from crossover only, mutation only, crossover and mutation, or reproduction, according to the given probabilities. Both probabilities should be in [0, 1].
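The two loops described above can be condensed into a short, self-contained sketch (a hypothetical stand-alone version operating on plain lists, not the deap implementation):

```python
import random

def var_and(population, cxpb, mutpb, mate, mutate):
    # Clone first, so the returned offspring are independent of the parents
    offspring = [list(ind) for ind in population]
    # First loop: mate pairs of consecutive individuals with probability cxpb
    for i in range(1, len(offspring), 2):
        if random.random() < cxpb:
            offspring[i - 1], offspring[i] = mate(offspring[i - 1], offspring[i])
    # Second loop: mutate every individual with probability mutpb
    for i in range(len(offspring)):
        if random.random() < mutpb:
            offspring[i] = mutate(offspring[i])
    return offspring

def one_point(a, b):
    # One-point crossover producing two children
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def flip_bit(ind):
    # Flip exactly one random bit, returning a new list
    j = random.randrange(len(ind))
    return ind[:j] + [1 - ind[j]] + ind[j + 1:]

random.seed(7)
parents = [[0] * 8 for _ in range(6)]
children = var_and(parents, cxpb=0.5, mutpb=0.5, mate=one_point, mutate=flip_bit)
```

Starting from all-zero parents, crossover leaves individuals unchanged and each mutated child differs by exactly one bit, which makes the "crossover and/or mutation" behaviour easy to observe.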
Part of an evolutionary algorithm applying only the variation part (crossover, mutation or reproduction). The modified individuals have their fitness invalidated. The individuals are cloned so the returned population is independent of the input population.
Parameters:
  * population – A list of individuals to vary.
  * toolbox – A Toolbox that contains the evolution operators.
  * lambda_ – The number of children to produce.
  * cxpb – The probability that an offspring is produced by crossover.
  * mutpb – The probability that an offspring is produced by mutation.

Returns: A list of varied individuals that are independent of their parents.
The variation goes as follows. On each of the lambda_ iterations, it selects one of the three operations: crossover, mutation or reproduction. In the case of a crossover, two individuals are selected at random from the parental population P_p, those individuals are cloned using the toolbox.clone() method and then mated using the toolbox.mate() method. Only the first child is appended to the offspring population P_o; the second child is discarded. In the case of a mutation, one individual is selected at random from P_p, it is cloned and then mutated using the toolbox.mutate() method. The resulting mutant is appended to P_o. In the case of a reproduction, one individual is selected at random from P_p, cloned and appended to P_o.

This variation is named Or because an offspring will never result from both operations crossover and mutation. The sum of both probabilities shall be in [0, 1]; the reproduction probability is 1 - cxpb - mutpb.
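The choice between the three operations can be sketched in a self-contained way (hypothetical stand-in operating on plain lists, not the deap implementation; only the first child of a crossover is kept):

```python
import random

def var_or(population, lambda_, cxpb, mutpb, mate, mutate):
    assert cxpb + mutpb <= 1.0, "cxpb + mutpb must be at most 1"
    offspring = []
    for _ in range(lambda_):
        r = random.random()
        if r < cxpb:
            # Crossover: clone two random parents, keep only the first child
            a, b = (list(ind) for ind in random.sample(population, 2))
            child, _ = mate(a, b)
            offspring.append(child)
        elif r < cxpb + mutpb:
            # Mutation: clone one random parent and mutate the clone
            offspring.append(mutate(list(random.choice(population))))
        else:
            # Reproduction (probability 1 - cxpb - mutpb): clone unchanged
            offspring.append(list(random.choice(population)))
    return offspring

def one_point(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def flip_bit(ind):
    j = random.randrange(len(ind))
    ind[j] = 1 - ind[j]
    return ind

random.seed(3)
parents = [[1] * 6 for _ in range(4)]
children = var_or(parents, lambda_=10, cxpb=0.4, mutpb=0.3,
                  mate=one_point, mutate=flip_bit)
```

With all-one parents, each child comes from exactly one operation: crossed or reproduced children stay all ones, while mutated children have exactly one zero, so no child ever shows the effect of both operators.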
A module that provides support for the Covariance Matrix Adaptation Evolution Strategy.
A strategy that will keep track of the basic parameters of the CMA-ES algorithm ([Hansen2001]).
Other parameters can be provided as described in the next table
Parameter | Default | Details |
---|---|---|
lambda_ | int(4 + 3 * log(N)) | Number of children to produce at each generation, N is the individual’s size (integer). |
mu | int(lambda_ / 2) | The number of parents to keep from the lambda children (integer). |
cmatrix | identity(N) | The initial covariance matrix of the distribution that will be sampled. |
weights | "superlinear" | Decrease speed, can be "superlinear", "linear" or "equal". |
cs | (mueff + 2) / (N + mueff + 3) | Cumulation constant for step-size. |
damps | 1 + 2 * max(0, sqrt(( mueff - 1) / (N + 1)) - 1) + cs | Damping for step-size. |
ccum | 4 / (N + 4) | Cumulation constant for covariance matrix. |
ccov1 | 2 / ((N + 1.3)^2 + mueff) | Learning rate for rank-one update. |
ccovmu | 2 * (mueff - 2 + 1 / mueff) / ((N + 2)^2 + mueff) | Learning rate for rank-mu update. |
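As a numeric illustration, the defaults in the table can be computed for a given problem size. The sketch below assumes the standard CMA-ES "superlinear" recombination weights, w_i proportional to log(mu + 0.5) - log(i), to derive mueff; that exact weight formula is an assumption and may differ in detail from the implementation:

```python
from math import log, sqrt

N = 10                                     # problem dimensionality (individual size)
lambda_ = int(4 + 3 * log(N))              # default number of children
mu = int(lambda_ / 2)                      # default number of parents

# Assumed "superlinear" decrease of the recombination weights (standard CMA-ES)
weights = [log(mu + 0.5) - log(i + 1) for i in range(mu)]
total = sum(weights)
weights = [w / total for w in weights]     # normalize to sum to 1
mueff = 1.0 / sum(w * w for w in weights)  # variance-effective selection mass

# Defaults from the table above
cs = (mueff + 2) / (N + mueff + 3)                             # step-size cumulation
damps = 1 + 2 * max(0, sqrt((mueff - 1) / (N + 1)) - 1) + cs   # step-size damping
ccum = 4 / (N + 4)                                             # covariance cumulation
ccov1 = 2 / ((N + 1.3) ** 2 + mueff)                           # rank-one learning rate
ccovmu = 2 * (mueff - 2 + 1 / mueff) / ((N + 2) ** 2 + mueff)  # rank-mu learning rate
```

For N = 10 this gives lambda_ = 10, mu = 5 and mueff ≈ 3.17.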
[Hansen2001] | Hansen and Ostermeier, 2001. Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation |
Computes the parameters depending on lambda_. It needs to be called again if lambda_ changes during evolution.
Parameters: params – A dictionary of the manually set parameters.
A CMA-ES strategy that uses the 1 + lambda paradigm ([Igel2007]).
Other parameters can be provided as described in the next table
Parameter | Default | Details |
---|---|---|
d | 1.0 + N / (2.0 * lambda_) | Damping for step-size. |
ptarg | 1.0 / (5 + sqrt(lambda_) / 2.0) | Target success rate. |
cp | ptarg * lambda_ / (2.0 + ptarg * lambda_) | Step size learning rate. |
cc | 2.0 / (N + 2.0) | Cumulation time horizon. |
ccov | 2.0 / (N**2 + 6.0) | Covariance matrix learning rate. |
pthresh | 0.44 | Threshold success rate. |
[Igel2007] | Igel, Hansen, Roth, 2007. Covariance matrix adaptation for multi-objective optimization. Evolutionary Computation Spring;15(1):1-28. |
Computes the parameters depending on lambda_. It needs to be called again if lambda_ changes during evolution.
Parameters: params – A dictionary of the manually set parameters.
Multiobjective CMA-ES strategy based on the paper [Voss2010]. It is used similarly to the standard CMA-ES strategy, with a generate-update scheme.
Other parameters can be provided as described in the next table
Parameter | Default | Details |
---|---|---|
d | 1.0 + N / 2.0 | Damping for step-size. |
ptarg | 1.0 / (5 + 1.0 / 2.0) | Target success rate. |
cp | ptarg / (2.0 + ptarg) | Step size learning rate. |
cc | 2.0 / (N + 2.0) | Cumulation time horizon. |
ccov | 2.0 / (N**2 + 6.0) | Covariance matrix learning rate. |
pthresh | 0.44 | Threshold success rate. |
[Voss2010] | Voss, Hansen, Igel, “Improved Step Size Adaptation for the MO-CMA-ES”, 2010. |
Generate a population of individuals of type ind_init from the current strategy.
Parameters: ind_init – A function object that is able to initialize an individual from a list.

Returns: A list of individuals with a private attribute _ps. This attribute is essential to the update function; it indicates that the individual is an offspring and gives the index of its parent.