GENETIC AND
EVOLUTIONARY
ALGORITHMS
DEPT. OF SYSTEMS ENGINEERING AND AUTOMATION
UC3M
Genetic algorithms
• John Holland (1975), "Adaptation in Natural and
Artificial Systems."
• Algorithms that manage populations of coded
solutions to problems.
• The search for good solutions takes place in the space of
encoded solutions.
• Populations are manipulated through selection, crossover
and mutation.
Features
• They do not work with the objects themselves, but with an
encoding of them.
• GAs search over a whole generation of
objects; they do not track a single element.
• They use a fitness function that tells us
how well adapted each individual is.
• The transition rules are probabilistic, not deterministic.
Genetic search
Terminology of Genetic Algorithms
Cycle of Genetic Algorithms
• Key idea: Give
preference to the
best individuals,
allowing them to
pass their genes to
the next generation.
Selection operator
The goodness of an individual is calculated with the
fitness function.
Selection methods
Selection example for f(x) = x²
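The selection step for the slide's f(x) = x² example can be sketched as roulette-wheel (fitness-proportionate) selection; the population values below are illustrative assumptions, not taken from the original table.

```python
import random

def roulette_select(population, fitness, k):
    """Fitness-proportionate selection: each individual is picked
    with probability fitness(x) / total fitness of the population."""
    weights = [fitness(x) for x in population]
    return random.choices(population, weights=weights, k=k)

f = lambda x: x * x                  # fitness function from the slide
population = [1, 5, 9, 13, 20]       # illustrative individuals
random.seed(0)
parents = roulette_select(population, f, k=4)
print(parents)                       # large x values dominate the draw
```

Because weights grow quadratically, the largest individuals are selected far more often, which is exactly the "preference to the best individuals" idea above.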
Crossover operator
• Two individuals of the population
are chosen through the selection
operator.
• A crossover point is chosen at random.
• The values of the two strings are
exchanged at that point.
• By recombining portions of good
individuals, even better individuals
can be created.
Types of crossover
Operation of one-point crossover
• Once the parents are
selected, a crossover point
in the parents' strings is
chosen with probability Pc,
and the two children
are obtained.
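One-point crossover as described above can be sketched as follows; the bit strings and the crossover probability Pc are illustrative.

```python
import random

def one_point_crossover(parent1, parent2, pc=0.8):
    """With probability pc, pick a random cut point and swap the tails."""
    if random.random() < pc:
        point = random.randint(1, len(parent1) - 1)   # crossover point
        child1 = parent1[:point] + parent2[point:]
        child2 = parent2[:point] + parent1[point:]
        return child1, child2
    return parent1[:], parent2[:]    # no crossover: children copy parents

random.seed(1)
p1 = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]
p2 = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]
c1, c2 = one_point_crossover(p1, p2)
print(c1, c2)
```

At every position the two children together carry exactly the two parental bits, so no genetic material is lost or invented by the operator.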
Mutation
• Mutation operator:
• With a certain low
probability, some of the new
individuals have bits of their
strings mutated.
• Its purpose is to maintain
diversity within the
population and prevent
premature convergence.
• Mutation and selection (no
crossover) yield a noise-tolerant,
steepest-ascent
optimization algorithm.
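Bit-flip mutation, as a minimal sketch (the string and the mutation rate pm are illustrative):

```python
import random

def mutate(bits, pm=0.01):
    """Flip each bit independently with a small probability pm."""
    return [b ^ 1 if random.random() < pm else b for b in bits]

random.seed(2)
child = mutate([1, 0, 1, 0, 0, 1, 1, 0, 0, 0], pm=0.1)
print(child)
```

Keeping pm small means most offspring are unchanged, while an occasional flip reintroduces alleles that selection and crossover may have eliminated.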
Dominance
•In nature, most species associate the genotype with
a pair of chromosomes, where certain alleles dominate
over others (which are recessive), so that the phenotype is
determined by the combination of these two
chromosomes and by the dominance of alleles.
Domination map
• Hollstein developed a triallelic dominance
scheme, including a third allele so as to have both a
dominant 1 and a recessive 1.
Classical algorithms vs. genetic algorithms
• Classical: generate a single point in each iteration; the
sequence of points approaches the optimal solution.
• Genetic: maintain a population of points in each iteration;
the best point of the population approaches the optimal
solution.
• Classical: select the next point in the sequence by a
deterministic computation.
• Genetic: select the next population by a computation that
uses random number generators.
Why do Genetic Algorithms work?
• Do GAs merely exchange bits?
• What is behind them?
• Holland proved a theorem, known as
"Holland's schema theorem".
• There are other theorems as well, some based
on the analysis of Markov chains:
• Is there a chain of distinct solutions that leads to
the optimal solution?
Holland’s Theorem
•Basic principle:
• A schema represents
several points in the space.
• A point is represented by
several schemata.
Operations of the GA and Schemes
• Two definitions:
• Schema order: (1,1,0, *, *, *, 1, *, *) => order 4
• Schema defining length: (1,1,0, *, *, *, 1, *, *) => length 6
• The order of a schema is the number of fixed positions
(the number of zeros and ones).
• The defining length of a schema is the distance between the first and
the last fixed position of the string.
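Both definitions can be checked mechanically; '*' is the don't-care symbol:

```python
def schema_order(schema):
    """Order = number of fixed (non-'*') positions."""
    return sum(1 for s in schema if s != '*')

def defining_length(schema):
    """Defining length = distance between the first and last fixed position."""
    fixed = [i for i, s in enumerate(schema) if s != '*']
    return fixed[-1] - fixed[0]

h = ('1', '1', '0', '*', '*', '*', '1', '*', '*')
print(schema_order(h), defining_length(h))   # 4 6
```

The fixed positions of h are indices 1, 2, 3 and 7 (1-based), giving order 4 and defining length 7 − 1 = 6, matching the slide.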
Operations of the GA and Schemes
• Selection: good survival for schemata that represent
good individuals.
• Crossover: good survival for schemata of short defining length.
• Mutation: good survival for schemata of low order.
Conclusion of the scheme theorem
•Short, low-order schemata with
above-average fitness receive an
exponentially increasing number
of individuals in successive
generations.
Computational aspects
• A large number of fitness evaluations can be
computationally expensive.
• GAs are inherently parallel by nature.
• There are several good schemes for parallel
computing.
Parallel schemes
Genetic Algorithms with continuous
parameters
• One of the problems with binary coding in genetic
algorithms is that you do not normally take advantage
of all the precision of the computer.
• What can be done if you want to use all the possible
precision?
• The answer is to represent the parameters in floating
point.
• When the variable is continuous, this is the most
natural way to represent the numbers. It also has the
advantage that a smaller memory size is required than
for binary storage.
Genetic Algorithms with continuous
parameters
• Operators do not usually work at the bit level as in the
binary case, but work at the level of the whole floating-
point number:
• Selection: The chromosomes are sorted according to their
fitness and the best members of the
population are kept.
• Crossover: In the simplest methods, one or more points are
chosen on the chromosome to mark the crossover points.
The parameters between these points are then simply
exchanged between the two parents.
Genetic Algorithms with continuous
parameters
• Mutation: With a certain probability, which is usually
between 1% and 20%, the chromosomes that are going to
be mutated are selected.
• Next, the parameters of the chromosome that are to be
mutated are randomly selected.
• Finally, each parameter to be mutated is replaced by
another new random parameter or another new random
parameter is added.
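The three real-coded operators just described can be sketched as follows. This assumes minimisation, truncation selection, and a one-point parameter exchange; the fitness function, bounds and rates are illustrative.

```python
import random

def select(pop, fitness, keep):
    """Sort chromosomes by fitness (minimisation) and keep the best."""
    return sorted(pop, key=fitness)[:keep]

def crossover(p1, p2):
    """Exchange the parameters after a random cut point."""
    k = random.randint(1, len(p1) - 1)
    return p1[:k] + p2[k:], p2[:k] + p1[k:]

def mutate(chrom, pm=0.1, lo=-5.0, hi=5.0):
    """Replace each parameter, with probability pm, by a new random one."""
    return [random.uniform(lo, hi) if random.random() < pm else g
            for g in chrom]

random.seed(3)
f = lambda c: sum(g * g for g in c)              # illustrative fitness
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
parents = select(pop, f, keep=4)
c1, c2 = crossover(parents[0], parents[1])
c1 = mutate(c1)
print(f(c1))
```

Note that the operators act on whole floating-point parameters, never on their bit patterns, which is the point of this slide.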
Some Genetic Algorithm
Terminology
• Fitness Functions
• The fitness function is the function you want to
optimize. For standard optimization algorithms, this is
known as the objective function.
• The toolbox tries to find the minimum of the fitness
function. You can write the fitness function as an M-file
and pass it as a function handle input argument to the
main genetic algorithm function.
Some Genetic Algorithm
Terminology
• Individuals
• An individual is any point to which you can apply the
fitness function. The value of the fitness function for an
individual is its score.
• For example, if the fitness function is
f(x1, x2, x3) = (2x1 + 1)² + (3x2 + 4)² + (x3 − 2)²,
the vector (2, −3, 1), whose length is the number of variables
in the problem, is an individual. The score of the individual
(2, −3, 1) is f(2, −3, 1) = 51. An individual is sometimes
referred to as a genome and the vector entries of an
individual as genes.
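The stated score can be verified directly. The explicit quadratic below is an assumption (the slide omits the expression), chosen to be consistent with the stated score f(2, −3, 1) = 51:

```python
def fitness(x1, x2, x3):
    # Assumed quadratic form, consistent with the stated score of 51;
    # the slide itself does not spell out the expression.
    return (2 * x1 + 1) ** 2 + (3 * x2 + 4) ** 2 + (x3 - 2) ** 2

individual = (2, -3, 1)      # a genome; its entries are genes
print(fitness(*individual))  # 51
```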
Some Genetic Algorithm
Terminology
• Populations and Generations
• A population is an array of individuals. For example, if the size of
the population is 100 and the number of variables in the fitness
function is 3, you represent the population by a 100-by-3 matrix.
• The same individual can appear more than once in the
population. For example, the individual (2, 3, 1) can appear in
more than one row of the array.
• At each iteration, the genetic algorithm performs a series of
computations on the current population to produce a new
population. Each successive population is called a new generation.
Some Genetic Algorithm
Terminology
• Diversity
• Diversity refers to the average distance between individuals in a
population. A population has high diversity if the average distance
is large; otherwise it has low diversity. In the figure, the population
on the left has high diversity, while the population on the right has
low diversity.
• Diversity is essential to the genetic algorithm because it enables
the algorithm to search a larger region of the space.
Some Genetic Algorithm
Terminology
• Fitness Values and Best Fitness Values
• The fitness value of an individual is the value of the
fitness function for that individual.
• Because the toolbox finds the minimum of the
fitness function, the best fitness value for a
population is the smallest fitness value for any
individual in the population.
Some Genetic Algorithm
Terminology
• Parents and Children
• To create the next generation, the genetic
algorithm selects certain individuals in the current
population, called parents, and uses them to create
individuals in the next generation, called children.
• Typically, the algorithm is more likely to select
parents that have better fitness values.
GENETIC ALGORITHM
What is a Genetic Algorithm?
 Uses concepts from evolutionary
biology
 Start with an initial generation of
candidate solutions that are tested
against the objective function
 Subsequent generations evolve
from the first through selection,
crossover and mutation
How Evolution Works – Binary Case
 Selection
– Retain the best performing bit strings from one generation to the next. Favor these for
reproduction
– parent1 = [ 1 0 1 0 0 1 1 0 0 0 ]
– parent2 = [ 1 0 0 1 0 0 1 0 1 0 ]
 Crossover
– parent1 = [ 1 0 1 0 0 1 1 0 0 0 ]
– parent2 = [ 1 0 0 1 0 0 1 0 1 0 ]
– child = [ 1 0 0 0 0 1 1 0 1 0 ]
 Mutation
– parent = [ 1 0 1 0 0 1 1 0 0 0 ]
– child = [ 0 1 0 1 0 1 0 0 0 1 ]
83
-3 -2 -1 0 1 2 3
-3
-2
-1
0
1
2
3
Genetic Algorithm – Iteration 1
Evaluate initial population
x
y
84
-3 -2 -1 0 1 2 3
-3
-2
-1
0
1
2
3
Genetic Algorithm – Iteration 1
Select a few good solutions for reproduction
x
y
85
-3 -2 -1 0 1 2 3
-3
-2
-1
0
1
2
3
Genetic Algorithm – Iteration 2
Generate new population and evaluate
x
y
86
-3 -2 -1 0 1 2 3
-3
-2
-1
0
1
2
3
Genetic Algorithm – Iteration 2
x
y
87
-3 -2 -1 0 1 2 3
-3
-2
-1
0
1
2
3
Genetic Algorithm – Iteration 3
x
y
88
-3 -2 -1 0 1 2 3
-3
-2
-1
0
1
2
3
Genetic Algorithm – Iteration 3
x
y
89
-3 -2 -1 0 1 2 3
-3
-2
-1
0
1
2
3
Genetic Algorithm – Iteration N
Continue process until stopping criteria are met
x
y
Solution found
90
Genetic Algorithm – Peaks Function
CI_L02_Optimization_ag2_eng.pdf
Illustration: randomly initialized vectors evolve until most vectors are near the global optima.
DE is an Evolutionary Algorithm: a stochastic, population-based algorithm for continuous
function optimization (Storn and Price, 1995).
This class also includes Genetic Algorithms, Evolution Strategies and Evolutionary
Programming. DE was developed to optimize real-parameter, real-valued functions.
 Global optimization is necessary in fields such as engineering, statistics and finance.
Many practical problems have objective functions that are non-differentiable, non-
continuous, non-linear, noisy, flat, multi-dimensional, or have many local minima,
constraints or stochasticity.
 Such problems are difficult, if not impossible, to solve analytically.
 DE can be used to find approximate solutions to such problems.
Solutions are represented as vectors of size D, with each value taken from some domain
[X_MIN, X_MAX].
We maintain a population of size NP.
INITIALISATION
MUTATION
RECOMBINATION/CROSSOVER
SELECTION
Different values of rand_i,j(0,1) are initialised for each i and j.
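Initialisation can be sketched as drawing each parameter uniformly in its domain, with an independent rand_i,j(0,1) per entry; NP, D and the bounds below are illustrative.

```python
import random

def initialise(np_, d, x_min, x_max):
    """x[i][j] = x_min + rand_ij(0,1) * (x_max - x_min)."""
    return [[x_min + random.random() * (x_max - x_min) for _ in range(d)]
            for _ in range(np_)]

random.seed(4)
pop = initialise(np_=10, d=3, x_min=-10.0, x_max=10.0)
print(len(pop), len(pop[0]))   # 10 3
```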
MUTATION
RECOMBINATION/CROSSOVER
SELECTION
 Mutation is a recombination of vector differentials that generates the mutant vector.
 It explores the search space.
 X'_i(G) = X_a(G) + F · (X_b(G) − X_c(G))
Here a, b and c are randomly chosen, mutually distinct vector indices, all different from i.
The mutant vector is constructed through a specific mutation operation based on adding
the difference between randomly selected elements of the population to another element.
(Journal of Global Optimization, Rainer Storn and Kenneth Price)
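The DE/rand/1 mutant construction X'_i = X_a + F·(X_b − X_c) above, as a sketch (the population and F are illustrative):

```python
import random

def mutant(pop, i, f=0.8):
    """X'_i = X_a + F * (X_b - X_c), with a, b, c distinct and != i."""
    a, b, c = random.sample([k for k in range(len(pop)) if k != i], 3)
    return [pop[a][j] + f * (pop[b][j] - pop[c][j])
            for j in range(len(pop[i]))]

random.seed(5)
pop = [[random.uniform(-10, 10) for _ in range(3)] for _ in range(6)]
v = mutant(pop, i=0)
print(v)
```

`random.sample` guarantees a, b and c are mutually distinct, and excluding index i from the candidate list keeps the base and difference vectors different from the target.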
 DE/rand/1/bin
 DE/best/2/bin
 DE/best/1/exp
 DE/current-to-rand/1/exp
(Notation DE/x/y/z: x = how the base vector is chosen, y = number of difference vectors, z = crossover scheme.)
INITIALISATION
MUTATION
RECOMBINATION/CROSSOVER
SELECTION
Crossover is a genetic operator used to vary the programming of a chromosome
or chromosomes from one generation to the next. It is analogous to reproduction, upon
which genetic algorithms are based.
The crossover operator combines components from the current element and from the
mutant vector, according to a control parameter CR ∈ [0, 1].
 It exploits the solution space.
Diagram: the trial vector combines the target and mutant vectors according to the crossover parameter CR.
• Non-consecutive binomial crossover
• Consecutive exponential crossover
• Consecutive binomial crossover
• Non-consecutive exponential crossover
 If the offspring replicates the mutant v_i,n, a randomly chosen parameter of the
target x_i,n (x_i,r,n) will replace the corresponding parameter of the child c_i,n (c_i,j,n).
On the other hand, if c_i,n inherits no parameter from v_i,n, and hence no evolution happens,
a randomly chosen parameter of the child c_i,n (c_i,j,n) will be replaced by the corresponding
parameter of the mutant v_i,n (v_i,j,n).

c_i,j,n = { v_i,j,n   if Rand(0,1) ≤ CR
          { x_i,j,n   otherwise

Non-consecutive binomial crossover
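Non-consecutive binomial crossover in sketch form: each parameter comes from the mutant with probability CR, otherwise from the target. (Practical DE implementations also force one randomly chosen index to come from the mutant so the trial always differs from the target; that safeguard is omitted here for brevity.)

```python
import random

def binomial_crossover(target, mutant, cr=0.9):
    """c_j = mutant_j if rand(0,1) <= CR else target_j."""
    return [m if random.random() <= cr else t
            for t, m in zip(target, mutant)]

random.seed(6)
target = [0.68, 0.89, 0.04]
mutant = [1.58, 1.29, 0.35]
child = binomial_crossover(target, mutant)
print(child)
```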
In this scheme, an integer r is first randomly chosen from [1, N]; it is the starting point for
exponential crossover.
 Parameter c_i,r,n of the offspring c_i,n is taken from v_i,r,n of the mutant v_i,n.
 The parameters of the offspring after r (in a cyclic sense) depend on a series of
Bernoulli experiments with success probability CR.
The mutant keeps donating its parameters to the offspring until a Bernoulli
experiment fails for the first time or the crossover length has already reached N − 1. The
remaining parameters of the child come from x_i,n.
Consecutive exponential crossover
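Consecutive exponential crossover as just described, in sketch form: pick a starting point r, then copy mutant parameters cyclically until a Bernoulli(CR) experiment fails or the crossover length reaches N − 1 (vectors and CR are illustrative).

```python
import random

def exponential_crossover(target, mutant, cr=0.9):
    """Copy mutant parameters cyclically from a random start r until a
    Bernoulli(CR) experiment fails or N-1 parameters have been copied."""
    n = len(target)
    child = target[:]                  # remaining parameters stay from target
    j = random.randrange(n)            # starting point r
    child[j] = mutant[j]
    length = 1
    while length < n - 1 and random.random() <= cr:
        j = (j + 1) % n                # cyclic sense
        child[j] = mutant[j]
        length += 1
    return child

random.seed(7)
target = [0.68, 0.89, 0.04]
mutant = [1.58, 1.29, 0.35]
child = exponential_crossover(target, mutant, cr=0.5)
print(child)
```

Unlike the binomial scheme, the donated parameters here form one consecutive (cyclic) block, so the expected crossover length is geometric in CR.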
A series of Bernoulli experiments is carried out; the number of successful experiments is
the crossover length L.
A starting point r is then randomly chosen between 1 and N.
c_i,n inherits L parameters of the mutant v_i,n consecutively (in a cyclic sense) from the
starting point r (inclusive).
The remaining parameters of c_i,n come from x_i,n.
Consecutive binomial crossover
A series of Bernoulli experiments with probability CR is carried out in the same way as
described for consecutive exponential crossover; it determines the crossover length L for
non-consecutive exponential crossover.
L parameters are randomly chosen from v_i,n and inherited by c_i,n; x_i,n donates the
remaining parameters to c_i,n.
Non-consecutive exponential crossover
INITIALISATION
MUTATION
RECOMBINATION/CROSSOVER
SELECTION
 The "survival of the fittest" principle is applied in selection.
 The trial (offspring) vector is compared with the target vector, and the one
with the better fitness is admitted to the next generation.
SELECTION
Consider the two-dimensional function
f(x, y) = x² + y²
Let's start with 5 candidate solutions randomly initialised in the range (−10, 10):
• X1,0 = [2, −1]
• X2,0 = [6, 1]
• X3,0 = [−3, 5]
• X4,0 = [−2, 6]
• X5,0 = [6, −7]
For the first vector X1, randomly select three other vectors, say X2, X4 and X5.
The mutant vector is formed as V1,0 = X2,0 + F·(X4,0 − X5,0), here with F = 0.8.
The trial offspring vector U1,0 is formed by exchanging components of V1,0 with the target
vector X1,0.
If we set CR = 0.9:
Let rand(0,1) = 0.6; since 0.6 < 0.9, U1,1,0 = V1,1,0 = 6 + 0.8·(−8) = −0.4.
Again, let rand(0,1) = 0.95 > 0.9; hence U1,2,0 = X1,2,0 = −1.
The trial (offspring) is U1,0 = [−0.4, −1].
Finally, fitness of target (parent) = f(2, −1) = 5,
fitness of trial (offspring) = f(−0.4, −1) = 1.16,
so the trial replaces the target in the next generation.
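The worked example can be reproduced end to end; the two rand(0,1) draws (0.6 and 0.95) are fixed here to match the slide rather than drawn at random.

```python
F, CR = 0.8, 0.9
f = lambda x, y: x**2 + y**2

x1 = [2, -1]                       # target X1,0
x2, x4, x5 = [6, 1], [-2, 6], [6, -7]

# mutation: V1 = X2 + F * (X4 - X5)
v1 = [x2[j] + F * (x4[j] - x5[j]) for j in range(2)]

# crossover with the slide's two rand(0,1) draws
rands = [0.6, 0.95]
u1 = [v1[j] if rands[j] <= CR else x1[j] for j in range(2)]
print(u1)                          # trial ~ [-0.4, -1]
print(f(*x1), f(*u1))              # 5 vs ~1.16: the trial wins selection
```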
We discuss a simple numerical example to illustrate the DE algorithm.
Minimize f(x) = x1 + x2 + x3

       Ind. 1   Ind. 2   Ind. 3   Ind. 4   Ind. 5   Ind. 6
x1     0.68     0.92     0.22     0.12     0.40     0.94
x2     0.89     0.92     0.14     0.09     0.81     0.63
x3     0.04     0.33     0.40     0.05     0.83     0.13
f(x)   1.61     2.17     0.76     0.26     2.04     1.70

Here individuals 2 and 4 are randomly chosen to form the difference vector, weighted by F = 0.80:

       Ind. 2    Ind. 4    Difference   Weighted D.V. (×0.80)
x1     0.92   −  0.12   =  0.80         0.64
x2     0.92   −  0.09   =  0.83         0.66
x3     0.33   −  0.05   =  0.28         0.22

The weighted difference vector is added to individual 6 to form the mutant vector:

       Weighted D.V.   Ind. 6   Mutant
x1     0.64            0.94     1.58
x2     0.66            0.63     1.29
x3     0.22            0.13     0.35

With CR = 0.50, crossover between the target (individual 1) and the mutant yields the trial vector:

       Target   Mutant   Trial
x1     0.68     1.58     1.58
x2     0.89     1.29     0.89
x3     0.04     0.35     0.04
f(x)   1.61     3.22     2.51
 NP = 5 to 10 times the number of parameters in a vector.
 If solutions get stuck, take F = 0.5 and then increase F or NP.
 F ∈ [0.4, 1] is a very effective range.
 CR = 0.9 or 1 for a quick solution.
Summary of the algorithm (taken from: Water Resources Research Report by Vasan Arunachalam)
[1] R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global
optimization over continuous spaces," J. Glob. Optim., vol. 11, pp. 341–359, 1997.
[2] S. Das and P. N. Suganthan, "Differential evolution: foundations, perspectives, and
applications," SSCI, 2011.
[3] C. Lin, A. Qing and Q. Feng, "A comparative study of crossover in differential
evolution," pp. 675–703, 2011.
[4] D. Zaharie, "A comparative analysis of crossover algorithms in differential evolution,"
Proc. 2007, pp. 171–181, 2007.
[5] http://www1.icsi.berkeley.edu/~storn/code.html
CI_L02_Optimization_ag2_eng.pdf

More Related Content

PPTX
Evolutionary computing - soft computing
PPTX
CI_L11_Optimization_ag2_eng.pptx
PPTX
PPTX
MACHINE LEARNING - GENETIC ALGORITHM
PPTX
Genetic algorithm optimization technique.pptx
PPTX
Genetic Algorithms : A class of Evolutionary Algorithms
PPTX
Genetic algorithm
PPT
Genetic algorithm
Evolutionary computing - soft computing
CI_L11_Optimization_ag2_eng.pptx
MACHINE LEARNING - GENETIC ALGORITHM
Genetic algorithm optimization technique.pptx
Genetic Algorithms : A class of Evolutionary Algorithms
Genetic algorithm
Genetic algorithm

Similar to CI_L02_Optimization_ag2_eng.pdf (20)

PDF
CSA 3702 machine learning module 4
DOCX
introduction to machine learning unit iV
PPTX
Genetic algorithm
PPTX
Genetic algorithms
PPTX
GA of a Paper 2012.pptx
PPTX
Genetic Algorithm
PDF
Genetic Algorithms in Artificial Intelligence
PPTX
Genetic algorithm raktim
PPTX
Genetic algorithm_raktim_IITKGP
PDF
Parallel evolutionary approach paper
PPTX
2020 6 16_ga_introduction
PPTX
PDF
A Review On Genetic Algorithm And Its Applications
PPTX
introduction of genetic algorithm
PDF
RM 701 Genetic Algorithm and Fuzzy Logic lecture
PPT
Genetic-Algorithms.ppt
PPT
Genetic-Algorithms.ppt
PPT
Genetic-Algorithms for machine learning and ai.ppt
PPT
AI_PPT_Genetic-Algorithms.ppt
PPT
Genetic-Algorithms forv artificial .ppt
CSA 3702 machine learning module 4
introduction to machine learning unit iV
Genetic algorithm
Genetic algorithms
GA of a Paper 2012.pptx
Genetic Algorithm
Genetic Algorithms in Artificial Intelligence
Genetic algorithm raktim
Genetic algorithm_raktim_IITKGP
Parallel evolutionary approach paper
2020 6 16_ga_introduction
A Review On Genetic Algorithm And Its Applications
introduction of genetic algorithm
RM 701 Genetic Algorithm and Fuzzy Logic lecture
Genetic-Algorithms.ppt
Genetic-Algorithms.ppt
Genetic-Algorithms for machine learning and ai.ppt
AI_PPT_Genetic-Algorithms.ppt
Genetic-Algorithms forv artificial .ppt
Ad

More from SantiagoGarridoBulln (16)

PDF
Genetic Algorithms. Algoritmos Genéticos y cómo funcionan.
PDF
Optimum Engineering Design - Day 2b. Classical Optimization methods
PDF
Optimum engineering design - Day 6. Classical optimization methods
PDF
Optimum engineering design - Day 5. Clasical optimization methods
PDF
Optimum Engineering Design - Day 4 - Clasical methods of optimization
PDF
OptimumEngineeringDesign-Day2a.pdf
PDF
OptimumEngineeringDesign-Day-1.pdf
PDF
CI_L01_Optimization.pdf
PDF
Lecture_Slides_Mathematics_06_Optimization.pdf
PDF
OptimumEngineeringDesign-Day7.pdf
PDF
CI L11 Optimization 3 GlobalOptimization.pdf
PDF
optmizationtechniques.pdf
PDF
complete-manual-of-multivariable-optimization.pdf
PDF
slides-linear-programming-introduction.pdf
PDF
bv_cvxslides (1).pdf
PDF
Optim_methods.pdf
Genetic Algorithms. Algoritmos Genéticos y cómo funcionan.
Optimum Engineering Design - Day 2b. Classical Optimization methods
Optimum engineering design - Day 6. Classical optimization methods
Optimum engineering design - Day 5. Clasical optimization methods
Optimum Engineering Design - Day 4 - Clasical methods of optimization
OptimumEngineeringDesign-Day2a.pdf
OptimumEngineeringDesign-Day-1.pdf
CI_L01_Optimization.pdf
Lecture_Slides_Mathematics_06_Optimization.pdf
OptimumEngineeringDesign-Day7.pdf
CI L11 Optimization 3 GlobalOptimization.pdf
optmizationtechniques.pdf
complete-manual-of-multivariable-optimization.pdf
slides-linear-programming-introduction.pdf
bv_cvxslides (1).pdf
Optim_methods.pdf
Ad

Recently uploaded (20)

PPTX
Artificial Intelligence
PPTX
Fundamentals of safety and accident prevention -final (1).pptx
PPTX
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
PPTX
Geodesy 1.pptx...............................................
PPTX
UNIT 4 Total Quality Management .pptx
PPT
Project quality management in manufacturing
PPTX
Current and future trends in Computer Vision.pptx
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
PDF
Automation-in-Manufacturing-Chapter-Introduction.pdf
PPTX
Sustainable Sites - Green Building Construction
PDF
Human-AI Collaboration: Balancing Agentic AI and Autonomy in Hybrid Systems
PPTX
Safety Seminar civil to be ensured for safe working.
PPT
Total quality management ppt for engineering students
PDF
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
PPTX
Foundation to blockchain - A guide to Blockchain Tech
PPT
Introduction, IoT Design Methodology, Case Study on IoT System for Weather Mo...
PPTX
additive manufacturing of ss316l using mig welding
PPTX
Construction Project Organization Group 2.pptx
PPTX
CYBER-CRIMES AND SECURITY A guide to understanding
PDF
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
Artificial Intelligence
Fundamentals of safety and accident prevention -final (1).pptx
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
Geodesy 1.pptx...............................................
UNIT 4 Total Quality Management .pptx
Project quality management in manufacturing
Current and future trends in Computer Vision.pptx
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
Automation-in-Manufacturing-Chapter-Introduction.pdf
Sustainable Sites - Green Building Construction
Human-AI Collaboration: Balancing Agentic AI and Autonomy in Hybrid Systems
Safety Seminar civil to be ensured for safe working.
Total quality management ppt for engineering students
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
Foundation to blockchain - A guide to Blockchain Tech
Introduction, IoT Design Methodology, Case Study on IoT System for Weather Mo...
additive manufacturing of ss316l using mig welding
Construction Project Organization Group 2.pptx
CYBER-CRIMES AND SECURITY A guide to understanding
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...

CI_L02_Optimization_ag2_eng.pdf

  • 1. GENETIC AND EVOLUTIONARY ALGORITHMS DPTO DE INGENIERÍA DE SISTEMAS Y AUTOMÁTICA UC3M
  • 2. Genetic algorithms • John Holland, (1975). "Adaptation in natural and artificial systems.” • Algorithms that manage populations consisting of coded solutions of problems. • The search for good solutions is made in the space of codified solutions. • Manipulation of populations: selection, crossing and mutation.
  • 3. Features • They do not work with the objects, but with a coding of them. • The AG carry out a search through a whole generation of objects, they do not look for a single element. • They use a health function that gives us information on how adapted they are. • The transition rules are non-deterministic probabilistic.
  • 6. Cycle of Genetic Algorithms
  • 7. • Key idea: Give preference to the best individuals, allowing them to pass their genes to the next generation. Selection operator The goodness of an individual is calculated with the fitness function.
  • 10. Crossover operator • Two individuals of the population are chosen through the selection operator. • A crossing place is randomly chosen. • The values of the two chains are exchanged at this point. • By recombining portions of good individuals, even better individuals are created.
  • 12. Operation of the crossing by a point • Once the parents are selected, with a Pc probability, a crossing point in the parents' chains is chosen and the two children are obtained
  • 13. Mutation • Mutation operator: • With a certain low probability, a certain portion of the new individuals can mutate their bits. • Its purpose is to maintain diversity within the population and prevent premature convergence. • Mutation and selection (no crossover) create a maximum slope and noise tolerant optimization algorithm.
  • 14. Dominance •In nature, most of the species associate a genotype with a pair of chromosomes, where certain alleles dominate over others (recessive), so that the phenotype is determined by the combination of these two chromosomes and by predominance of alleles .
  • 15. Domination map • Hollstein developed a system of trialélico domination including a third allele to have a dominant 1 and a recessive 1.
  • 16. Classical Algorithms Genetic algorithms They generate a single point in each iteration. The sequence of points approximates the optimal solution. GThere will be a population of points in each iteration. The best point of the population approximates the optimal solution. Select the next point in the sequence for a deterministic computation. Select the next population by means of a computer that uses a random number generator.
  • 17. ¿Por qué funcionan los Algoritmos Genéticos? • Are the AG bits exchanged only? • What is behind them? • Holland created a theorem, called • "Holland's schemes theorem". • There are some other theorems, some based in the analysis of Markov chains: • Is there a chain of different solutions that allows reaching the optimal solution?
  • 18. Holland’s Theorem •Basic principle: • A scheme represents several points in space. • A point is represented by several schemes.
  • 19. Operations of the GA and Schemes • Two definitions: • Schema order: (1,1,0, *, *, *, 1, *, *) => order 4 • Length of the scheme: (1,1,0, *, *, *, 1, *, *) => length 6 • The order of a scheme is the number of fixed positions • (the number of zeros and ones). • The length of the scheme is the distance between the first and the last specific position of the chain.
  • 20. Operations of the GA and Schemes • Selection: good survival for schemes that represent good individuals. • Crossing: good survival for short length schemes. • Mutation: good survival for low order schemes.
  • 21. Conclusion of the scheme theorem •Short schemes, low order get better average. • The schemes receive an exponentially increasing number of individuals
  • 22. Computational aspects • A large number of health assessments can be computationally expensive. • They are completely parallel by nature. • There are several good schemes for parallel computing.
  • 24. Genetic Algorithms with continuous parameters • One of the problems with binary coding in genetic algorithms is that you do not normally take advantage of all the precision of the computer. • What can be done if you want to use all the possible precision? • The answer is to represent the parameters in floating point. • When the variable is continuous, this is the most natural way to represent the numbers. It also has the advantage that a smaller memory size is required than for binary storage.
  • 25. Genetic Algorithms with continuous parameters • Operators do not usually work at the bit level as in the binary case, but work at the level of the whole floating- point number: • Selection: The chromosomes are ordered according to their health and we are left with the best members of the population. • Crossing: In the simplest methods, one or more points are chosen on the chromosome to mark the crossing points. Then the parameters between these points are simply exchanged between the two parents.
  • 26. Genetic Algorithms with continuous parameters • Mutation: With a certain probability, which is usually between 1% and 20%, the chromosomes that are going to be mutated are selected. • Next, the parameters of the chromosome that are to be mutated are randomly selected. • Finally, each parameter to be mutated is replaced by another new random parameter or another new random parameter is added.
  • 27. Some Genetic Algorithm Terminology • Fitness Functions • The fitness function is the function you want to optimize. For standard optimization algorithms, this is known as the objective function. • The toolbox tries to find the minimum of the fitness function. You can write the fitness function as an M-file and pass it as a function handle input argument to the main genetic algorithm function.
  • 28. Some Genetic Algorithm Terminology • Individuals • An individual is any point to which you can apply the fitness function. The value of the fitness function for an individual is its score. • For example, if the fitness function is the vector (2, 3, 1), whose length is the number of variables in the problem, is an individual. The score of the individual (2, 3, 1) is f(2, -3, 1) = 51. An individual is sometimes referred to as a genome and the vector entries of an individual as genes.
  • 29. Some Genetic Algorithm Terminology • Populations and Generations • A population is an array of individuals. For example, if the size of the population is 100 and the number of variables in the fitness function is 3, you represent the population by a 100-by-3 matrix. • The same individual can appear more than once in the population. For example, the individual (2, 3, 1) can appear in more than one row of the array. • At each iteration, the genetic algorithm performs a series of computations on the current population to produce a new population. Each successive population is called a new generation.
  • 30. Some Genetic Algorithm Terminology • Diversity • Diversity refers to the average distance between individuals in a population. A population has high diversity if the average distance is large; otherwise it has low diversity. In the figure, the population on the left has high diversity, while the population on the right has low diversity. • Diversity is essential to the genetic algorithm because it enables the algorithm to search a larger region of the space.
  • 31. Some Genetic Algorithm Terminology • Fitness Values and Best Fitness Values • The fitness value of an individual is the value of the fitness function for that individual. • Because the toolbox finds the minimum of the fitness function, the best fitness value for a population is the smallest fitness value for any individual in the population.
  • 32. Some Genetic Algorithm Terminology • Parents and Children • To create the next generation, the genetic algorithm selects certain individuals in the current population, called parents, and uses them to create individuals in the next generation, called children. • Typically, the algorithm is more likely to select parents that have better fitness values.
  • 34. 81 What is a Genetic Algorithm?  Uses concepts from evolutionary biology  Start with an initial generation of candidate solutions that are tested against the objective function  Subsequent generations evolve from the 1st through selection, crossover and mutation
  • 35. 82 How Evolution Works – Binary Case  Selection – Retain the best performing bit strings from one generation to the next. Favor these for reproduction – parent1 = [ 1 0 1 0 0 1 1 0 0 0 ] – parent2 = [ 1 0 0 1 0 0 1 0 1 0 ]  Crossover – parent1 = [ 1 0 1 0 0 1 1 0 0 0 ] – parent2 = [ 1 0 0 1 0 0 1 0 1 0 ] – child = [ 1 0 0 0 0 1 1 0 1 0 ]  Mutation – parent = [ 1 0 1 0 0 1 1 0 0 0 ] – child = [ 0 1 0 1 0 1 0 0 0 1 ]
  • 36. 83 -3 -2 -1 0 1 2 3 -3 -2 -1 0 1 2 3 Genetic Algorithm – Iteration 1 Evaluate initial population x y
  • 37. 84 -3 -2 -1 0 1 2 3 -3 -2 -1 0 1 2 3 Genetic Algorithm – Iteration 1 Select a few good solutions for reproduction x y
  • 38. 85 -3 -2 -1 0 1 2 3 -3 -2 -1 0 1 2 3 Genetic Algorithm – Iteration 2 Generate new population and evaluate x y
  • 39. 86 -3 -2 -1 0 1 2 3 -3 -2 -1 0 1 2 3 Genetic Algorithm – Iteration 2 x y
  • 40. 87 -3 -2 -1 0 1 2 3 -3 -2 -1 0 1 2 3 Genetic Algorithm – Iteration 3 x y
  • 41. 88 -3 -2 -1 0 1 2 3 -3 -2 -1 0 1 2 3 Genetic Algorithm – Iteration 3 x y
  • 42. 89 -3 -2 -1 0 1 2 3 -3 -2 -1 0 1 2 3 Genetic Algorithm – Iteration N Continue process until stopping criteria are met x y Solution found
• 43. Genetic Algorithm – Peaks Function
  • 46. Most vectors are near global optima
• 47. Differential Evolution (DE) is an Evolutionary Algorithm: a stochastic, population-based algorithm for continuous function optimization (Storn and Price, 1995). This class also includes Genetic Algorithms, Evolutionary Strategies and Evolutionary Programming. DE was developed to optimize real-valued functions of real parameters.
• 48. Global optimization is necessary in fields such as engineering, statistics and finance. Many practical problems have objective functions that are non-differentiable, non-continuous, non-linear, noisy, flat, multi-dimensional, or that have many local minima, constraints or stochasticity. Such problems are difficult if not impossible to solve analytically; DE can be used to find approximate solutions to them.
• 49. Solutions are represented as vectors of size D, with each component taken from some domain [X_MIN, X_MAX].
  • 50. We will maintain a population of size NP
• 52. INITIALISATION: a different value of rand_{i,j}(0,1) is drawn for each individual i and each component j.
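The initialisation step can be sketched as follows; the population size, dimensionality and bounds in the example call are illustrative assumptions:

```python
import random

def initialise(NP, D, x_min, x_max):
    """Build NP vectors of size D; a fresh rand(0,1) is drawn for every i and j."""
    return [[x_min[j] + random.random() * (x_max[j] - x_min[j])
             for j in range(D)]
            for i in range(NP)]

# Example: 10 vectors of 3 components, each drawn uniformly in [-10, 10).
population = initialise(NP=10, D=3, x_min=[-10] * 3, x_max=[10] * 3)
```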
• 55. Mutation is the recombination of vector differentials to generate a mutant vector; it explores the search space. X′_i(G) = X_a(G) + F · (X_b(G) − X_c(G)), where a, b, c are randomly chosen indices, distinct from each other and from i. The mutant vector is constructed by adding a scaled difference between randomly selected elements of the population to another element.
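The mutation formula above translates directly into code; the function name and the default F = 0.8 are assumptions for illustration:

```python
import random

def mutant_vector(population, i, F=0.8):
    """X'_i = X_a + F * (X_b - X_c), with a, b, c distinct and different from i."""
    a, b, c = random.sample([k for k in range(len(population)) if k != i], 3)
    return [population[a][j] + F * (population[b][j] - population[c][j])
            for j in range(len(population[i]))]

pop = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
mutant = mutant_vector(pop, 0)
```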
• 56. (Journal of Global Optimization, Rainer Storn and Kenneth Price)
• 58. Common DE variants, written DE/x/y/z (base vector / number of difference vectors / crossover scheme): DE/rand/1/bin, DE/best/2/bin, DE/best/1/exp, DE/current-to-rand/1/exp.
• 63. Crossover is a genetic operator used to vary the programming of a chromosome or chromosomes from one generation to the next. It is analogous to reproduction, upon which genetic algorithms are based. The crossover operator combines components from the current element and from the mutant vector according to a control parameter CR ∈ [0, 1]; it exploits the solution space.
• 66. Four crossover schemes: non-consecutive binomial, consecutive exponential, consecutive binomial, and non-consecutive exponential crossover.
• 67. Binomial crossover: c_{i,j,n} = v_{i,j,n} if Rand(0,1) ≤ CR, otherwise x_{i,j,n}. If the offspring c_{i,n} would replicate the mutant v_{i,n} exactly, a randomly chosen parameter x_{i,r,n} of the target replaces the corresponding parameter of the child. Conversely, if c_{i,n} inherits no parameter from v_{i,n}, so that no evolution happens, a randomly chosen parameter of the child is replaced by the corresponding parameter v_{i,j,n} of the mutant.
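A common way to implement the binomial rule is sketched below. It enforces the "at least one parameter from the mutant" repair by forcing one random index j_rand to come from the mutant; the reverse repair described on the slide (when the child replicates the mutant exactly) is omitted for brevity:

```python
import random

def binomial_crossover(target, mutant, CR=0.9):
    """c_j = v_j if rand(0,1) <= CR, else x_j; index j_rand always comes
    from the mutant so the child inherits at least one mutant parameter."""
    D = len(target)
    j_rand = random.randrange(D)
    return [mutant[j] if (random.random() <= CR or j == j_rand) else target[j]
            for j in range(D)]
```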
• 69. In this scheme, an integer r is first randomly chosen from [1, N]; it is the starting point for exponential crossover. Parameter c_{i,r,n} of the offspring c_{i,n} is taken from v_{i,r,n} of the mutant v_{i,n}. The parameters of the offspring after r (in the cyclic sense) depend on a series of Bernoulli experiments of probability CR: the mutant keeps donating its parameters to the offspring until the Bernoulli experiment is unsuccessful for the first time or the crossover length has already reached N − 1. The remaining parameters of the child come from x_{i,n}.
• 70. Consecutive exponential crossover (diagram: a starting point and a run of Bernoulli experiments decide which parameters flow from x_{i,n} and v_{i,n} into c_{i,n}).
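The consecutive exponential scheme can be sketched as follows; per the slide's description, the crossover length is capped at N − 1 so at least one parameter always comes from the target (the function name is an illustrative choice):

```python
import random

def exponential_crossover(target, mutant, CR=0.9):
    """Copy the mutant's parameter at a random start r, then keep copying
    cyclically while a Bernoulli(CR) experiment succeeds, up to length N - 1."""
    N = len(target)
    child = list(target)
    r = random.randrange(N)
    child[r] = mutant[r]              # the starting point always comes from the mutant
    j, L = (r + 1) % N, 1
    while random.random() <= CR and L < N - 1:
        child[j] = mutant[j]
        j, L = (j + 1) % N, L + 1
    return child
```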
• 71. The number of successful Bernoulli experiments is the crossover length L. A starting point r is then randomly chosen between 1 and N. c_{i,n} inherits L parameters of the mutant v_{i,n} consecutively (in the cyclic sense) from the starting point r (inclusive); the remaining parameters of c_{i,n} come from x_{i,n}.
• 72. Consecutive binomial crossover (diagram: a starting point and a series of Bernoulli experiments decide which parameters flow from x_{i,n} and v_{i,n} into c_{i,n}).
• 73. Non-consecutive exponential crossover: a series of Bernoulli experiments of probability CR is carried out in the same way as in consecutive exponential crossover and used to determine the crossover length L. L parameters are then randomly chosen from v_{i,n} and inherited by c_{i,n}; x_{i,n} donates the remaining parameters.
• 76. The "survival of the fittest" principle is applied in selection: the trial offspring vector is compared with the target vector, and the one with the better fitness is admitted to the next generation.
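For a minimization problem, as in the examples that follow, the selection step is a one-line greedy comparison (the function name is an illustrative choice):

```python
def select_survivor(target, trial, f):
    """Greedy selection: the vector with the better (lower) fitness survives."""
    return trial if f(trial) <= f(target) else target

survivor = select_survivor([2, 2], [1, 1], f=sum)
```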
• 79. Consider the two-dimensional function f(x, y) = x² + y². Let's start with 5 candidate solutions randomly initiated in the range (−10, 10): X_{1,0} = [2, −1], X_{2,0} = [6, 1], X_{3,0} = [−3, 5], X_{4,0} = [−2, 6], X_{5,0} = [6, −7]. For the first target vector X_1, randomly select three other vectors, say X_2, X_4 and X_5.
• 80. The mutant vector is formed as V_{1,0} = X_{2,0} + F · (X_{4,0} − X_{5,0}), with F = 0.8. The trial offspring vector U_{1,0} is formed by exchanging components of V_{1,0} with the target vector X_{1,0}; set CR = 0.9. For the first component, let rand(0,1) = 0.6; since 0.6 < 0.9, U_{1,1,0} = V_{1,1,0} = 6 + 0.8 · (−8) = −0.4. For the second component, let rand(0,1) = 0.95 > 0.9, hence U_{1,2,0} = X_{1,2,0} = −1. The trial (offspring) is U_{1,0} = [−0.4, −1]. Finally: fitness of target (parent) = f(2, −1) = 5; fitness of trial (offspring) = f(−0.4, −1) = 1.16.
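The slide's arithmetic can be checked with a few lines of Python; the random draws 0.6 and 0.95 are the values assumed on the slide:

```python
def f(v):
    """The slide's objective: f(x, y) = x^2 + y^2."""
    return v[0] ** 2 + v[1] ** 2

X1, X2, X4, X5 = [2, -1], [6, 1], [-2, 6], [6, -7]
F, CR = 0.8, 0.9

# Mutant: V1 = X2 + F * (X4 - X5)
V1 = [X2[j] + F * (X4[j] - X5[j]) for j in range(2)]

# Crossover using the slide's draws (0.6 for the first component, 0.95 for the second):
draws = [0.6, 0.95]
U1 = [V1[j] if draws[j] <= CR else X1[j] for j in range(2)]
```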
• 82. We discuss a simple numerical example to illustrate the DE algorithm: minimize f(x) = x₁ + x₂ + x₃.
• 83. Initial population:
         Ind 1   Ind 2   Ind 3   Ind 4   Ind 5   Ind 6
  x1      0.68    0.92    0.22    0.12    0.40    0.94
  x2      0.89    0.92    0.14    0.09    0.81    0.63
  x3      0.04    0.33    0.40    0.05    0.83    0.13
  f(x)    1.61    2.17    0.76    0.26    2.04    1.70
• 84. Individuals 2 and 4 are randomly chosen; F = 0.80:
        Ind 2     Ind 4     Difference vector    Weighted D.V. (× 0.80)
  x1     0.92  −   0.12  =   0.80                 0.64
  x2     0.92  −   0.09  =   0.83                 0.66
  x3     0.33  −   0.05  =   0.28                 0.22
• 85. Mutant vector = Individual 6 + weighted difference vector:
        Weighted D.V.   Ind 6   Mutant vector
  x1     0.64            0.94    1.58
  x2     0.66            0.63    1.29
  x3     0.22            0.13    0.35
• 86. Crossover with CR = 0.50:
        Target   Mutant   Trial
  x1     0.68     1.58     1.58
  x2     0.89     1.29     0.89
  x3     0.04     0.35     0.04
  f(x)   1.61     3.22     2.51
  Since the target's fitness (1.61) is lower than the trial's (2.51), the target survives into the next generation.
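The tables above can be reproduced in a few lines; the crossover decisions (x1 from the mutant, x2 and x3 from the target) are taken from the slide's trial column rather than re-drawn at random:

```python
F = 0.8
ind2 = [0.92, 0.92, 0.33]
ind4 = [0.12, 0.09, 0.05]
ind6 = [0.94, 0.63, 0.13]
target = [0.68, 0.89, 0.04]

# Weighted difference vector and mutant, as on the slides:
weighted_dv = [F * (ind2[j] - ind4[j]) for j in range(3)]
mutant = [ind6[j] + weighted_dv[j] for j in range(3)]

# Trial vector per the slide's crossover outcome with CR = 0.50:
trial = [mutant[0], target[1], target[2]]
```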
• 87. Parameter guidelines: NP = 5 to 10 times the number of parameters in a vector. If solutions get stuck, take F = 0.5 and then increase F or NP. F ∈ [0.4, 1] is a very effective range. CR = 0.9 or 1 gives a quick solution.
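Putting the pieces together, a minimal DE/rand/1/bin loop using the guideline settings (F = 0.8, CR = 0.9, NP = 10 × D) might look like the sketch below. The sphere function, the bounds, and the generation budget are illustrative assumptions:

```python
import random

def de(f, bounds, F=0.8, CR=0.9, generations=200):
    """Minimal DE/rand/1/bin: initialise, then mutate, cross over and select."""
    D = len(bounds)
    NP = 10 * D                                  # rule of thumb from the slide
    pop = [[lo + random.random() * (hi - lo) for lo, hi in bounds]
           for _ in range(NP)]
    for _ in range(generations):
        for i in range(NP):
            # Mutation: X'_i = X_a + F * (X_b - X_c), a, b, c distinct from i.
            a, b, c = random.sample([k for k in range(NP) if k != i], 3)
            mutant = [pop[a][j] + F * (pop[b][j] - pop[c][j]) for j in range(D)]
            # Binomial crossover with a forced index j_rand.
            j_rand = random.randrange(D)
            trial = [mutant[j] if (random.random() <= CR or j == j_rand)
                     else pop[i][j] for j in range(D)]
            # Greedy selection: the better vector survives.
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

# Example: minimise the 2-D sphere function f(v) = x^2 + y^2.
best = de(lambda v: sum(x * x for x in v), bounds=[(-10, 10)] * 2)
```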
• 88. Summary of the algorithm (taken from: Water Resources Research Report by Vasan Arunachalam)
• 89. References:
[1] R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," J. Glob. Optim., vol. 11, pp. 341–359, 1997.
[2] S. Das and P. N. Suganthan, "Differential evolution: foundations, perspectives, and applications," SSCI, 2011.
[3] C. Lin, A. Qing, and Q. Feng, "A comparative study of crossover in differential evolution," pp. 675–703, 2011.
[4] D. Zaharie, "A comparative analysis of crossover algorithms in differential evolution," Proc. 2007, pp. 171–181, 2007.
[5] http://www1.icsi.berkeley.edu/~storn/code.html