Module 3
Genetic Algorithm
International conference papers on genetic algorithms
1. Evolutionary Simulation-Based Validation
(F. Corno, M. Sonza Reorda, G. Squillero, International Journal on Artificial Intelligence Tools (IJAIT), Vol. 14, 1-2, Dec. 2004)
Scope:
This paper describes evolutionary simulation-based validation, a new point in the spectrum of design validation techniques alongside pseudo-random simulation, designer-generated patterns, and formal verification. The proposed approach couples an evolutionary algorithm with a hardware simulator and fits painlessly into an existing industrial flow. Prototypical tools were used to validate gate-level designs, comparing them against both their RT-level specifications and different gate-level implementations. Experimental results show that the proposed method can effectively deal with realistic designs, discovering potential problems, and, although approximate in nature, it provides a high degree of confidence in the results and exhibits a natural robustness even when started from incomplete information.
2. Exploiting Symbolic Techniques within Genetic Algorithms for Power Optimization
(S. Chiusano, F. Corno, P. Prinetto, M. Rebaudengo, M. Sonza Reorda, ICTAI'97: 9th IEEE International Conference on Tools with Artificial Intelligence, Newport Beach, CA (USA), November 1997) C.V. Ramamoorthy Best Paper Award
Scope:
This paper proposes an optimization algorithm for reducing the power dissipation of a sequential circuit. The encoding of the different states in a Finite State Machine is modified to obtain a functionally equivalent circuit that exhibits reduced power dissipation. The algorithm is based on a newly proposed power estimation function that can quickly give an accurate estimate of the dissipated power without actually synthesizing the circuit. Given this estimate, a Genetic Algorithm provides a state re-encoding for the circuit. The estimation function is computed very efficiently by exploiting symbolic computations with Binary Decision Diagrams. The algorithm is experimentally shown to provide good results from the power-optimization point of view, at a limited cost in terms of area increase, when compared with similar approaches.
3. Hybrid Genetic Algorithms for the Traveling Salesman Problem
(P. Prinetto, M. Rebaudengo, M. Sonza Reorda, International Conference on Neural Networks and Genetic Algorithms, Innsbruck (A), April 1993)
Scope:
A comparative analysis is performed on an experimental basis among four different cross-over operators. In order to exploit the benefits of the different operators, a new one (called Mixed Cross-over) is introduced, trading off CPU-time requirements against the quality of the obtained results. A further operator is then proposed, whose goal is to include in the genetic mechanism some heuristic knowledge drawn from previously proposed local-optimization techniques. The performance of the new operator is discussed.
4. Thomas Bäck (Ed.): Proceedings of the 7th International Conference on Genetic Algorithms, East Lansing, MI, USA, July 19-23, 1997.
Scope:
The Seventh International Conference on Genetic Algorithms (ICGA-97) was held on July 19-23, 1997, at the Kellogg Center, Michigan State University, East Lansing, MI. The meeting brought together an international community from academia, government, and industry interested in the field of Evolutionary Computation, i.e., algorithms gleaned from models of natural evolution. Examples of such algorithms are Evolutionary Programming, Evolution Strategies, Genetic Algorithms, Genetic Programming, and Learning Classifier Systems, and similar paradigms used in evolving, e.g., Artificial Life, Cellular Automata, Computational Ecosystems, Cultural Algorithms, Fuzzy Systems, Immune Networks, Machine Learning, Multiagent Systems, Neural Networks, Simulated Annealing, and others.
Genetic Algorithm Definitions
Grefenstette
A genetic algorithm is an iterative procedure maintaining a population of structures that are candidate solutions to specific domain challenges. During each temporal increment (called a generation), the structures in the current population are rated for their effectiveness as domain solutions, and on the basis of these evaluations, a new population of candidate solutions is formed using specific genetic operators such as reproduction, crossover, and mutation.
Goldberg
Genetic algorithms combine survival of the fittest among string structures with a structured yet randomized information exchange to form a search algorithm with some of the innovative flair of human search. In every generation, a new set of artificial creatures (strings) is created using bits and pieces of the fittest of the old; an occasional new part is tried for good measure. While randomized, genetic algorithms are no simple random walk. They efficiently exploit historical information to speculate on new search points with expected improved performance.
Genetic Algorithms Overview
Developed by John Holland in 1975, Genetic Algorithms (GAs) are search algorithms based on the mechanics of the natural selection process (biological evolution). The most basic concept is that the strong tend to adapt and survive while the weak tend to die out. That is, optimization is based on evolution and the survival-of-the-fittest concept. GAs create an initial population of feasible solutions and then recombine them in a way that guides the search toward only the most promising areas of the state space.
Each feasible solution is encoded as a chromosome (string), also called a genotype, and each chromosome is given a measure of fitness via a fitness (evaluation or objective) function. The fitness of a chromosome determines its ability to survive and produce offspring. A finite population of chromosomes is maintained. GAs use probabilistic rules to evolve a population from one generation to the next. New generations of solutions are developed by genetic recombination operators. GAs solve problems by mimicking the processes nature uses, i.e., selection, crossover, mutation, and accepting, to evolve a solution to a problem. GAs are adaptive heuristic searches based on the evolutionary ideas of natural selection and genetics.
Optimization Algorithm - Genetic Algorithm
Optimization is the process of finding the best (optimal) solution for a problem. Optimization problems are centered around three factors:
1. An objective function which is to be minimized or maximized;
2. A set of unknowns, or variables, that affect the objective function;
3. A set of constraints that allows the unknowns to take on certain values but excludes others.
An optimization problem is defined as finding the values of the variables that minimize or maximize the objective function while satisfying the constraints.
Search Optimization Algorithm
The Evolutionary Algorithms include:
• Genetic Algorithms
• Genetic Programming
Evolutionary Algorithms are a subset of Evolutionary Computation, which is a subfield of Artificial Intelligence (AI). Genetic Algorithms (GAs) represent the main paradigm of Evolutionary Computation.
- GAs simulate natural evolution, mimicking the processes nature uses: selection, crossover, mutation, and accepting.
- GAs simulate the survival of the fittest among individuals over consecutive generations for solving a problem.
Development History
Basic genetics
All living organisms consist of cells.
• Each cell of a living thing contains chromosomes - strings of DNA
• Each chromosome contains a set of genes - blocks of DNA
• Each gene determines some aspect of the organism (like eye colour)
• A collection of genes is sometimes called a genotype
• A collection of aspects (like eye colour) is sometimes called a phenotype
General scheme of Evolutionary process
Biological Terminology
Chromosome: a set of genes; strings of DNA that serve as a blueprint for the organism. A chromosome contains a solution in the form of genes.
Gene: a part of a chromosome; a gene contains a part of the solution. Genes are either single bits or short blocks of adjacent bits that encode a particular element of the candidate solution.
Individual: same as chromosome.
Population: the number of individuals present, all with the same chromosome length.
Fitness: the value assigned to an individual based on how far or close the individual is from the solution; the greater the fitness value, the better the solution it contains.
Fitness Function: the function that assigns a fitness value to an individual. It is problem-specific.
Breeding: taking two fit individuals and intermingling their chromosomes to create two new individuals.
Mutation: changing a random gene in an individual.
Selection: selecting individuals for creating the next generation.
Working principle
A genetic algorithm begins with a set of solutions (represented by chromosomes) called the population. Solutions from one population are taken and used to form a new population. This is motivated by the possibility that the new population will be better than the old one. Solutions are selected according to their fitness to form new solutions (offspring); the more suitable they are, the more chance they have to reproduce.
Procedure Of Genetic Algorithms
The figure shows the flow chart for GAs. A GA for a particular problem must have the following five components:
1. A genetic representation for the potential solutions to the problem.
2. A way to create an initial population of potential solutions.
3. An evaluation function that plays the role of the environment, rating solutions in terms of their 'fitness'.
4. Genetic operators that alter the composition of the offspring.
5. Values for the various parameters that the genetic algorithm uses.
Outline of genetic algorithm
1. [Start] Generate a random population of n chromosomes (i.e., suitable solutions for the problem).
2. [Fitness] Evaluate the fitness f(x) of each chromosome x in the population.
3. [New Population] Create a new population by repeating the following steps until the new population is complete.
   (a) [Selection] Select two parent chromosomes from the population according to their fitness (the better the fitness, the bigger the chance to be selected).
   (b) [Crossover] With a crossover probability, cross over the parents to form new offspring (children). If no crossover is performed, the offspring are exact copies of the parents.
   (c) [Mutation] With a mutation probability, mutate the new offspring at each locus (position in the chromosome).
   (d) [Accepting] Place the new offspring in the new population.
4. [Replace] Use the newly generated population for a further run of the algorithm.
5. [Test] If the end condition is satisfied, stop, and return the best solution in the current population.
6. [Loop] Go to step 2.
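The steps above can be sketched as a short program. This is a minimal illustrative sketch, not the exact algorithm from any paper cited earlier; the bit-counting fitness function `onemax` and all parameter values are assumptions chosen for the example.

```python
import random

def onemax(chrom):
    # Toy fitness (assumed for illustration): number of 1-bits
    return sum(chrom)

def genetic_algorithm(n=20, length=16, generations=60, pc=0.9, pm=0.02, seed=1):
    rng = random.Random(seed)
    # [Start] generate a random population of n chromosomes
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(n)]
    for _ in range(generations):
        # [Fitness] evaluate f(x) for each chromosome x
        fits = [onemax(c) + 1 for c in pop]  # +1 keeps every weight positive
        new_pop = []
        # [New Population] repeat until the new population is complete
        while len(new_pop) < n:
            # [Selection] fitness-proportionate choice of two parents
            p1, p2 = rng.choices(pop, weights=fits, k=2)
            c1, c2 = p1[:], p2[:]
            # [Crossover] single cut point, applied with probability pc
            if rng.random() < pc:
                cut = rng.randrange(1, length)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            # [Mutation] flip each locus with probability pm
            for child in (c1, c2):
                for i in range(length):
                    if rng.random() < pm:
                        child[i] = 1 - child[i]
            # [Accepting] place the offspring in the new population
            new_pop += [c1, c2]
        # [Replace] use the new generation for the next iteration
        pop = new_pop[:n]
    # [Test] return the best solution in the final population
    return max(pop, key=onemax)
```

Running `genetic_algorithm()` typically evolves a population of random 16-bit strings toward the all-ones string within a few dozen generations.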
Genetic Representation
Representation, or encoding, of the problem at hand when applying a GA is a vital task. Encoding can be defined as the chromosomal representation of the problem. When GAs were initially introduced, the binary-string encoding technique was used; but when it came to industrial and other scientific applications, applying a GA directly using binary strings became a problem, because it was not a natural coding.
Chromosomes could be:
Bit strings (0101 ... 1100)
Real numbers (43.2 -33.1 ... 0.0 89.2)
Permutations of elements (E11 E3 E7 ... E1 E15)
Lists of rules (R1 R2 R3 ... R22 R23)
Program elements (genetic programming) ... any data structure ...
Binary Encoding
Binary representation: here encoding is done using a sequence of 1's and 0's. The length of the string is determined by the precision desired for the solution. Any integer can be converted into binary by repeatedly dividing it by 2.
Example:
A gene represents some data (eye colour, hair colour, ...), e.g. Gene: (11100010). A chromosome is an array of genes, in binary form:
Gene 1 (11000010)
Gene 2 (00001110)
Gene 3 (001111010)
Gene 4 (10100011)
A chromosome should in some way contain information about the solution it represents; this requires encoding. The most popular way of encoding is a binary string like
Chromosome 1: 1101100100110110
Chromosome 2: 1101111000011110
Each bit in the string represents some characteristic of the solution. Binary encoding gives many possible chromosomes even with a small number of alleles, i.e., possible settings for a trait. This encoding is often not natural for many problems, and sometimes corrections must be made after crossover and/or mutation.
Two-variable function represented by a 4-bit string for each variable:
Let two variables x1, x2 be coded as (1011 011). Every variable has a lower and an upper limit, Xi^L ≤ Xi ≤ Xi^U. Because a 4-bit string can represent integers from 0 to 15, (0000 0000) and (1111 1111) represent the points (X1^L, X2^L) and (X1^U, X2^U) for x1, x2 respectively. Thus, an n-bit string can represent integers from 0 to 2^n - 1, i.e., 2^n integers.
Consider a 4-bit string (0111). Its decoded value is
2^3*0 + 2^2*1 + 2^1*1 + 2^0*1 = 7
Since Xi^L and Xi^U correspond to (0000) and (1111), the equivalent value for any 4-bit string can be obtained as
Xi = Xi^L + (Xi^U - Xi^L)/(2^n - 1) * (decoded value of string)
For example, for a variable Xi, let Xi^L = 2 and Xi^U = 17; find what value the 4-bit string Xi = (1010) would represent.
Decoded value: Si = 2^3*1 + 2^2*0 + 2^1*1 + 2^0*0 = 10, so
Xi = 2 + (17 - 2)/(2^4 - 1) * 10 = 12
Permutation encoding
Permutation encoding can be used in ordering problems, such as the TSP or a task-ordering problem.
1. In permutation encoding, every chromosome is a string of numbers that represent a position in a sequence.
Chromosome A: 1 5 3 2 6 4 7 9 8
Chromosome B: 8 5 6 7 2 3 1 4 9
2. Permutation encoding is useful for ordering problems. For some problems, crossover and mutation corrections must be made to leave the chromosome consistent.
Example
In the TSP there are cities and given distances between them. The traveling salesman has to visit all of them, but he does not want to travel more than necessary. Find a sequence of cities with a minimal traveled distance. Here, the encoded chromosome describes the order of cities the salesman visits.
Tree Encoding
In tree encoding, every chromosome is a tree of some objects, such as functions or commands in a programming language. Tree encoding is useful for evolving programs or any other structures that can be encoded as trees. The crossover and mutation can be done in a relatively easy way.
Genetic operators
Genetic operators are used in genetic algorithms to maintain genetic diversity. Genetic diversity, or variation, is a necessity for the process of evolution. Genetic operators are analogous to those which occur in the natural world.
There are mainly three genetic operators:
• Reproduction (or Selection)
• Crossover (or Recombination)
• Mutation
SELECTION
Selection applies selection rules and random behavior to choose the next population. In selection, the parents must be selected based on their fitness: individuals with a higher fitness must have a higher probability of having offspring. With the fitness values Fj calculated, the probability of selecting the i-th chromosome is
Pi = Fi / (F1 + F2 + ... + Fpop_size)
and the cumulative probability is
qi = P1 + P2 + ... + Pi
Generate a random number r from the range [0, 1]. If r ≤ q1, select the first chromosome; otherwise select the i-th chromosome (2 ≤ i ≤ pop_size) such that q(i-1) < r ≤ qi.
There are several methods for selection.
• Roulette-wheel selection.
• Tournament selection.
• Rank selection.
• Steady-state selection.
• Boltzmann selection.
• Scaling selection
Roulette-Wheel Selection
Roulette is the classical selection operator for the generational GA, as described by Goldberg. Each member of the pool is assigned space on a roulette wheel proportional to its fitness. The members with the greatest fitness have the highest probability of selection. This selection technique works only for a GA which maximizes its objective function.
Concept: the chance of an individual's being selected is proportional to its fitness, greater or less than its competitors' fitness.
The probability of parenthood is proportional to fitness. The wheel is spun until two parents are selected; the two parents create offspring, and the process is repeated to create a new population for the next generation.
Roulette-wheel selection has problems if the fitness changes by orders of magnitude. If two individuals have a much higher fitness, they could be the parents for every child in the next generation.
Reasons Not to Use the Roulette Wheel
• If the fitness values of all individuals are very close, the parents will be chosen with equal probability, and the function will cease to optimize.
• Roulette selection is very sensitive to the form of the fitness function and generally requires modifications to work at all.
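Despite these caveats, roulette-wheel selection is easy to implement. A minimal sketch of one spin of the wheel (function and parameter names are assumptions for illustration):

```python
import random

def roulette_select(population, fitness, rng=random):
    # Each member gets wheel area proportional to its fitness; spin once.
    total = sum(fitness)
    r = rng.uniform(0, total)
    cumulative = 0.0
    for individual, f in zip(population, fitness):
        cumulative += f
        if r <= cumulative:
            return individual
    return population[-1]  # guard against floating-point round-off
```

With fitnesses [1, 9], the second individual is selected roughly 90% of the time.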
Boltzmann Selection
Simulated annealing is a method used to minimize or maximize a function. This method simulates the process of slow cooling of molten metal to achieve the minimum function value in a minimization problem. The cooling phenomenon is simulated by controlling a temperature-like parameter introduced with the concept of the Boltzmann probability distribution. A system in thermal equilibrium at a temperature T has its energy distributed according to the probability
P(E) = exp(-E/kT), where k is the Boltzmann constant.
This expression suggests that a system at a higher temperature has an almost uniform probability of being at any energy state, but at a lower temperature it has only a small probability of being at a higher energy state.
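A minimal sketch of Boltzmann-weighted selection, directly applying P(E) = exp(-E/kT); the function names and the use of energies as selection weights are assumptions for illustration:

```python
import math
import random

def boltzmann_weights(energies, T, k=1.0):
    # P(E) is proportional to exp(-E / (k*T)): at high T the weights are
    # nearly uniform; at low T, low-energy (better) states dominate.
    return [math.exp(-e / (k * T)) for e in energies]

def boltzmann_select(population, energies, T, rng=random):
    # Pick one individual with Boltzmann-distributed probability
    return rng.choices(population,
                       weights=boltzmann_weights(energies, T), k=1)[0]
```

At a high temperature the weights are nearly equal; at a low temperature the lower-energy individual dominates, mirroring the behavior described above.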
Crossover
Two parents produce two offspring. There is a chance that the chromosomes of the two parents are copied unmodified as offspring, and a chance that they are randomly recombined (crossover) to form the offspring. Generally the chance of crossover is between 0.6 and 1.0. Crossover is usually the primary operator, with mutation serving only as a mechanism to introduce diversity into the population. However, there are a number of crossover operators that have been used on binary and real-coded GAs. The various crossover operators are:
1. Single-point crossover
2. Double-point crossover
3. Multiple-point crossover
4. Uniform crossover
5. Matrix crossover
6. Random crossover
7. Permutation-based crossover
8. Partially mapped crossover (PMX)
9. Ordered crossover (OX)
10. Position-based crossover
11. Order-based crossover
12. Cycle crossover (CX)
13. Sub-tour exchange crossover
14. Heuristic crossover
1. Single-point Crossover
Given two parents, single-point crossover generates a cut point and recombines the first part of the first parent with the second part of the second parent to create one offspring. It then recombines the second part of the first parent with the first part of the second parent to create a second offspring.
Parents:  0 1 1 1 1
          1 1 1 1 1
Children: 1 1 1 1 1
          0 1 1 1 1
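A minimal sketch of single-point crossover (names assumed):

```python
import random

def single_point_crossover(p1, p2, rng=random):
    # Cut both parents at one random point and swap the tails
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
```

Applied to the parents 01111 and 11111 above, any cut point reproduces the children shown.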
2. Double-point Crossover
Two-point crossover is very similar to single-point crossover except that two cut points are generated instead of one.
Parents:  0 1 1 0 1
          1 1 1 1 1
Children: 1 1 1 1 1
          0 1 1 0 1
3. Multiple-point Crossover
In multiple-point crossover, the crossover operation takes place at even- or odd-numbered sites. In the case of an even number of crossover sites, the chromosomal string is viewed as a ring with no beginning and end, and the crossover sites are selected around the circle at random. In the case of an odd number of crossover sites, a crossover point is assumed at the beginning of the string.
Parents:  1 1 1 1 1 1 1 1 1 1
          0 0 0 0 0 0 0 0 0 0
Children: 0 0 1 0 0 1 1 1 0 0
          1 1 0 1 1 0 0 0 1 1
4. Uniform Crossover
In uniform crossover, the value of the first parent's gene is assigned to the first offspring and the value of the second parent's gene to the second offspring with probability 0.5. With probability 0.5 the value of the first parent's gene is assigned to the second offspring and the value of the second parent's gene to the first offspring.
Parents:  1 1 1 0 0 1 0 0
          1 1 0 1 0 1 1 1
Children: 1 1 0 0 0 1 0 1
          1 1 1 1 0 1 1 0
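A minimal sketch of uniform crossover (names assumed):

```python
import random

def uniform_crossover(p1, p2, rng=random):
    # Each gene goes to child 1 from parent 1 (and to child 2 from
    # parent 2) with probability 0.5; otherwise the assignment is swapped.
    c1, c2 = [], []
    for g1, g2 in zip(p1, p2):
        if rng.random() < 0.5:
            c1.append(g1); c2.append(g2)
        else:
            c1.append(g2); c2.append(g1)
    return c1, c2
```

Note that at every position the two children together carry exactly the two parental alleles.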
5. Matrix Crossover
Matrix crossover is used on two-dimensional arrays. The rows and columns of the crossover sites are selected randomly; the two crossover sites thus define a region of the matrix, and the selected regions are exchanged between the parents.
Parents:
1 0 1     0 0 1
1 0 1     1 0 0
0 1 1     1 1 0
Children:
0 0 1     1 0 1
1 0 1     1 0 0
0 1 0     1 1 1
6. Random Crossover
Random crossover creates offspring randomly within a hyper-rectangle defined by the parent points. There are two types:
i) Flat crossover: an offspring is produced by uniformly picking a value for each gene from the range of values spanned by the corresponding parent genes.
ii) Blend crossover: incorporates more variance by picking values from an interval around the two parents.
7. Permutation-based Crossover
Permutation-based crossover was developed mainly for combinatorial optimization problems such as the travelling salesman problem, machine scheduling, resource allocation, etc. These operators function based on two approaches:
i) Canonical approach: an extension of double-point and multiple-point crossover of binary strings; it is based on a blind random mechanism. There is no guarantee that an offspring produced by this method is better than its parents.
ii) Heuristic approach: applying heuristics in crossover tends to generate improved offspring.
8. Partially Mapped Crossover (PMX)
Goldberg and Lingle proposed this method. It adopts a repairing procedure, known as the relationship mapping, in order to prevent illegal duplication of genes, and is an extension of double-point crossover. In the figure, two substrings are selected and exchanged between the parent chromosomes.
Parents:  1 2 4 6 5 3 9 8 7
          3 8 5 1 2 9 6 4 7
Mapping relationship: 9 ↔ 3, 2 ↔ 5 ↔ 4, 1 ↔ 6
Children: 6 4 5 1 2 9 3 8 7
          9 8 4 6 5 3 1 2 7
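A minimal sketch of PMX with explicit cut points (the helper name `pmx` is assumed). With the parents above and the exchanged segment at positions 2-6 (0-indexed, half-open), it reproduces the children shown, including the 2 ↔ 5 ↔ 4 mapping chain:

```python
def pmx(p1, p2, cut1, cut2):
    # Partially mapped crossover: copy the other parent's segment, then
    # repair duplicated genes outside the segment via the mapping it induces.
    def make_child(a, b):
        child = a[:]
        segment = b[cut1:cut2]
        child[cut1:cut2] = segment
        mapping = dict(zip(segment, a[cut1:cut2]))
        for i in list(range(cut1)) + list(range(cut2, len(a))):
            g = child[i]
            while g in segment:       # follow the mapping chain
                g = mapping[g]
            child[i] = g
        return child
    return make_child(p1, p2), make_child(p2, p1)
```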
9. Ordered Crossover (OX)
Davis proposed this method, which is an extension of partially mapped crossover with a different repairing procedure. A substring is selected randomly from one of the parents and copied to the same positions in the child string.
Parents:  4 5 2 1 3 7 8 6 9
          5 7 3 2 1 4 6 8 9
Children: 5 2 4 1 3 7 8 6 9
          5 3 7 2 1 4 6 8 9
12. Cycle Crossover (CX)
Oliver, Smith and Holland proposed this crossover. The nodes to be selected from a parent are defined by a cycle according to the corresponding positions between the parents.
Parent 1: 4 5 2 1 3 8 7 6 9
Parent 2: 5 7 3 2 1 4 6 8 9
Child 1:  4 5 3 2 1 8 7 6 9
Child 2:  5 7 2 1 3 4 6 8 9
Cycle for child 1: 4 → 5 → 7 → 6 → 8 → 4; cycle for child 2: 5 → 4 → 8 → 6 → 7 → 5.
Child 1 keeps the cycle positions from parent 1 and fills the remaining positions from parent 2; child 2 does the opposite.
14. Heuristic Crossover
A random node is picked at the start; then its shortest edge that does not lead to a cycle is selected. If both edges lead to a cycle, a random node is picked again, and the above process is continued until all the nodes are checked and a new offspring is produced. This approach is very efficient compared with other permutation operators.
Mutation
Mutation is a genetic operator used to maintain genetic diversity from one generation of a population of genetic-algorithm chromosomes to the next. There are three types of mutation:
1. Boundary mutation
2. Uniform mutation
3. Non-uniform mutation
1. Uniform Mutation
A mutation operator that replaces the value of the chosen gene with a uniform random value selected between the user-specified upper and lower bounds. This mutation operator can only be used for integer and float genes. Consider a chromosome x^t = [x1, x2, ..., xm]. A random position k ∈ [1, m] is selected, and the offspring x^(t+1) = [x1, ..., x'k, ..., xm] is produced, where x'k is a random value generated from the range [xk^L, xk^U].
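A minimal sketch of uniform mutation over a real-coded chromosome (names assumed; `lower` and `upper` hold the per-gene bounds):

```python
import random

def uniform_mutation(chrom, lower, upper, rng=random):
    # Replace one randomly chosen gene with a uniform random value
    # drawn from that gene's [lower, upper] bounds.
    k = rng.randrange(len(chrom))
    child = chrom[:]
    child[k] = rng.uniform(lower[k], upper[k])
    return child
```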
2. Boundary Mutation
This mutation operator replaces the chosen gene with either its lower or its upper bound, chosen randomly. It can be used for integer and float genes: the gene xk is replaced by either xk^L or xk^U.
3. Non-uniform Mutation
With the non-uniform mutation operator, the probability that the amount of mutation goes to 0 increases with each generation. It keeps the population from stagnating in the early stages of the evolution, and it tunes the solution in later stages. This mutation operator can only be used for integer and float genes.
x'k = xk + Δ(t, xk^U - xk), if a random digit is 0
x'k = xk - Δ(t, xk - xk^L), if a random digit is 1
One's Complement Operator
This operator is known as the unary operator (~). It causes the bits of its operand to be inverted.
Input:  1 0 1 1 0 0 1 1
Output: 0 1 0 0 1 1 0 0
Logical bitwise operators
These operators can be further classified into:
1. bitwise AND
2. bitwise exclusive-OR (XOR)
3. bitwise OR
Truth table for the bitwise operators:
X Y | AND XOR OR
0 0 |  0   0   0
0 1 |  0   1   1
1 0 |  0   1   1
1 1 |  1   0   1
AND operator:              OR operator:
px  0 1 0 1 1 1 0          px  0 1 0 1 1 1 0
qx  1 1 1 1 0 1 1          qx  0 1 1 1 0 1 1
cxy 0 1 0 1 0 1 0          cxy 0 1 1 1 1 1 1
Shift Operators
There are two types of shift operator:
1. Shift-left (<<) operator
2. Shift-right (>>) operator
Shift left:
1 0 0 1 1 1 → 0 0 1 1 1 0
Shift right:
1 0 0 1 1 1 → 0 1 0 0 1 1
Masking Operator
Masking transforms a given bit pattern into another form with the help of a logical bitwise operation; a mask operand is used.
px     1 0 1 0 1 1 1 0
mask   0 1 1 1 0 0 1 1
output 0 0 1 0 0 0 1 0
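The bit-level operators above map directly onto Python's integer operators; the fixed-width helpers below are assumptions for illustration (Python integers are unbounded, so a mask keeps results within the register width):

```python
def ones_complement(x, bits=8):
    # Unary ~ inverts every bit; the mask keeps the result to `bits` bits
    return ~x & ((1 << bits) - 1)

def shift_left(x, bits=6):
    # Shift left by one inside a fixed-width register: MSB drops out, 0 enters
    return (x << 1) & ((1 << bits) - 1)

def mask(pattern, m):
    # Masking: a bitwise AND with the mask operand transforms the pattern
    return pattern & m
```

These reproduce the worked patterns: ~10110011 gives 01001100, 100111 shifted left gives 001110 and shifted right gives 010011, and 10101110 masked with 01110011 gives 00100010.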
Mutation schemas are:
1. Inversion.
2. Insertion.
3. Displacement.
4. Reciprocal Exchange Mutation.
Insertion
A node is selected at random and inserted at a random position.
px 2 5 4 9 6 8 1 3
cx 2 6 4 9 5 8 1 3
Displacement
A substring is selected at random and inserted at a random position.
px 2 5 4 9 6 8 1 3
cx 2 6 4 5 9 8 1 8
Inversion
Two positions are randomly selected, and the substring between these two positions is inverted.
px 2 5 4 9 6 8 1 3
cx 2 5 8 6 9 4 1 3
Reciprocal Exchange Mutation
Select two positions at random, then swap the nodes at these positions.
px 2 5 4 9 6 8 1 3
cx 2 6 4 9 5 8 1 3
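Two of these mutation schemas sketched in code (names assumed); both preserve the permutation property required by ordering problems:

```python
import random

def reciprocal_exchange(chrom, rng=random):
    # Pick two positions at random and swap the nodes there
    i, j = rng.sample(range(len(chrom)), 2)
    child = chrom[:]
    child[i], child[j] = child[j], child[i]
    return child

def inversion(chrom, rng=random):
    # Invert the substring between two randomly chosen positions
    i, j = sorted(rng.sample(range(len(chrom)), 2))
    return chrom[:i] + chrom[i:j + 1][::-1] + chrom[j + 1:]
```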
Applications of Genetic Algorithms
1. Traveling Salesman Problem
2. Decoding A Secret Message
3. Robot Trajectory Planning
4. Optimizing Artificial Neural Nets
1. Traveling Salesman Problem
The goal is to find the shortest route for a salesperson to take in visiting N cities. The cost function for the simplest form of the problem is just the distance traveled by the salesperson for a given ordering, where (xn, yn) are the coordinates of the n-th city visited:
cost = sum over n = 0..N of sqrt((xn - xn+1)^2 + (yn - yn+1)^2)
Let N = 13 cities and put the starting and ending point at the origin, so (x0, y0) = (xN+1, yN+1) = (0, 0). There are a total of 13!/2 = 3.1135 x 10^9 possible combinations to check. Assume that all the cities lie in a rectangle and that the minimum distance is 14. The GA parameters are:
Npop = 400, Nkeep = 200, mutation rate = 0.04
Here we use permutation encoding, e.g., [ 2 5 3 1 4 6 ].
The crossover operator is a variation of cyclic crossover:
parent 1: [ 1 5 3 6 4 2 ]
parent 2: [ 4 2 1 3 6 5 ]
Randomly select a location to exchange:
offspring 1: [ 4 5 3 6 4 2 ]
offspring 2: [ 1 2 1 3 6 5 ]
Next steps:
offspring 1: [ 4 5 3 6 6 2 ]
offspring 2: [ 1 2 1 3 4 5 ]
offspring 1: [ 4 5 3 3 6 2 ]
offspring 2: [ 1 2 1 6 4 5 ]
offspring 1: [ 4 5 1 3 6 2 ]
offspring 2: [ 1 2 3 6 4 5 ]
The process iterates until we return to the first exchanged site. The mutation operator randomly chooses a string, selects two random sites within that string, and exchanges the integers at those sites.
Convergence of the genetic algorithm: if we do not have an obvious minimum path, note that the optimal solution will have no crossing paths, so plot the solution and check.
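The TSP cost function with the start and end at the origin can be sketched as follows (the name `tour_cost` is an assumption):

```python
import math

def tour_cost(xs, ys):
    # Cost = total Euclidean distance of the route, starting and
    # ending at the origin (x0, y0) = (0, 0)
    pts = [(0.0, 0.0)] + list(zip(xs, ys)) + [(0.0, 0.0)]
    return sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
```

For two cities at (3, 0) and (3, 4), the closed tour 0 → (3, 0) → (3, 4) → 0 costs 3 + 4 + 5 = 12.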
2. Decoding a Secret Message
This application uses a GA to break a secret code. A message consisting of letters and spaces is encoded by randomly changing one letter to another letter. If the message uses every letter in the alphabet plus a space, then there are a total of 27! possible codes, with only one being correct. If the message uses only S symbols, then there are 27! - S! possible encodings that work. A chromosome consists of 27 genes with unique values from 1 to 27: a 1 corresponds to a space, and 2 through 27 correspond to the letters of the alphabet. Letters and spaces in the message receive the appropriate numeric values.
The cost is calculated by subtracting the guess of the message from the known message, taking the absolute value, and summing:
cost = sum over n = 1..N of |message(n) - guess(n)|
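The cost above, sketched in code (the name `message_cost` is an assumption; messages are lists of the numeric codes 1-27):

```python
def message_cost(message, guess):
    # Sum of absolute differences between the numeric codes of the
    # known message and the current guess
    return sum(abs(m - g) for m, g in zip(message, guess))
```

A cost of 0 means the guessed coding reproduces the known message exactly.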
3. Optimizing Artificial Neural Nets
In the training process of neural networks, the weights and bias values are optimized to produce the desired output; a GA can compute the optimum weights and biases. To do this, we use a two-layer neural network with log-sigmoid transfer functions. The goal is to compute the optimum weights and biases of the ANN approximating
f(x) = 12/x^2 * cos(x) + 1/x, for 1 ≤ x ≤ 5
using a GA. The GA chromosome is made up of potential weights and biases. The GA cost function computes the mean square difference between the current guess of the function and the exact function evaluated at specific points in x. The function computed from the neural network with GA-hybrid training matches the known curve quite well.
Other Applications
• Locating an emergency response unit
• Stealth design
• Building dynamic inverse models
• Combining GAs with simulations - air pollution receptor modeling
• Antenna array design
Support Vector Machine
The Support Vector Machine (SVM) is one of the most well-studied and widely used learning algorithms for binary classification. Extensions of SVMs exist for a variety of other learning problems, including regression, multiclass classification, ordinal regression, ranking, structured prediction, and many others. Similar to perceptrons, SVMs aim to find a hyperplane that linearly separates data points belonging to different classes. In addition, SVMs aim to find the hyperplane that is least likely to overfit the training data.
Support Vector Machines are based on the concept of decision planes that define decision boundaries. A decision plane is one that separates a set of objects having different class memberships. A schematic example is shown in the illustration below.
In this example, the objects belong either to class GREEN or class RED. The separating line defines a boundary on the right side of which all objects are GREEN and to the left of which all objects are RED. Any new object (white circle) falling to the right is labeled, i.e., classified, as GREEN (or classified as RED should it fall to the left of the separating line).
We are given a set of n points (vectors) such that each xi is a vector of length m, and each belongs to one of two classes, which we label +1 and -1. So our training set is
(x1, y1), (x2, y2), ..., (xn, yn), with xi ∈ R^m and yi ∈ {+1, -1} for all i.
We want to find a separating hyperplane
w · x + b = 0
that separates these points into the two classes: the positives (class +1) and the negatives (class -1), assuming that they are linearly separable.
Separating Hyperplane
Suppose we choose a hyperplane (seen below) that is close to some sample xi. Now suppose we have a new point x that should be in class -1 and is close to xi. Using our classification function f(x), this point is misclassified: poor generalization.
So the hyperplane should be as far as possible from any sample point, as shown in the figure below. This way, new data that is close to the old samples will be classified correctly, which gives good generalization.
Linearly Separable Data - Hard-Margin SVM
The SVM idea is to maximize the distance between the hyperplane and the closest sample point. As shown in the figure below, in the optimal hyperplane:
distance to the closest negative point = distance to the closest positive point.
The SVM's goal is to maximize the margin, which is twice the distance d between the separating hyperplane and the closest sample.
Why is it the best?
• Robust to outliers, as we saw, and thus strong generalization ability.
• It has proved itself to have better performance on test data in both practice and theory.
Support vectors are the samples closest to the separating hyperplane.
A training sample S = ((x1; y1); : : : ; (xm; ym)) 2 (R { -1; 1}) is said to
be linearly separable if there exists a linear classier
h(x) = sign(w.x + b)
which classies all examples in S correctly, i.e. for which yi(w.xi + b)  0 {1;
: : : ;m}g. For example, Figure (left) shows a training sample in R2
that is
linearly separable, together with two possible linear classiers that separate the
data correctly.
Fig: Left: A linearly separable data set, with two possible linear classifiers
that separate the data. Blue circles represent class label 1 and red crosses −1; the
arrow represents the direction of positive classification. Right: The same data
set and classifiers, with the margin of separation shown.
Although both classifiers separate the data, the distance or margin with which
the separation is achieved is different; this is shown in Figure 1 (right). The
SVM algorithm selects the maximum margin classifier, i.e. the linear classifier
that separates the training data with the largest margin. More precisely, define
the (geometric) margin of a linear classifier h(x) = sign(w.x + b) on an example
(xi, yi) ∈ R^n × {−1, 1} as
γi = yi(w.xi + b) / ||w||
where ||w|| denotes the Euclidean norm of w. (Note that γi is the distance of xi
from the hyperplane [w.x + b = 0].)
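A quick numerical sketch of this margin formula, with an invented hyperplane chosen so that ||w|| = 5; a positive γi means the example is on the correct side of the hyperplane:

```python
import math

# Geometric margin gamma_i = y_i * (w.x_i + b) / ||w||.
# The hyperplane and points below are illustrative only.

def geometric_margin(w, b, x, y):
    norm = math.sqrt(sum(wi * wi for wi in w))
    return y * (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

w, b = [3.0, 4.0], -5.0                        # ||w|| = 5
print(geometric_margin(w, b, [2.0, 1.0], +1))  # (3*2 + 4*1 - 5)/5 = 1.0
print(geometric_margin(w, b, [0.0, 0.0], -1))  # -1 * (0 - 5)/5 = 1.0
```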
Let us look at our decision boundary. The separating hyperplane's equation
is: w^T x + b = 0, where w ∈ R^m, x ∈ R^m, b ∈ R. Note that w/||w|| is orthogonal to the
separating hyperplane and its length is 1. Let γi be the distance between the
hyperplane and some training example xi. So γi is the length of the segment
from xi to its projection p onto the hyperplane.
Learning linear SVM
It is convenient to represent the classes by +1 and -1, using y = +1 if w.x + b ≥ 0
and y = -1 if w.x + b < 0. w can be rescaled such that for all points x lying on the
respective boundaries it holds that w.x + b = 1 or w.x + b = -1. These points are
called the support vectors.
The task of learning a linear SVM consists of estimating the parameters w
and b. The first criterion is that all points in the training data must be classified
correctly: w.xi + b ≥ 1 if yi = 1, and w.xi + b ≤ -1 if yi = -1. This can be re-written
as: yi(w.xi + b) ≥ 1 for 1 ≤ i ≤ N.
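These N constraints can be checked directly. The sketch below tests whether a candidate (w, b) satisfies yi(w.xi + b) ≥ 1 on a toy, linearly separable data set; all values are invented for illustration.

```python
def satisfies_margin(w, b, data):
    """True if y * (w.x + b) >= 1 holds for every (x, y) pair."""
    return all(y * (sum(wi * xi for wi, xi in zip(w, x)) + b) >= 1
               for x, y in data)

data = [([2.0, 2.0], +1), ([3.0, 1.0], +1),
        ([0.0, 0.0], -1), ([1.0, -1.0], -1)]
print(satisfies_margin([1.0, 1.0], -3.0, data))    # True: every margin >= 1
print(satisfies_margin([0.25, 0.25], -0.5, data))  # False: margins too small
```

Learning the SVM then amounts to finding, among all (w, b) that pass this check, the one with the smallest ||w||, which maximizes the margin 2/||w||.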
  • 1. Module 3 Genetic Algorithm International conference paper on genetic algorithms 1.Evolutionary Simulation-Based Validation (F. Corno, M. Sonza Reorda, G. Squillero International Journal on Articial Intelligence Tools (IJAIT), Vol. 14, 1-2, Dec. 2004) Scope: This paper describes evolutionary simulation-based validation, a new point in the spectrum of design validation techniques, besides pseudo-random simu- lation, designer-generated patterns and formal verication. The proposed ap- proach is based on coupling an evolutionary algorithm with a hardware simu- lator, and it is able to t painlessly in an existing industrial ow. Prototypical tools were used to validate gate-level designs, comparing them against both their RT-level specications and dierent gate-level implementations. Experimental results show that the proposed method is eectively able to deal with realistic designs, discovering potential problems, and, although approximate in nature, it is able to provide a high degree of condence in the results and it exhibits a natural robustness even when used starting from incomplete information 2.Exploiting Symbolic Techniques within Genetic Algorithms for Power Op- timization (S. Chiusano, F. Corno, P. Prinetto, M. Rebaudengo, M. Sonza Reorda IC- TAI97: 9th IEEE International Conference on Tools with Articial Intelligence, Newport Beach, CA (USA), November 1997 )CV. Ramamoorthy Best Paper Award Scope: This paper proposes an optimization algorithm for reducing the power dis- sipation in a sequential circuit. The encoding of the dierent states in a Finite State Machine is modied to obtain a functionally equivalent circuit that ex- hibits a reduced power dissipation. The algorithm is based on a newly proposed power estimation function, that is able to quickly give an accurate estimate of the dissipated power without actually synthesizing the circuit. Given this esti- mate, a Genetic Algorithm provides a state re-encoding for the circuit. 
The esti- mation function is computed in a very ecient way by exploiting some symbolic computations with Binary Decision Diagrams. The algorithm is experimentally shown to provide good results from the power optimization point of view, at a limited cost in terms of area increase, when compared with similar approaches. 1
  • 2. 3.Hybrid Genetic Algorithms for the Traveling Salesman Problem (P. Prinetto, M. Rebaudengo, M. Sonza Reorda International Conference on Neural Networks and Genetic Algorithms, Innsbruck (A), Aprile 1993) Scope: A comparative analysis is performed on an experimental basis among four dierent cross-over operators. In order to exploit the benets of the dierent op- erators, a new one (called Mixed Cross-over) is introduced, trading-o the CPU time requirements and the obtained results. A new operator is then proposed, whose goal is to include in the genetic mechanism some heuristic knowledge drawn from the already proposed local-optimization techniques. The perfor- mance of the new operator is discussed. 1. 4.Thomas Bäck (Ed.): Proceedings of the 7th International Conference on Genetic Algorithms, East Lansing, MI, USA, July 19-23, 1997. Scope: The Seventh International Conference on Genetic Algorithms (ICGA-97) will be held on July 19-23, 1997, at the Kellogg Center, Michigan State University, East Lansing, MI. This meeting will bring together an international community from academia, government and industry interested in the eld of Evolutionary Computation, i.e., algorithms gleaned from models of natural evolution. Exam- ples of such algorithms are: Evolutionary Programming, Evolution Strategies, Genetic Algorithms, Genetic Programming, Learning Classier Systems, and similar paradigms as used in evolving e.g., Articial Life, Cellular Au- tomata, Computational Ecosystems, Cultural Algorithms, Fuzzy Systems, Im- mune Networks, Machine Learning, Multiagent Systems, Neural Networks, Sim- ulated Annealing and others. Genetic Algorithm Denitions Grefenstette A genetic Algorithm is an iterative procedure maintaining a population of structures that are candidate solutions to specic domain challenges. 
During each temporal increment (called a generation), the structures in the current population are rated for their eectiveness as domain solutions, and on the basis of these evaluations, a new population of candidate solutions is formed using specic genetic operators such as reproduction, crossover, and mutation. Goldberg They combine survival of the ttest among string structures with a struc- tured yet randomized information exchange to form a search algorithm with 2
  • 3. some of the innovative air of human search. In every generation, a new set of articial creatures (strings) is created using bits and pieces of the ttest of the old; an occasional new part is tried for good measure. While randomized, genetic algorithms are no simple random walk. They eciently exploit his- torical information to speculate on new search points with expected improved performance. Genetic Algorithms Overview Developed by John Holland in 1975 .Genetic Algorithms (GAs) are search algorithms based on the mechanics of the natural selection process (biological evolution). The most basic concept is that the strong tend to adapt and survive while the weak tend to die out. That is, optimization is based on evolution, and the Survival of the ttest concept. GAs have the ability to create an initial population of feasible solutions, and then recombine them in a way to guide their search to only the most promising areas of the state space. Each feasible solution is encoded as a chromosome (string) also called a genotype, and each chromosome is given a measure of tness via a tness (eval- uation or objective) function.The tness of a chromosome determines its ability to survive and produce ospring. A nite population of chromosomes is main- tained.GAs use probabilistic rules to evolve a population from one generation to the next. The generations of the new solutions are developed by genetic re- combination operators.GAs are ways of solving problems by mimicking process nature uses ;i.e Selection, Cross over, Mutation, and Accepting to evolve a solu- ton to a problem. GAs are adaptive heuristic search based on the evolutionary ideas of natural selection and genetics. Optimization Algorithm Genetic Algorithm Optimization is the process that nds the best,Optimal, solution for the problem.The Optimization problem are centered around three factors: 1. An Objective Function which is to be minimized or maximized.; 2. 
A set of unknows or Variables that aect the objective function. 3. A Set of constraints that allows unknown to take on certain values but exclude others. An Optimization problem is dened as nding the value of variables that mini- mize or maximize the objective function while satisfying the constraints. 3
  • 4. Search Optimization Algorithm The Evolutionary Algorithms include: • Genetic Algorithms • Genetic Programming. Evolutionary Algorithms are subset of Evolutionary Computation. Which is a subeld of Articial Intelligence(AI). Genetic Algorithms (GAs) represent the main paradigm of Evolutionary Computation. -GAs simulate natural evolution.mimicking process the nature uses: Selection,Cross over,Mutation, and Accepting. -GAs simulate the survival of the ttest among individuals over consecutive generation for solving a problem. Development History 4
  • 5. Basic genetics All living organism consists of cells • Each cell of a living thing contains chromosomes - strings of DNA • Each chromosome contains a set of genes - blocks of DNA • Each gene determines some aspect of the organism (like eye colour) • A collection of genes is sometimes called a genotype • A collection of aspects (like eye colour) is sometimes called a phenotype General scheme of Evolutionary process Biological Terminology Chromosome : A Set of gene. Strings of DNAthat serve as a blueprint for the organism. A Chromosome contains a solution in form of genes. Gene : A part of chromosome ; a gene contains a part of solution. It determines the solution.The genes are either single bits or short blocks of adjacent bits that encode a particular element of the candidate solution. Individual : Same as chromosome. Population : Number of individuals present with the same length of chromosome. 5
  • 6. Fitness : The value assigned to an individual based on how far or close insividual is from the solution;greater the tness value better the solution it contains. Fitness Function: The function that assigned to Fitness value to the individual.It is problem specic. Breeding: Taking two t individual and then intermingling there chromosome to create new two individual. Mutation : Changing a random gene in an individual. Selection : Selecting individuals for creating the next generation. Working principle Genetic algorithm begins with a set of solution(represented by chromosome) called the population.Solution from one population are taken and used to form a new population.This motivated by the possibility that the new population will better than the old one. Solutions are selected according to their tness to new solution (Ospring); more suitable they are ,more chance they have to reduce. Procedure Of Genetic Algorithms Fig shows the ow chart for GAs. A GA for a particular problem must have the following ve components. 6
  • 7. 1. A genetic representation for the potential solutions to the problem. 2. A way to create an initial population of potential solution. 3. An evaluvation function that plays the role of the environment,rating so- lution in terms of their 'Fitness'. 4. Genetic operators that alter the composition of the ospring. 5. Values for the various parameters that the genetic algorithm uses. Outline of genetic algorithm 1. [Start] Generate random population of 'n' chromosome (i.e. sutable for solution for the problem) 2. [Fitness] Evaluvate the tness f(x) for each chromosome x in the popu- lation. 3. [New Population] Create new population by repeating the following steps untill new population is complete. (a) [Selection] Select two parent chromosomes from a population according to their tness(better the tness Bigger the chance to be selected). (b) [Crossover] with a crossover probability,cross over the parents to form new ospring(children). If no crossover was per formed,ospring is the exact copy of parents. (c) [Mutation] with a mutation probability,mutate new ospring at each locus (position in chromosome) (d) [Accepting] Place new ospring in the new population. 4. [Replace] use new generated population for furthur run of the algorithm. 5. [Test] If the end condition is satised ,stop,and run the best solution in the current population. 6. [Loop] Go to step 2 Genetic Representation Representation or Encoding the problem in hand when applying a GA is a vital task. Encoding can be dened as the chromosomal represnta- tion ofthe problem.When GA was initially introduce the binary string encoding technique was used. when it came to industrial other scientic applications 7
  • 8. applying GA directly using binary string became a problem. Because it was not a natural coding. Chromosomes could be: Bit strings (0101 ... 1100) Real numbers (43.2 -33.1 ... 0.0 89.2) Permutations of element (E11 E3 E7 ... E1 E15) Lists of rules (R1 R2 R3 ... R22 R23) Program elements (genetic programming) ... any data structure ... Binary Encoding Binary representation: Here encoding is done using sequence of 1's and 0's. The length of the string is determined by the precision desired for the solution. Any integer can be converted in to binary by dividing it by 2. Examle: A Gene represnet some data (Eye color,hair color,..) A chromosome is an array of genes. In binary form Gene (11100010) Chromosome Gene 1 (11000010) Gene2 (00001110) Gene3 (001111010) Gene 4 (10100011) A chromosome should in some way contains information about solution which it represents; It thus requires encoding. The most popular way of en- coding is binary string like Chromosome 1 : 1101100100110110 Chromosome 2 : 1101111000011110 Each bit in the string represent some characterestics of the solution. Binary encoding gives many possible chromosomes even with a small number of alleles ie possible settings for a trait. This encoding is often not natural for many prob- lems and sometimes corrections must be made afte crossover and/or mutation Two variable function represented by 4 bit string for each variable. Let two variables x1 , x2 as (1011 011). Every variable will have both upper and lower limits as Xi L ≤ Xi ≤ Xi U .Because 4-bit string can represent integers from 0 to 15, So (0000 0000) and (1111 1111) represent the points for x1 , x2 as (X1 L ,X2 L ) and (X1 U ,X2 U ) respectively. Thus, an n-bit string can represent integers from 0 to 2 n − 1 , i.e.2 n integers 8
  • 9. Consider a 4-bit string (0111) The decoded value is equal to 23 ∗ 3+22 ∗ 1+21 ∗ 1+20 ∗ 1=7 Xi L and Xi U correspond to (0000) and (1111) the equivalent value for any 4-bit string can be obtained as Xi=XL i (XU i −XL i ) 2ni−1 * (decodevalueofstring) For example variable Xi.Let XL i =2 and XU i =17 nd what value the 4-bit string Xi=(1010) would represent. Decoded value Si=1010=23 ∗ 1+22 ∗ 0+21 ∗ 1+20 ∗ 8 = 10 then Xi = 2+(17−2) (24−1) ∗ 10 = 12 Permutation encoding Permutation encoding can be used in ordering problems, such as TSP or task ordering problem. 1. In permutation encoding, every chromosome is a string of numbers that represent a position in a sequence. Chromosome A 1 5 3 2 6 4 7 9 8 Chromosome B 8 5 6 7 2 3 1 4 9 2. Permutation encoding is useful for ordering problems. For some problems, crossover and mutation corrections must be made to leave the chromosome consistent. Example TSP are cities and given distance between them. Traveling salesman has to visit all of them, but the does not want to travel more than necessary. Find a sequence of cities with a minimal traveled distance. Here, encoded chrom- osome describe the order of cities the salesman visits. 9
  • 10. Tree Encoding In tree encoding, every chromosome is a tree of some objects, such as func- tions or commands in programming language. Tree encoding is useful for evolving programs or any other structures that can be encoded in trees. The crossover and mutation can be done relatively easy way. Genetic operators Genetic operators used in genetic algorithm to maintain genetic diversity. Genetic diversity or validation is a necessity for the process of evaluvat- ion.Genetic operators are analogous to those which occur in the natural world. There is mainly three Genetic operators. • Reproduction (or Selection) • Crossover (or Recombination) • Mutation SELECTION Selection will allow selection rules and random behavior to select next popu lation.In selection the parents must be selected based on their tness.The individuals with a higher tness must have a higher probability of having ospring.Fitness value F is calculated The probability of selection of ith 10
  • 11. chromosome is done Pi= Fi j popsize −Fj The cumulative frequency qi= i j=iPj Generate a random number r from the range [0, z] If r q1, select the rst chromosome, otherwise select chromosome from 2 to pop_size There are several methods for selection. • Roulette-wheel selection. • Tournament selection. • Rank selection. • Steady-state selection. • Boltzmann selection. • Scaling selection Roulette-Wheel Selection Roulette: the classical selection operator for generational GA as described by Goldberg. Each member of the pool is assigned space on a roulette wheel proportional to its tness. The members with the greatest tness have the highest probability of selection. This selection technique works only for a GA which maximizes its objective function. Concept : the chance of an individual's being selected is proportional to its tness, greater or less than its competitors' tness. The Probability of parenthood is proportional to tness.The wheel is spun until two parents are selected.The two parents create one ospring.The proce ss is repeated to create a new population for the next generation. Roulette wheel selection has problems if the tness changes by orders of magnitude.If two individuals have a much higher tness, they could be the parents for every child in the next generation. 11
  • 12. Reason Not to Use the Roulette Wheel • If the tness value for all individuals is very close, the parents will be chosen with equal probability, and the function will cease to optimize. • Roulette selection is very sensitive to the form of the tness function and generally requires modications to work at all. Boltzmann Selection Simulated annealing is a method used to minimize or maximize a function. This method simulates the process of slow cooling of molten metal to achieve the minimum function value in a minimum function value in a minimization prob- lem.The cooling phenomena is simulated by controlling a temprature like pa- rameter introduced with the concept of Boltzmann probability distribution.The system in thermal equilibrium at a temperature T has its energy distribution based on the probability dened by P(E) =exp(-E/KT) where K is Boltzzmann constant This expression suggest that a system at a higher temperature has almost uniform probability at any energy state, but at lower temperature it has a small probability of being at ahigher energy state. Crossover Two parents produce two ospring .There is a chance that the chromosomes of the two parents are copied unmodied as ospring .There is a chance that the chromosomes of the two parents are randomly recombined (crossover) to form ospring .Generally the chance of crossover is between 0.6 and 1.0 .Crossover is usually the primary operator with mutation serving only as a mechanism to 12
  • 13. introduce diversity in the population. However, there are a number of crossover operators that have been used on binary and real-coded GAs.There are various crossover 1.Single-point Crossover 2.Double-point Crossover 3.Multiple-point Crossover 4.Uniform Crossover 5.Matrix Crossover 6.Random Crossover 7.Permutation-based Crossover 8.Partial mapped crossover 9.Ordered crossover(OX) 10.Position based crossover(OX) 11.Ordered based crossover 12.Cycle crossover(CX) 13.Sub-tour exchange crossover 14.Heuristic crossover 1.Single-point Crossover Given two parents, single-point crossover will generate a cut-point and re- combines the rst part of rst parent with the second part of the second parent to create one ospring. Single-point crossover then recombines the second part of the rst parent with the rst part of the second parent to create a second ospring. + Parents 0 1 1 1 1 1 1 1 1 1 Children 1 1 1 1 1 0 1 1 1 1 2.Double-point Crossover Two-Point crossover is very similar to single-point crossover except that two cut-points are generated instead of one. 13
  • 14. Parents 0 1 1 0 1 1 1 1 1 1 Children 1 1 1 1 1 0 1 1 0 1 3.Multiple-point Crossover In the multiple point crossover operation,the crossover opertion takes placeat even and odd numbered sites.In the case of even numbered crossover sites the chromosomal string is viewed as a ring with no begning and end and crossover site are selected around the circle at random.In case of odd numbered crossover site,the crossover point is choosen at the begining of the string. Parents 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 Children 0 0 1 0 0 1 1 1 0 0 1 1 0 1 1 0 0 0 1 1 4.Uniform Crossover In Uniform Crossover, a value of the rst parent's gene is assigned to the rst ospring and the value of the second parent's gene is to the second ospring with probability 0.5. With probability 0.5 the value of the rst parent's gene is assigned to the second ospring and the value of the second parent's gene is assigned to the rst ospring. Parents 1 1 1 0 0 1 0 0 1 1 0 1 0 1 1 1 Children 1 1 0 0 0 1 0 1 1 1 1 1 0 1 1 0 5.Matrix Crossover It is used in two dimensional array .Here the rows columns of the crossover sites select randomly.Thus two crossover sites forms a three layer matrix.Then we can select any region. Parents 1 0 1 1 0 1 0 1 1 0 0 1 1 0 0 1 1 0 14
  • 15. Children 0 0 1 1 0 1 0 1 0 1 0 1 1 0 0 1 1 1 6..Random Crossover Random Crossover creates ospring randomly within a hyper-rectangle de ned by the parent points. There are two type : i).Flat crossover: An ospring is produced by uniformly picking a value for each gene from a range of values correspond to the parent node . ii).Blend crossover: which incorporates more variance by picking values that lies between 2 points containing the 2 parents . 7.Permutation-based Crossover Permutation-based Crossover is developed mainly for combinatorial opti mization problem such as the travelling salesman problem,machine schedu ling,resource allocation etc.These operators function based on two approaches: i).Canonical approach :It is the extention of Double-point Crossover Mul tiple point Crossover of binary string .it is based on blind random mechanism. There is no guarantee that an ospring produced by this method is better than its parents . ii).Heuristic approach :Application of Heuristic in crossover tends to generate improved ospring . 8.Partial mapped crossover Goldberg and Lingle proposed this method. Adopts a repairing procedure known as relationship mapping in order to prevent illegal duplication of genes. Extension to double point crossover. Figure shows the two substrings are se- lected and exchanged between the parent chromosomes. Parents 1 2 4 6 5 3 9 8 7 3 8 5 1 2 9 6 4 7 Mapping Relationship 93 254 16 Children 6 4 5 1 2 9 3 8 7 9 8 4 6 5 3 1 2 7 15
  • 16. 9.Ordered crossover(OX) Davis proposed this method,which is an extension of partial mapped crosover with a dierent repairing procedure.Asubstring is selected reandomly from one of the parents and copied exactly to the same positions in the child string. Parents 4 5 2 1 3 7 8 6 9 5 7 3 2 1 4 6 8 9 Children 5 2 4 1 3 7 8 6 9 5 3 7 2 1 4 6 8 9 12.Cycle crossover(CX) Oliver,Smith and Holland proposed this crossover. The nodes to be selected from parent are dened by a cycle according to the corresponding position be- tween the parents. Parents 1 Parent 2 Child1 Parent 2 Parent 1 Child2 4 5 2 1 3 8 7 6 9 5 7 3 2 1 4 6 8 9 4 5 3 2 1 8 7 6 9 5 7 3 2 1 4 6 8 9 4 5 2 1 3 8 7 6 9 5 7 2 1 3 4 6 8 9 Cycle=457684 Cycle=548675 14.Heuristic crossover A random node is picked at the start then its shortest edge that does not lesd to a cycle is selected. If two edges lead to a cycle,then a random node is picked up again and the above process is continued till all the nodes are checked and a new ospring is produced. This approavh is very ecient compared to other permutation operator. MutationMutation is a genetic operator used to maintain genetic diversity from one generation of a population of genetic algorithm chromosomes to the next. There are three type of mutation: 1.Boundary Mutation 2.Uniform Mutation 3.Non-uniform Mutation 16
  • 17. 1.Uniform Mutation A mutation operator that replaces the value of the chosen gene with a uni- form random value selected between the user-specied upper and lower bounds . This mutation operator can only be used for integer and oat genes. Let consider a chromosome x t =[x1, x2, ......xm].A random number is selected such that ,k [1,n] then ospring xt+1 = x1, xk, ..............xm is produced. Xk is the random value generated from range [xl k,xu m] 2.Boundary Mutation This mutation operator replaces the genome with either lower or upper bound randomly. This can be used for integer and oat genes. The replacement of gene X'k by either x l k,x u m 3.Non-uniform Mutation The probability that amount of mutation will go to 0 with the next gener- ation is increased by using non-uniform mutation operator. It keeps the popu- lation from stagnating in the early stages of the evolution. It tunes solution in later stages of evolution. This mutation operator can only be used for integer and oat genes. X'L= XL + (t, Xu L − Xk ),if the random digit is 0 X'L= Xt − (t, Xt − XL k ),if the random digit is 1 One's Complement Operator This Operator is Known as unary operator(~). It causes the bits of its operand to be inverted. 1 0 1 1 0 0 1 1 output 0 1 0 0 1 1 0 0 Logical bitwise operator This operator can furthur classied in to 1. bitwise AND 2. bitwise exclusive(X)-OR 3. bitwise OR. 17
  • 18. Truth table of the bitwise operators:
X Y | AND XOR OR
0 0 |  0   0   0
0 1 |  0   1   1
1 0 |  0   1   1
1 1 |  1   0   1
AND operator example:
px  0 1 0 1 1 1 0
qx  1 1 1 1 0 1 1
Cxy 0 1 0 1 0 1 0
OR operator example:
px  0 1 0 1 1 1 0
qx  0 1 1 1 0 1 1
Cxy 0 1 1 1 1 1 1
Shift operators
There are 2 types of shift operator:
1. Shift left (<<) operator
2. Shift right (>>) operator
Shift left example:  1 0 0 1 1 1 becomes 0 0 1 1 1 0
Shift right example: 1 0 0 1 1 1 becomes 0 1 0 0 1 1
Masking operator
This transforms a given bit pattern into another form with the help of a logical bitwise operation; a mask operand is used.
px   1 0 1 0 1 1 1 0
mask 0 1 1 1 0 0 1 1
o/p  0 0 1 0 0 0 1 0
Mutation schemes are:
1. Inversion
2. Insertion
3. Displacement
4. Reciprocal exchange mutation
Insertion: a node is selected at random and inserted at a random position.
px 2 5 4 9 6 8 1 3
cx 2 6 4 9 5 8 1 3
Displacement: a substring is selected at random and inserted at a random position.
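The operator examples above can be checked directly in Python. Python integers are unbounded, so the complement and left shift must be masked to the pattern width (the helper names are my own):

```python
def ones_complement(x, bits):
    """Invert every bit of an unsigned pattern of the given width."""
    return ~x & ((1 << bits) - 1)

def shift_left(x, bits):
    """Logical shift left by one place within a fixed word width."""
    return (x << 1) & ((1 << bits) - 1)

# One's complement example from the text: 10110011 -> 01001100
assert ones_complement(0b10110011, 8) == 0b01001100
# AND example: px & qx
assert 0b0101110 & 0b1111011 == 0b0101010
# OR example: px | qx
assert 0b0101110 | 0b0111011 == 0b0111111
# Shift examples on the 6-bit pattern 100111
assert shift_left(0b100111, 6) == 0b001110
assert 0b100111 >> 1 == 0b010011
# Masking: AND with a mask keeps only the bits where the mask is 1
assert 0b10101110 & 0b01110011 == 0b00100010
```

Each assertion reproduces one of the worked bit patterns from the slide.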
  • 19. px 2 5 4 9 6 8 1 3
cx 2 6 4 5 9 8 1 3
Inversion: two positions are selected at random, and the substring between these two positions is inverted.
px 2 5 4 9 6 8 1 3
cx 2 5 8 6 9 4 1 3
Reciprocal exchange mutation: two positions are selected at random, and the nodes at these positions are swapped.
px 2 5 4 9 6 8 1 3
cx 2 6 4 9 5 8 1 3
Applications of Genetic Algorithms
1. Traveling salesman problem
2. Decoding a secret message
3. Robot trajectory planning
4. Optimizing artificial neural nets
1. Traveling Salesman Problem
The goal is to find the shortest route for a salesperson to take in visiting N cities. The cost function for the simplest form of the problem is just the distance traveled by the salesperson for the given ordering (x_n, y_n), n = 1, ..., N, where (x_n, y_n) are the coordinates of the nth city visited:
cost = Σ_{n=0}^{N} √((x_n − x_{n+1})² + (y_n − y_{n+1})²)
Let N = 13 cities and put the starting and ending point at the origin, so (x_0, y_0) = (x_{N+1}, y_{N+1}) = (0, 0). There are a total of 13!/2 ≈ 3.1135 × 10^9 possible orderings to check. Assume that all the cities lie in a rectangle and the minimum distance is 14. The GA parameters are:
N_pop = 400, N_keep = 200, mutation rate = 0.04
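The cost function above (a sum of Euclidean legs, with the route closed at the origin) can be sketched as follows; this is a minimal illustration and the names are my own:

```python
import math

def route_cost(cities):
    """Total tour length for a route that starts and ends at the
    origin: the sum of Euclidean distances between consecutive stops."""
    tour = [(0.0, 0.0)] + list(cities) + [(0.0, 0.0)]
    return sum(math.dist(a, b) for a, b in zip(tour, tour[1:]))
```

For a single city at (3, 4), the cost is 5 out plus 5 back, i.e. 10; a GA would evaluate this function for every candidate ordering of the cities.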
  • 20. Here we use permutation encoding, e.g. [2 5 3 1 4 6].
The crossover operator is a variation of cyclic crossover:
parent 1: [1 5 3 6 4 2]
parent 2: [4 2 1 3 6 5]
Randomly select a location to exchange (here the first position):
offspring 1: [4 5 3 6 4 2]    offspring 2: [1 2 1 3 6 5]
Each exchange creates a duplicated gene, so the site of the duplicate is exchanged next:
offspring 1: [4 5 3 6 6 2]    offspring 2: [1 2 1 3 4 5]
offspring 1: [4 5 3 3 6 2]    offspring 2: [1 2 1 6 4 5]
offspring 1: [4 5 1 3 6 2]    offspring 2: [1 2 3 6 4 5]
The process iterates until we return to the first exchanged site. The mutation operator randomly chooses a string, selects two random sites within that string, and exchanges the integers at those sites.
Convergence of the genetic algorithm: if there is no obvious minimum path, note that the optimal solution will have no crossing paths, so plot the solution and check.
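The exchange-and-repair iteration above can be sketched as follows (a minimal illustration of the variant described, assuming the repair always swaps at the position of the gene duplicated in the first child until both children are valid permutations again):

```python
def cyclic_crossover(p1, p2, pos):
    """Exchange the genes at `pos`, then keep swapping at the position
    of the duplicated gene until the cycle closes and both children
    are valid permutations again."""
    c1, c2 = list(p1), list(p2)
    while True:
        c1[pos], c2[pos] = c2[pos], c1[pos]
        # find the other position in c1 that now holds the same gene
        dup = [i for i, g in enumerate(c1) if g == c1[pos] and i != pos]
        if not dup:            # no duplicate left: the cycle has closed
            return c1, c2
        pos = dup[0]
```

With the slide's parents [1 5 3 6 4 2] and [4 2 1 3 6 5], exchanging at the first position yields the final offspring [4 5 1 3 6 2] and [1 2 3 6 4 5] shown above.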
  • 21. 2. Decoding A Secret Message
This application uses a GA to break a secret code. A message consisting of letters and spaces is encoded by randomly changing one letter to another letter. If the message uses every letter in the alphabet plus a space, then there are a total of 27! possible codes, with only one being correct. If the message uses only S symbols, then there are (27 − S)! possible encodings that work.
A chromosome consists of 27 genes with unique values from 1 to 27. A 1 corresponds to a space, and 2 through 27 correspond to the letters of the alphabet. Letters and spaces in the message receive the appropriate numeric values. The cost is calculated by subtracting the guess of the message from the known message, taking the absolute value, and summing:
cost = Σ_{n=1}^{N} |message(n) − guess(n)|
4. Optimizing Artificial Neural Nets
In the training process of a neural network, the weights and bias values are optimized to produce the desired output. Here a GA is used to compute the optimum weights and biases. To do this, we use a two-layer neural network with log-sigmoid transfer functions. The goal is to compute the optimum weights and biases of the ANN approximating
f(x) = (12/x²) cos(x) + 1/x  for 1 ≤ x ≤ 5
using a GA.
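The decoding cost above (sum of absolute differences between the numeric codes of the known message and the current guess) can be sketched as (names are my own):

```python
def decode_cost(message, guess):
    """Cost from the text: sum over all positions of the absolute
    difference between the known message codes and the guessed codes.
    A cost of 0 means the guess reproduces the message exactly."""
    return sum(abs(m - g) for m, g in zip(message, guess))
```

For example, a perfect guess has cost 0, and every mis-decoded symbol adds its numeric distance to the total, so the GA minimizes this toward 0.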
  • 22. The GA chromosome is made up of potential weights and biases. The GA cost function computes the mean square difference between the current guess of the function and the exact function evaluated at specific points in x. The function computed from the neural network with GA hybrid training matches the known curve quite well.
Other Applications
• Locating an emergency response unit
• Stealth design
• Building dynamic inverse models
• Combining GAs with simulations: air pollution receptor modeling
• Antenna array design
  • 23. Support Vector Machine
The support vector machine (SVM) is one of the most well-studied and widely used learning algorithms for binary classification. Extensions of SVMs exist for a variety of other learning problems, including regression, multiclass classification, ordinal regression, ranking, structured prediction, and many others.
Similar to perceptrons, SVMs aim to find a hyperplane that linearly separates data points belonging to different classes. In addition, SVMs aim to find the hyperplane that is least likely to overfit the training data.
Support vector machines are based on the concept of decision planes that define decision boundaries. A decision plane is one that separates a set of objects having different class memberships. As a schematic example, suppose the objects belong either to class GREEN or to class RED. The separating line defines a boundary: on the right side of it all objects are GREEN, and to the left of it all objects are RED. Any new object (white circle) falling to the right is labeled, i.e., classified, as GREEN (or classified as RED should it fall to the left of the separating line).
We are given a set of n points (vectors), each a vector of length m, and each belonging to one of two classes, which we label +1 and −1. So our training set is:
(x_1, y_1), (x_2, y_2), ..., (x_n, y_n),  ∀i: x_i ∈ R^m, y_i ∈ {+1, −1}
We want to find a separating hyperplane w·x + b = 0 that separates these points into the two classes: the positives (class +1) and the negatives (class −1), assuming that they are linearly separable.
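The decision rule induced by a separating hyperplane w·x + b = 0 can be sketched as follows; this is a minimal illustration with hypothetical example values, not the SVM training procedure itself:

```python
import numpy as np

def linear_classifier(w, b):
    """Return the decision function f(x) = sign(w . x + b) for the
    separating hyperplane w . x + b = 0."""
    def f(x):
        return 1 if np.dot(w, x) + b >= 0 else -1
    return f

# Hypothetical hyperplane in R^2: x1 + x2 - 1 = 0
f = linear_classifier(np.array([1.0, 1.0]), -1.0)
```

Points on the positive side of the hyperplane are labeled +1, points on the negative side −1; the SVM sections below are about choosing the best such w and b.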
  • 24. Separating Hyperplane
Suppose we choose a hyperplane that is close to some sample x_i. Now suppose we have a new point x that should be in class −1 and is close to x_i. Using our classification function f(x), this point is misclassified: poor generalization. So the hyperplane should be as far as possible from any sample point. This way, new data that is close to the old samples will be classified correctly, which gives good generalization.
  • 25. Linearly Separable Data - Hard Margin SVM
The SVM idea is to maximize the distance between the hyperplane and the closest sample point. In the optimal hyperplane, the distance to the closest negative point equals the distance to the closest positive point. The SVM's goal is to maximize the margin, which is twice the distance d between the separating hyperplane and the closest sample.
Why is it the best?
• It is robust to outliers, as we saw, and thus has strong generalization ability.
• It has proved to have better performance on test data, both in practice and in theory.
Support vectors are the samples closest to the separating hyperplane.
  • 26. A training sample S = ((x_1, y_1), ..., (x_m, y_m)) ∈ (R^n × {−1, 1})^m is said to be linearly separable if there exists a linear classifier h(x) = sign(w·x + b) which classifies all examples in S correctly, i.e., for which y_i(w·x_i + b) > 0 for all i ∈ {1, ..., m}. For example, the figure (left) shows a training sample in R² that is linearly separable, together with two possible linear classifiers that separate the data correctly.
Fig: Left: a linearly separable data set, with two possible linear classifiers that separate the data. Blue circles represent class label +1 and red crosses −1; the arrow represents the direction of positive classification. Right: the same data set and classifiers, with the margin of separation shown.
Although both classifiers separate the data, the distance or margin with which the separation is achieved is different; this is shown in the figure (right). The SVM algorithm selects the maximum-margin classifier, i.e., the linear classifier that separates the training data with the largest margin. More precisely, define the (geometric) margin of a linear classifier h(x) = sign(w·x + b) on an example (x_i, y_i) ∈ R^n × {−1, 1} as
γ_i = y_i(w·x_i + b) / ||w||
where ||w|| denotes the Euclidean norm of w. (Note that this is the distance of x_i from the hyperplane w·x + b = 0.)
Let us look at our decision boundary. The separating hyperplane equation is w^T x + b = 0, where w ∈ R^m, x ∈ R^m, b ∈ R. Note that w/||w|| is orthogonal to the
  • 27. separating hyperplane and its length is 1. Let γ_i be the distance between the hyperplane and some training example x_i; that is, γ_i is the length of the segment from the projection point p to x_i.
Learning a linear SVM
It is convenient to represent the classes by +1 and −1, using y = +1 if w·x + b > 0 and y = −1 if w·x + b < 0. w can be rescaled such that for all points x lying on the respective margin boundaries it holds that w·x + b = 1 or w·x + b = −1. These points are called the support vectors. The task of learning a linear SVM consists of estimating the parameters w and b. The first criterion is that all points in the training data must be classified correctly:
w·x_i + b ≥ 1 if y_i = +1
w·x_i + b ≤ −1 if y_i = −1
This can be rewritten as:
y_i(w·x_i + b) ≥ 1 for 1 ≤ i ≤ N
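The final combined constraint can be checked numerically for a candidate (w, b). A minimal sketch with NumPy on toy linearly separable data (the data and hyperplane values are hypothetical examples, not from the text):

```python
import numpy as np

def margin_constraints_satisfied(w, b, X, y):
    """Check the hard-margin condition y_i (w . x_i + b) >= 1 for
    every training point, as in the final inequality above."""
    return bool(np.all(y * (X @ w + b) >= 1))

# Toy linearly separable data in R^2
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = np.array([0.5, 0.5])   # candidate hyperplane parameters
b = 0.0
```

A point with 0 < y_i(w·x_i + b) < 1 is correctly classified but lies inside the margin, so it still violates the hard-margin constraint; the SVM optimizer searches for the (w, b) that satisfies all constraints with the largest margin.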