DESIGN AND ANALYSIS
OF ALGORITHMS
DYNAMIC PROGRAMMING TECHNIQUE
• Coin changing problem
• Computing a Binomial Coefficient
• Floyd's algorithm
• Multi stage graph
• Optimal Binary Search Trees
• Knapsack Problem and Memory functions.
Contents
Dynamic Programming is a general algorithm design technique
for solving problems defined by recurrences with overlapping
subproblems
• Invented by American mathematician Richard Bellman in the 1950s to
solve optimization problems and later assimilated by CS
• “Programming” here means “planning”
• Main idea:
-set up a recurrence relating a solution to a larger instance to
solutions of some smaller instances
- solve smaller instances once
-record solutions in a table
-extract solution to the initial instance from that table
Dynamic Programming
Computing the nth Fibonacci number using bottom-up iteration and recording
results:
F(0) = 0
F(1) = 1
F(2) = 1+0 = 1
…
F(n-2) =
F(n-1) =
F(n) = F(n-1) + F(n-2)
Efficiency:
- time: Θ(n) additions
- space: Θ(n), reducible to Θ(1) by keeping only the last two values
Fibonacci numbers
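The bottom-up scheme above can be sketched in Python (a minimal illustration, not from the slides):

```python
def fib(n):
    """Compute F(n) bottom-up, recording only the last two values."""
    if n < 2:
        return n
    prev, curr = 0, 1          # F(0), F(1)
    for _ in range(2, n + 1):  # fill F(2) .. F(n) in order
        prev, curr = curr, prev + curr
    return curr
```

Each value is computed once, so the running time is linear in n, in contrast to the exponential top-down recursion without memoization.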
• Input: A list of integers representing coin
denominations, plus another positive integer
representing an amount of money.
• Output: A minimal collection of coins of the given
denominations which sum to the given amount.
• Give change for amount n using the minimum
number of coins of denominations d1 < d2 < … < dm.
Coin-changing problem
Example : amount n = 6 and coin
denominations 1, 3, and 4.
Coin-changing problem
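A minimal DP sketch for this problem (the function name is illustrative): F[x] is the minimum number of coins adding up to x, with F[0] = 0 and F[x] = 1 + min{F[x−d]} over denominations d ≤ x.

```python
def min_coins(amount, denoms):
    """F[x] = minimum number of coins of the given denominations summing to x."""
    INF = float("inf")
    F = [0] + [INF] * amount
    for x in range(1, amount + 1):
        for d in denoms:
            if d <= x and F[x - d] + 1 < F[x]:
                F[x] = F[x - d] + 1
    return F[amount]
```

For the slide's example, amount 6 with denominations {1, 3, 4} needs two coins (3 + 3).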
The binomial coefficient, denoted C(n,k), is the number of
combinations of k elements chosen from an n-element set (0 ≤ k ≤ n).
The binomial formula is:
(a+b)^n = C(n,0)a^n + … + C(n,i)a^(n-i)b^i + … + C(n,n)b^n
C(n,k) = C(n-1,k-1) + C(n-1,k) for n > k > 0
and
C(n,0) = C(n,n) = 1
COMPUTING A BINOMIAL COEFFICIENT
Analysis:
Time efficiency: Θ(nk)
Space efficiency: Θ(nk)
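The recurrence can be tabulated row by row, as in Pascal's triangle (a sketch, assuming a table of size (n+1)×(k+1)):

```python
def binomial(n, k):
    """C[i][j] = C[i-1][j-1] + C[i-1][j], with C[i][0] = C[i][i] = 1."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            C[i][j] = 1 if j == 0 or j == i else C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```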
Given n items of
integer weights: w1 w2 … wn
values: v1 v2 … vn
a knapsack of integer capacity W
find most valuable subset of the items that fit into the knapsack
Recursive solution?
What is smaller problem?
How to use solution to smaller in solution to larger
Table?
Order to solve?
Initial conditions?
Knapsack Problem by DP
Example: Knapsack of capacity W = 5
item weight value
1 2 $12
2 1 $10
3 3 $20
4 2 $15
Knapsack Problem by DP (example)
Consider the instance defined by the first i items and capacity j (j ≤ W).
Let V[i,j] be the optimal value of such an instance. Then
V[i,j] = max {V[i-1,j], vi + V[i-1,j-wi]} if j - wi ≥ 0
V[i,j] = V[i-1,j] if j - wi < 0
Initial conditions: V[0,j] = 0 and V[i,0] = 0
Knapsack Problem by DP
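The recurrence translates directly into a table-filling routine (a sketch; names are illustrative):

```python
def knapsack(weights, values, W):
    """V[i][j] = best value achievable with the first i items and capacity j."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            V[i][j] = V[i - 1][j]                       # skip item i
            if weights[i - 1] <= j:                     # or take it, if it fits
                V[i][j] = max(V[i][j],
                              values[i - 1] + V[i - 1][j - weights[i - 1]])
    return V[n][W]
```

On the earlier example (W = 5, weights 2, 1, 3, 2 and values $12, $10, $20, $15) the optimal value is $37, from items 1, 2, and 4.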
Problem: Given n keys a1 < … < an and probabilities p1, …, pn
of searching for them, find a BST with a minimum
average number of comparisons in a successful search.
Since the total number of BSTs with n nodes is given by
C(2n,n)/(n+1), which grows exponentially, brute force is
hopeless.
Optimal Binary Search Trees
Example: What is an optimal BST for keys A, B, C, and D with
search probabilities 0.1, 0.2, 0.4, and 0.3,
respectively?
Optimal Binary Search Trees
Example: key A B C D
probability 0.1 0.2 0.4 0.3
B
A
C
D
Optimal BST
Expected number of comparisons for optimal BST:
1*0.4 + 2*(0.2+0.3) + 3*(0.1) = 0.4+1.0+0.3=1.7
Non-optimal BST – Swap A and B:
1*0.4 + 2*(0.1+0.3) + 3*(0.2) = 0.4+0.8+0.6=1.8
Optimal Binary Search Trees
B
A
C
D
Example: key A B C D
probability 0.1 0.2 0.4 0.3
After simplifications, we obtain the recurrence for C[i,j]:
C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + ∑ ps (s = i..j) for 1 ≤ i ≤ j ≤ n
C[i,i] = pi for 1 ≤ i ≤ n
DP for Optimal BST Problem (cont.)
(Table diagram: C[i,j] is kept in a table with rows i = 1..n+1 and columns j = 0..n; the diagonal entries C[i,i-1] = 0 and C[i,i] = pi are filled first, and the table is completed diagonal by diagonal up to the goal entry C[1,n].)
The left table is filled using the recurrence
C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + ∑ ps (s = i..j), C[i,i] = pi
The right table saves the tree roots, i.e., the k's that give the
minima.
Example: key A B C D
probability 0.1 0.2 0.4 0.3
Cost table C[i,j]:
        j=0   1    2    3    4
  i=1    0   .1   .4  1.1  1.7
  i=2         0   .2   .8  1.4
  i=3              0   .4  1.0
  i=4                   0   .3
  i=5                        0

Root table R[i,j]:
        j=1   2    3    4
  i=1    1    2    3    3
  i=2         2    3    3
  i=3              3    3
  i=4                   4

Optimal tree: C is the root; its left child is B, whose left child is A; its right child is D.
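The cost table can be reproduced with a direct implementation of the recurrence (a sketch; only the minimum cost is computed here, not the root table):

```python
def optimal_bst_cost(p):
    """C[i][j] = min over roots k in i..j of C[i][k-1] + C[k+1][j] + sum(p[i..j])."""
    n = len(p)
    prob = [0.0] + list(p)                      # 1-based key probabilities
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    for length in range(1, n + 1):              # subproblem size j - i + 1
        for i in range(1, n - length + 2):
            j = i + length - 1
            C[i][j] = (min(C[i][k - 1] + C[k + 1][j] for k in range(i, j + 1))
                       + sum(prob[i:j + 1]))
    return C[1][n]
```

For probabilities 0.1, 0.2, 0.4, 0.3 this returns 1.7, matching the table.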
Time efficiency: Θ(n^3), but can be reduced to Θ(n^2) by taking
advantage of monotonicity of entries in the
root table, i.e., R[i,j] is always in the range
between R[i,j-1] and R[i+1,j]
Space efficiency: Θ(n^2)
Method can be expanded to include unsuccessful searches
Analysis DP for Optimal BST Problem
Problem: In a weighted (di)graph, find shortest paths between
every pair of vertices
Same idea: construct solution through series of matrices D(0), …,
D (n) using increasing subsets of the vertices allowed
as intermediate
Example:
Floyd’s Algorithm: All pairs shortest paths
(Figure: an example weighted digraph on four vertices.)
On the k-th iteration, the algorithm determines shortest paths
between every pair of vertices i, j that use only vertices among
1,…,k as intermediate
D^(k)[i,j] = min {D^(k-1)[i,j], D^(k-1)[i,k] + D^(k-1)[k,j]}
Floyd’s Algorithm (matrix generation)
Floyd’s Algorithm (example)
D^(0) =
0   ∞   3   ∞
2   0   ∞   ∞
∞   7   0   1
6   ∞   ∞   0

D^(1) =
0   ∞   3   ∞
2   0   5   ∞
∞   7   0   1
6   ∞   9   0

D^(2) =
0   ∞   3   ∞
2   0   5   ∞
9   7   0   1
6   ∞   9   0

D^(3) =
0  10   3   4
2   0   5   6
9   7   0   1
6  16   9   0

D^(4) =
0  10   3   4
2   0   5   6
7   7   0   1
6  16   9   0
Floyd’s Algorithm (pseudocode and analysis)
Time efficiency: Θ(n3)
Space efficiency: Matrices can be written over their predecessors
Note: Shortest paths themselves can be found, too
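Since the pseudocode slide itself did not survive the export, here is a standard rendering of Floyd's algorithm (overwriting one matrix in place, as the space note suggests):

```python
INF = float("inf")

def floyd(W):
    """All-pairs shortest paths; pass k allows vertex k as an intermediate."""
    n = len(W)
    D = [row[:] for row in W]      # D starts as the weight matrix D^(0)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```

Applied to D^(0) from the example, it produces the final matrix D^(4) shown above.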
MULTISTAGE GRAPHS
• A multistage graph G = (V, E) is a directed graph in which
the vertices are partitioned into k ≥ 2 disjoint sets Vi, 1 ≤ i ≤ k.
• If <u,v> is an edge in E, then u ∈ Vi and v ∈ Vi+1 for some i,
1 ≤ i < k. The sets V1 and Vk are such that |V1| = |Vk| = 1.
• Let s and t, respectively, be the vertices in V1 and Vk.
• The multistage graph problem is to find a minimum-cost
path from s to t.
MULTISTAGE GRAPHS
MULTISTAGE GRAPHS ( Forward Approach)
MULTISTAGE GRAPHS (Backward approach)
MULTISTAGE GRAPHS( Forward Approach)
 Dynamic programming approach:
 Cost(1, S) = min{1+Cost(2, A),
2+Cost(2, B), 5+Cost(2, C)}
(Figure: source S has edges of cost 1, 2, and 5 to A, B, and C, respectively; each of A, B, C reaches the sink T with costs d(A, T), d(B, T), d(C, T).)
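The forward-approach recurrence can be sketched with memoized recursion; the terminal costs d(A,T), d(B,T), d(C,T) are not given on the slide, so the values 7, 4, 2 below are assumed purely for illustration:

```python
from functools import lru_cache

def multistage_min_cost(edges, s, t):
    """cost(u) = min over edges (u, v) of c(u, v) + cost(v); cost(t) = 0."""
    @lru_cache(maxsize=None)
    def cost(u):
        if u == t:
            return 0
        return min(c + cost(v) for v, c in edges[u])
    return cost(s)

# Assumed example weights: d(A,T) = 7, d(B,T) = 4, d(C,T) = 2
graph = {"S": [("A", 1), ("B", 2), ("C", 5)],
         "A": [("T", 7)], "B": [("T", 4)], "C": [("T", 2)]}
```

With these assumed weights, Cost(1, S) = min{1+7, 2+4, 5+2} = 6, via B.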
TRAVELLING SALESMAN PROBLEM
Distance matrix d:
      1   2   3   4
  1   0  10  15  20
  2   5   0   9  10
  3   6  13   0  12
  4   8   8   9   0
Cost(i, S) = min over j ∈ S of { d[i, j] + Cost(j, S - {j}) }
|S| = 0 (S = Φ)
Cost(2,Φ,1)=d(2,1)=5
Cost(3,Φ,1)=d(3,1)=6
Cost(4,Φ,1)=d(4,1)=8
|S| = 1
Cost(2,{3},1)=d[2,3]+Cost(3,Φ,1)=9+6=15
Cost(2,{4},1)=d[2,4]+Cost(4,Φ,1)=10+8=18
Cost(3,{2},1)=d[3,2]+Cost(2,Φ,1)=13+5=18
Cost(3,{4},1)=d[3,4]+Cost(4,Φ,1)=12+8=20
Cost(4,{3},1)=d[4,3]+Cost(3,Φ,1)=9+6=15
Cost(4,{2},1)=d[4,2]+Cost(2,Φ,1)=8+5=13
|S| = 2
Cost(2,{3,4},1)=min{d[2,3]+Cost(3,{4},1),
d[2,4]+Cost(4,{3},1)}
= min {9+20,10+15} = min{29,25} = 25
Cost(3,{2,4},1)=min{d[3,2]+Cost(2,{4},1),
d[3,4]+Cost(4,{2},1)}
=min {13+18,12+13} = min {31, 25} = 25
Cost(4,{2,3},1)=min{d[4,2]+Cost(2,{3},1),
d[4,3]+Cost(3,{2},1)}
=min {8+15,9+18} = min {23,27} =23
|S| = 3
Cost(1,{2,3,4},1)=min{ d[1,2]+Cost(2,{3,4},1),
d[1,3]+Cost(3,{2,4},1), d[1,4]+cost(4,{2,3},1)}
=min{10+25, 15+25, 20+23} =
min{35,40,43}=35
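The computation above is the Held–Karp dynamic program; a bitmask sketch over the 4-city matrix (city numbering shifted to 0-based):

```python
from functools import lru_cache

def tsp(d):
    """Cost(i, S) = min over j in S of d[i][j] + Cost(j, S - {j}),
    with S encoded as a bitmask; the tour starts and ends at city 0."""
    n = len(d)

    @lru_cache(maxsize=None)
    def cost(i, S):
        if S == 0:
            return d[i][0]                 # all cities visited: return home
        return min(d[i][j] + cost(j, S & ~(1 << j))
                   for j in range(1, n) if S & (1 << j))

    return cost(0, (1 << n) - 2)           # visit every city except 0
```

On the matrix above this yields the tour cost 35, matching the hand computation.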