Dynamic Programming
Module 4
Dynamic Programming
 Dynamic Programming is applied to optimization problems.
 Solves problems with overlapping subproblems.
 Each subproblem solved only once and result recorded in a table.
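As a quick illustration of these three points (a hypothetical example, not from the slides), a memoized Fibonacci in Python records each subproblem's result the first time it is computed:

from functools import lru_cache

@lru_cache(maxsize=None)          # the table that records each result
def fib(n):
    # each fib(k) is computed only once, then looked up from the cache
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))                    # 12586269025, via O(n) calls instead of exponentially many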
Divide and Conquer vs Dynamic Programming
Divide and Conquer:
 Subproblems are independent
 Recomputations are performed
 Less efficient due to rework
 Recursive method (top-down approach to problem solving)
 Splits its input at specific deterministic points, usually in the middle
Dynamic Programming:
 Subproblems are overlapping
 No need to recompute
 More efficient
 Iterative method (bottom-up approach to problem solving)
 Splits its input at every possible split point
Greedy vs Dynamic Programming
Greedy:
 Used to obtain an optimum solution
 Picks the optimum solution from a set of feasible solutions
 Optimum selection is made without revising previously generated selections
 Only one decision sequence is ever generated
 No guarantee of getting an optimum solution
Dynamic Programming:
 Also obtains an optimum solution
 No special set of feasible solutions
 Considers all possible decision sequences to obtain the optimal solution
 Many decision sequences may be generated
 Guarantees an optimal solution using the Principle of Optimality
In an optimal sequence of decisions or choices, each subsequence must also be optimal.
Principle of optimality
 Principle of optimality: Suppose that in solving a problem, we have to
make a sequence of decisions D1, D2, …, Dn. If this sequence is
optimal, then the last k decisions, 1 ≤ k ≤ n, must be optimal.
 e.g. the shortest path problem
If i, i1, i2, …, j is a shortest path from i to j, then i1, i2, …, j must be a
shortest path from i1 to j
Applications of Dynamic Programming
 Multistage Graphs
 Single Source Shortest Path
 All Pair Shortest Path
 Optimal Binary Search Tree
 0/1 Knapsack Problem
 Traveling Salesman Problem
 Longest Common Subsequence
 Flow Shop Scheduling
Multistage Graph
 Multistage Graph G=(V,E) is a directed graph whose vertices are
partitioned into k stages, k ≥ 2
 To find a shortest path from source to sink
 Applying the greedy method, the shortest path from S to T is 1 + 2 + 5 = 8
[Figure: a small example multistage graph on vertices S, A, B, T with weighted edges.]
The shortest path in Multistage Graphs
The greedy method cannot be applied to this case: it chooses (S, A, D, T) with
cost 1+4+18 = 23, while the real shortest path is (S, C, F, T) with cost
5+2+2 = 9. We obtain the minimum path at each stage by considering the path
length of each vertex obtained in the earlier stage.
[Figure: the example multistage graph. Edge weights, as used in the computations below: S→A = 1, S→B = 2, S→C = 5; A→D = 4, A→E = 11; B→D = 9, B→E = 5, B→F = 16; C→F = 2; D→T = 18, E→T = 13, F→T = 2.]
 Dynamic programming approach, the forward approach (backward reasoning):
 d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}
[Figure: S branches to A, B, C with edge costs 1, 2, 5 and remaining costs d(A,T), d(B,T), d(C,T); A branches to D, E with edge costs 4, 11 and remaining costs d(D,T), d(E,T).]
 d(A,T) = min{4+d(D,T), 11+d(E,T)} = min{4+18, 11+13} = 22.
 d(B, T) = min{9+d(D, T), 5+d(E, T), 16+d(F, T)}
= min{9+18, 5+13, 16+2} = 18.
 d(C, T) = min{ 2+d(F, T) } = 2+2 = 4
 d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}
= min{1+22, 2+18, 5+4} = 9.
 The above way of reasoning is called backward reasoning.
[Figure: B branches to D, E, F with edge costs 9, 5, 16 and remaining costs d(D,T), d(E,T), d(F,T).]
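The forward approach can be sketched in a few lines of Python. The adjacency-list dictionary is an assumed encoding of the example graph, with weights taken from the computations above:

from functools import lru_cache

# Example multistage graph: vertex -> list of (successor, edge weight).
graph = {
    'S': [('A', 1), ('B', 2), ('C', 5)],
    'A': [('D', 4), ('E', 11)],
    'B': [('D', 9), ('E', 5), ('F', 16)],
    'C': [('F', 2)],
    'D': [('T', 18)],
    'E': [('T', 13)],
    'F': [('T', 2)],
    'T': [],
}

@lru_cache(maxsize=None)
def d(v):
    """Forward approach: cheapest cost from v to the sink T (backward reasoning)."""
    if v == 'T':
        return 0
    return min(w + d(u) for u, w in graph[v])

print(d('S'))   # -> 9, matching d(S, T) above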
Backward approach (forward reasoning)
 d(S,T) = min{d(S, D)+d(D, T),d(S,E)+d(E,T), d(S, F)+d(F, T)}
= min{d(S, D)+18,d(S,E)+13, d(S, F)+2}
 d(S,D) = min{d(S, A)+d(A, D),d(S, B)+d(B, D)}
= min{d(S, A)+4,d(S, B)+9}
 d(S,E) = min{d(S, A)+d(A, E),d(S, B)+d(B, E)}
= min{d(S, A)+11,d(S, B)+5}
 d(S,F) = min{d(S, B)+d(B, F),d(S, C)+d(C, F)}
= min{d(S, B)+16,d(S, C)+2}
 d(S,A) = 1; d(S, B) = 2;d(S, C) = 5
d(S,D)= min{ 1+4, 2+9 } = 5 d(S,E)= min{ 1+11, 2+5 } = 7
d(S,F)= min{ 2+16, 5+2 } = 7 d(S,T) = min{ 5+18, 7+13, 7+2 } = 9
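The backward approach runs iteratively from the source, one stage at a time. This sketch reuses the graph dictionary from the previous sketch, together with an assumed stage (topological) order:

# Backward approach: dist[v] = cheapest cost from the source S to v.
order = ['S', 'A', 'B', 'C', 'D', 'E', 'F', 'T']   # vertices in stage order

dist = {v: float('inf') for v in order}
dist['S'] = 0
for v in order:
    for u, w in graph[v]:                   # extend best-known paths by one edge
        dist[u] = min(dist[u], dist[v] + w)

print(dist['T'])   # -> 9, matching d(S, T) above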
Shortest Path Problems
 Single-source (single-destination). Find a shortest path
from a given source (vertex s) to each of the vertices.
 Dijkstra’s Algorithm - Greedy
 Bellman-Ford Algorithm – Dynamic Programming
 Single-pair. Given two vertices, find a shortest path
between them. Solution to single-source problem solves
this problem efficiently, too.
 All-pairs. Find shortest-paths for every pair of vertices.
Dynamic programming algorithm.
 Floyd Warshall’s Algorithm – Dynamic Programming
 Applications
 static/dynamic network routing
 robot motion planning
 map/route generation in traffic
Bellman-Ford Algorithm
 Dijkstra’s doesn’t work when there are negative edges
 Bellman-Ford algorithm detects negative cycles (returns false) or
returns the shortest-path tree

Algorithm BellmanFord(vertices, edges, source)
    for each vertex v                      // initialize graph
        if v is source then
            v.distance ← 0
        else
            v.distance ← infinity
        v.pred ← null
    for i ← 1 to tot_vertices − 1          // n − 1 passes
        for each edge uv                   // O(nm) overall
            u ← uv.source
            v ← uv.destination
            if v.distance > u.distance + uv.weight then
                v.distance ← u.distance + uv.weight     // relax edge: newly obtained min dist
                v.pred ← u
    for each edge uv                       // one more pass to detect negative cycles
        u ← uv.source
        v ← uv.destination
        if v.distance > u.distance + uv.weight then
            write(“Graph has a negative-weight cycle”)
            return false

Time Complexity: O(nm)
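A runnable Python sketch of the pseudocode above. Representing each edge as a (u, v, weight) triple is an assumption, and the sample edge set is my reading of the example figure that follows:

def bellman_ford(vertices, edges, source):
    """Return (distance, predecessor) maps, or None if a negative-weight
    cycle is reachable from the source."""
    dist = {v: float('inf') for v in vertices}
    pred = {v: None for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):      # n - 1 relaxation passes: O(nm)
        for u, v, w in edges:
            if dist[u] + w < dist[v]:       # relax edge (u, v)
                dist[v] = dist[u] + w
                pred[v] = u
    for u, v, w in edges:                   # one extra pass detects negative cycles
        if dist[u] + w < dist[v]:
            return None
    return dist, pred

V = ['s', 't', 'x', 'y', 'z']
E = [('s', 't', 6), ('s', 'y', 7), ('t', 'x', 5), ('t', 'y', 8), ('t', 'z', -4),
     ('x', 't', -2), ('y', 'x', -3), ('y', 'z', 9), ('z', 'x', 7), ('z', 's', 2)]
print(bellman_ford(V, E, 's'))  # dist: t=2, x=4, y=7, z=-2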
Bellman-Ford Example
[Figure: five snapshots of the example graph on vertices s, t, x, y, z with edge weights 6, 7, 8, −3, 7, 2, 9, −2, −4, 5. Distance estimates start at 0 for s and ∞ elsewhere; over successive relaxation passes the estimates for (t, x, y, z) evolve as (∞, ∞, ∞, ∞) → (6, ∞, 7, ∞) → (6, 4, 7, 2) → (2, 4, 7, 2) → (2, 4, 7, −2).]
All Pair Shortest Path
 Find the distance between every pair of vertices in a
weighted directed graph G.
 We can make n calls to Dijkstra’s algorithm (if there are no negative
edges), which takes O(n(n log n + m)), i.e. O(n³), time.
 Likewise, n calls to Bellman-Ford would take O(n²m) time.
 We can achieve O(n³) time using dynamic programming
 Floyd Warshall’s Algorithm (generalization of Dijkstra’s)
Floyd Warshall’s Algorithm
Representation of the Input
We assume that the input is represented by a weight matrix W = (wij), i, j in V,
defined by
    wij = 0        if i = j
    wij = w(i, j)  if i ≠ j and (i, j) in E
    wij = ∞        if i ≠ j and (i, j) not in E
Representation of the Output
If the graph has n vertices, we return a distance matrix (dij), where dij is the
length of the shortest path from i to j.
Intermediate Vertices
Without loss of generality, we will assume that V={1,2,…,n}, i.e., that
the vertices of the graph are numbered from 1 to n.
Given a path p=(v1, v2,…, vm) in the graph, we will call the vertices vk
with index k in {2,…,m-1} the intermediate vertices of p.
The key to the Floyd-Warshall algorithm is the following definition:
Let dij(k) denote the length of the shortest path from i to j such that all
intermediate vertices are contained in the set {1,…,k}.
Consider a shortest path p from i to j whose intermediate vertices are from the
set {1,…,k}.
 If the vertex k is not an intermediate vertex on p, then dij(k) = dij(k−1)
 If the vertex k is an intermediate vertex on p, then dij(k) = dik(k−1) + dkj(k−1)
Interestingly, in either case, the subpaths contain only nodes from {1,…,k−1}.
Recursive Formulation
Therefore, we can conclude that dij(k) = min{ dij(k−1), dik(k−1) + dkj(k−1) }.
If we do not use intermediate nodes, i.e., when k = 0, then dij(0) = wij.
If k > 0, then dij(k) = min{ dij(k−1), dik(k−1) + dkj(k−1) }.
Floyd Warshall algorithm
Algorithm AllPair(G)        // assumes vertices 1,…,n
    for all vertex pairs (i, j)
        if i = j
            D0[i, i] ← 0
        else if (i, j) is an edge in G
            D0[i, j] ← weight of edge (i, j)
        else
            D0[i, j] ← +∞
    for k ← 1 to n do
        for i ← 1 to n do
            for j ← 1 to n do
                Dk[i, j] ← min{ Dk−1[i, j], Dk−1[i, k] + Dk−1[k, j] }
    return Dn

Space complexity can be reduced to O(n²) by using a single array in place of D0,…,Dn.
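A Python sketch of the algorithm in its single-matrix form, i.e. the O(n²)-space variant noted above; the small weight matrix at the end is a made-up test case:

INF = float('inf')

def floyd_warshall(W):
    """W is an n x n weight matrix (0 on the diagonal, INF where there is no
    edge). Returns the all-pairs shortest-path distance matrix."""
    n = len(W)
    D = [row[:] for row in W]               # D(0) = W; then updated in place
    for k in range(n):                      # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

W = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(W))   # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]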
Example of Floyd-Warshall
[Figure: worked example across several figure-only slides.]
The 0/1 Knapsack Problem
 Given: A set S of n items, with each item i having
 wi - a positive weight
 bi - a positive benefit
 Goal: Choose items with maximum total benefit but with
weight at most W.
 If we are not allowed to take fractional amounts, then this is
the 0/1 knapsack problem.
 In this case, we let T denote the set of items we take
 Objective: maximize Σi∈T bi
 Constraint: Σi∈T wi ≤ W
 Given: A set S of n items, with each item i having
 bi - a positive “benefit”
 wi - a positive “weight”
 Goal: Choose items with maximum total benefit but with weight at
most W.
Example ("knapsack" = box of width 9 in):

Items:     1      2      3      4      5
Weight:    4 in   2 in   2 in   6 in   2 in
Benefit:   $20    $3     $6     $25    $80

Solution (total width 8 in, total benefit $106):
• item 5 ($80, 2 in)
• item 1 ($20, 4 in)
• item 3 ($6, 2 in)
0/1 Knapsack Algorithm, First Attempt
 Sk: Set of items numbered 1 to k.
 Define B[k] = best selection from Sk.
 Problem: does not have subproblem optimality:
 Consider set S={(3,2),(5,4),(8,5),(4,3),(10,9)} of
(benefit, weight) pairs and total weight W = 20
Best for S4: {(3,2),(5,4),(8,5),(4,3)} (benefit 20, weight 14)
Best for S5: {(3,2),(5,4),(8,5),(10,9)} (benefit 26, weight 20), which is not an
extension of the best solution for S4, so B[k] alone is not enough information.
0/1 Knapsack Algorithm, Second Attempt
 Sk: Set of items numbered 1 to k.
 Define B[k,w] to be the best selection from Sk with weight at most w
 Good news: this does have subproblem optimality.
i.e., the best subset of Sk with weight at most w is either
 the best subset of Sk-1 with weight at most w or
 the best subset of Sk-1 with weight at most w-wk plus item k
    B[k, w] = B[k−1, w]                                  if wk > w
    B[k, w] = max{ B[k−1, w], B[k−1, w−wk] + bk }        otherwise
https://youtu.be/nLmhmB6NzcM 19:21
0/1 Knapsack Algorithm
Consider set S={(1,1),(2,2),(4,3),(2,2),(5,5)} of (benefit, weight)
pairs and total weight W = 10
[Figure: the filled DP table B[i, w] for this instance, with traceback arrows.]
Trace back to find the items picked:
 Each diagonal arrow corresponds to adding one item into the bag
 Pick items 2, 3, 5
 {(2,2),(4,3),(5,5)} are the items you take away
0/1 Knapsack Algorithm
Algorithm 01Knapsack(b, w, n, W):
    Input: set S of n items with benefit bi and weight wi; maximum weight W
    Output: benefit of best subset of S with weight at most W
    for w ← 0 to W do
        B[0, w] ← 0
    for i ← 1 to n do
        for w ← 0 to W do
            if wi ≤ w and bi + B[i−1, w−wi] > B[i−1, w] then
                B[i, w] ← bi + B[i−1, w−wi]
                keep[i, w] ← 1          // item i is kept in the knapsack
            else
                B[i, w] ← B[i−1, w]
                keep[i, w] ← 0
    k ← W                               // trace back the items picked
    for i ← n downto 1
        if keep[i, k] = 1
            output i
            k ← k − wi
    return B[n, W]                      // max benefit of a subset that fits capacity W

Running time: O(nW).
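A runnable Python version combining the table fill and the traceback. Instead of a separate keep table it tests B[i][w] != B[i-1][w], an equivalent way to detect that item i was taken:

def knapsack_01(benefits, weights, W):
    """Return (best benefit, 1-indexed items picked). O(nW) time and space."""
    n = len(benefits)
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        bi, wi = benefits[i - 1], weights[i - 1]
        for w in range(W + 1):
            B[i][w] = B[i - 1][w]                             # skip item i ...
            if wi <= w:
                B[i][w] = max(B[i][w], bi + B[i - 1][w - wi])  # ... or take it
    picked, w = [], W                                         # trace back
    for i in range(n, 0, -1):
        if B[i][w] != B[i - 1][w]:                            # item i was taken
            picked.append(i)
            w -= weights[i - 1]
    return B[n][W], sorted(picked)

# The example above: S = {(1,1),(2,2),(4,3),(2,2),(5,5)}, W = 10.
print(knapsack_01([1, 2, 4, 2, 5], [1, 2, 3, 2, 5], 10))   # (11, [2, 3, 5])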
Task
 Weights: {3, 4, 6, 5}, Profits: {2, 3, 1, 4}, knapsack capacity W = 8 kg
Travelling Salesman Problem
Let G = (V, E) be a directed graph with edge costs cij. A tour of the graph
must visit every vertex exactly once, terminating at the source vertex, and
have minimum cost (the sum of the costs of the edges on the tour).
Dynamic Programming Approach
• Let g(1, V−{1}) be the total length of an optimal tour starting and
terminating at vertex 1.
• Principle of optimality: the path vi, vi+1, …, vj must be optimal over all
paths beginning at vi, ending at vj, and passing through each intermediate
vertex of {vi+1, …, vj−1} exactly once.
• g(1, V−{1}) = min{ c1k + g(k, V−{1, k}) } over 2 ≤ k ≤ n
• g(i, S) = min{ cij + g(j, S−{j}) } over j ∈ S, where i ∉ S
• g(i, Ф) = ci1, 1 ≤ i ≤ n
https://youtu.be/XaXsJJh-Q5Y
Travelling Salesman Problem
Cost matrix:
      1    2    3    4
 1    0   10   15   20
 2    5    0    9   10
 3    6   13    0   12
 4    8    8    9    0
g(2, Ф) = c21 = 5; g(3, Ф) = c31 = 6; g(4, Ф) = c41 = 8
• k=1
Set{2}
g(3,{2})= c32 + g(2,Ф) = c32 + c21= 13+5=18; p(3,{2})=2
g(4,{2})= c42 + g(2,Ф) = c42 + c21= 8+5=13; p(4,{2})=2
Set{3}
g(2,{3})= c23 + g(3,Ф) = c23 + c31= 9+6=15; p(2,{3})=3
g(4,{3})= c43 + g(3,Ф) = c43 + c31= 9+6=15; p(4,{3})=3
Set{4}
g(2,{4})= c24 + g(4,Ф) = c24 + c41= 10+8=18; p(2,{4})=4
g(3,{4})= c34 + g(4,Ф) = c34 + c41= 12+8=20; p(3,{4})=4
• k=2
Set{3,4}: g(2,{3,4}) = min {c23 + g(3,{4}), c24 + g(4,{3})} = min {9+20, 10+15} = 25; p(2,{3,4})=4
Set{2,4}: g(3,{2,4}) = min {c32 + g(2,{4}), c34 + g(4,{2})} = min {13+18, 12+13} = 25; p(3,{2,4})=4
Set{2,3}: g(4,{2,3}) = min {c42 + g(2,{3}), c43 + g(3,{2})} = min {8+15, 9+18} = 23; p(4,{2,3})=2
• k=3
Set{2,3,4}: f = g(1,{2,3,4}) = min{c12 + g(2,{3,4}), c13 + g(3,{2,4}), c14 + g(4,{2,3})}
= min {10+25, 15+25, 20+23} = 35; p(1,{2,3,4})=2
Optimal tour (following the p values): 1 → 2 → 4 → 3 → 1, with cost 35.
Complexity: brute force examines all n! tours, while the dynamic programming
solution evaluates n·2ⁿ subproblems g(i, S) at O(n) cost each, i.e.
n·(n·2ⁿ) = O(n²·2ⁿ) time: still exponential, but far better than n!.
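A memoized Python sketch of the g(i, S) recurrence (the Held-Karp scheme the slides describe), with vertices 0-indexed so the tour starts and ends at vertex 0:

from functools import lru_cache

def tsp(cost):
    """Minimum-cost tour over the n x n cost matrix, O(n^2 * 2^n) time."""
    n = len(cost)

    @lru_cache(maxsize=None)
    def g(i, S):
        # g(i, S): cheapest way to leave i, visit every vertex in S once,
        # and return to vertex 0 (the g(i, S) of the slides).
        if not S:
            return cost[i][0]               # g(i, Ø) = c_i1
        return min(cost[i][j] + g(j, S - {j}) for j in S)

    return g(0, frozenset(range(1, n)))

# Cost matrix from the worked example (vertices 1..4 become indices 0..3):
C = [[0, 10, 15, 20],
     [5, 0, 9, 10],
     [6, 13, 0, 12],
     [8, 8, 9, 0]]
print(tsp(C))   # -> 35, the tour 1 -> 2 -> 4 -> 3 -> 1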
Longest Common Subsequence
 Given two strings, find a longest subsequence that they share
 substring vs. subsequence of a string
 Substring: the characters in a substring of S must occur
contiguously in S
 Subsequence: the characters can be interspersed with gaps.
INPUT: two strings
OUTPUT: longest common subsequence
ACTGAACTCTGTGCACT
TGACTCAGCACAAAAAC
Longest common subsequence
Sequences x1,…,xn and y1,…,ym
LCS(i, j) = length of a longest common subsequence of x1,…,xi and y1,…,yj
    if xi ≠ yj then LCS(i, j) = max( LCS(i−1, j), LCS(i, j−1) )
    if xi = yj then LCS(i, j) = 1 + LCS(i−1, j−1)
Running time = O(mn)
Let’s give an alignment a score M in this way:
M = Σ s(xi, yj), where
    xi is the i-th character in the first aligned sequence,
    yj is the j-th character in the second aligned sequence,
    s(xi, yj) = 1 if xi = yj
    s(xi, yj) = 0 if xi ≠ yj or either of them is a gap
The score for alignment:
ababc.
abd.cb
M=s(a,a)+s(b,b)+s(a,d)+s(b,.)+s(c,c)+s(.,b)=3
Finding the longest common subsequence of sequences S1 and S2 amounts to
finding the alignment that maximizes the score M.
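The scoring rule is a one-liner in Python; this small sketch uses '.' as the gap symbol, as in the example above:

def score(a, b, gap='.'):
    """M = sum of s(x_i, y_j) over aligned positions: 1 for a match,
    0 for a mismatch or a position involving a gap."""
    return sum(1 for x, y in zip(a, b) if x == y and x != gap)

print(score('ababc.', 'abd.cb'))   # -> 3, as computed above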
Longest Common Subsequence
[Figure: the filled score matrix M and the trace back table B for the two strings.]
M[7, 11] = 6 (lower-right corner of the score matrix)
This tells us that the best alignment has a score of 6.
What is the best alignment?
Longest Common Subsequence
We use the trace back table to find the best alignment (score 6): follow the
path from the lower-right corner back to the upper-left corner.
[Figure: the traceback path and the resulting optimal alignment.]
The longest common subsequence, written with the alignment gaps, is
G.A.T.C.G.A
Longest Common Subsequence
Algorithm LCS(string A, string B)
    Input: strings A and B
    Output: the longest common subsequence of A and B
    M: score matrix
    B: trace back table (a letter a, b, or c records which case produced each entry)
    n ← A.length()
    m ← B.length()
    // fill in M and B
    for i ← 0 to n do
        for j ← 0 to m do
            if i = 0 or j = 0 then
                M(i, j) ← 0
            else if A[i] = B[j] then
                M(i, j) ← M(i−1, j−1) + 1              // update the entry in trace table B
            else
                M(i, j) ← max{ M(i−1, j), M(i, j−1) }  // update the entry in trace table B
    use trace back table B to print out the optimal alignment
Running time = O(mn)
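A runnable Python version; rather than keeping a separate trace table B, it re-derives each traceback move from the score matrix, which recovers the same subsequence:

def lcs(A, B):
    """Fill the (m+1) x (n+1) score matrix, then trace back from the
    lower-right corner to recover one longest common subsequence."""
    m, n = len(A), len(B)
    M = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:
                M[i][j] = M[i - 1][j - 1] + 1
            else:
                M[i][j] = max(M[i - 1][j], M[i][j - 1])
    out, i, j = [], m, n                      # trace back
    while i > 0 and j > 0:
        if A[i - 1] == B[j - 1]:              # diagonal move: part of the LCS
            out.append(A[i - 1]); i -= 1; j -= 1
        elif M[i - 1][j] >= M[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))

print(lcs('ABCBDAB', 'BDCABA'))   # -> 'BCBA' (length 4), one of the tasks below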
Applications of LCS
 Molecular Biology: DNA sequences (genes) can be
represented as sequences of four letters ACGT, corresponding
to the four submolecules forming DNA. When biologists find
new sequences, they typically want to know what other
sequences it is most similar to. One way of computing how
similar two sequences are is to find the length of their longest
common subsequence.
 File Comparison: The Unix program "diff" is used to
compare two different versions of the same file, to determine
what changes have been made to the file. It works by finding a
longest common subsequence of the lines of the two files; any
line in the subsequence has not been changed, so what it
displays is the remaining set of lines that have changed. In this
instance of the problem we should think of each line of a file
as being a single complicated character in a string.
Dynamic Programming to LCS
 Optimal Substructure
 Overlapping subproblems
 The method of dynamic programming reduces the number
of function calls. It stores the result of each function call so
that it can be used in future calls without the need for
redundant calls.
 In the above dynamic algorithm, the results obtained from
each comparison between elements of X and the elements
of Y are stored in a table so that they can be used in future
computations.
 So, the time taken by the dynamic approach is the time taken to fill the
table, i.e., O(mn).
Task
 Determine LCS of <0100110101> and <10101101>.
 Find LCS of following strings X=“ABCBDAB”
Y=“BDCABA”