Chapter 8
Dynamic Programming
Copyright © 2007 Pearson Addison-Wesley. All rights reserved.
8-2 Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin, “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 8
Dynamic Programming
Dynamic Programming is a general algorithm design technique for solving problems defined by or formulated as recurrences with overlapping subinstances
• Invented by American mathematician Richard Bellman in the 1950s to solve optimization problems and later assimilated by CS
• “Programming” here means “planning”
• Main idea:
- set up a recurrence relating a solution to a larger instance to solutions of some smaller instances
- solve smaller instances once
- record solutions in a table
- extract solution to the initial instance from that table
Example: Fibonacci numbers
• Recall the definition of Fibonacci numbers:
F(n) = F(n-1) + F(n-2)
F(0) = 0
F(1) = 1
• Computing the nth Fibonacci number recursively (top-down):
F(n)
F(n-1) + F(n-2)
F(n-2) + F(n-3)   F(n-3) + F(n-4)
...
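The call tree above shows why naive top-down recursion is exponential: the same subinstances reappear over and over. A short Python sketch (not part of the original slides; the function name is ours) of the memoized top-down variant, which computes each F(k) only once:

```python
def fib(n, memo=None):
    """Top-down Fibonacci with memoization: each F(k) is computed once,
    then looked up from the table on every later call."""
    if memo is None:
        memo = {0: 0, 1: 1}
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

With the table, the recursion makes only O(n) distinct calls instead of exponentially many.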
Example: Fibonacci numbers (cont.)
Computing the nth Fibonacci number using bottom-up iteration and recording results:
F(0) = 0
F(1) = 1
F(2) = 1+0 = 1
…
F(n-2) =
F(n-1) =
F(n) = F(n-1) + F(n-2)
Efficiency:
- time: Θ(n)
- space: Θ(n) for the table (only the last two entries are ever needed, so Θ(1) suffices)

0 1 1 . . . F(n-2) F(n-1) F(n)

What if we solve it recursively?
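The bottom-up iteration above can be sketched in a few lines of Python (not from the slides; this version keeps only the last two table entries, which is the Θ(1)-space refinement):

```python
def fib_bottom_up(n):
    """Fill the Fibonacci table left to right, F(0) through F(n).
    prev and curr hold F(i-1) and F(i); Theta(n) time, Theta(1) space."""
    if n == 0:
        return 0
    prev, curr = 0, 1  # F(0), F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```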
Examples of DP algorithms
• Computing a binomial coefficient
• Longest common subsequence
• Warshall’s algorithm for transitive closure
• Floyd’s algorithm for all-pairs shortest paths
• Constructing an optimal binary search tree
• Some instances of difficult discrete optimization problems:
- traveling salesman
- knapsack
Computing a binomial coefficient by DP
Binomial coefficients are the coefficients of the binomial formula:
(a + b)^n = C(n,0) a^n b^0 + . . . + C(n,k) a^(n-k) b^k + . . . + C(n,n) a^0 b^n

Recurrence: C(n,k) = C(n-1,k) + C(n-1,k-1) for n > k > 0
            C(n,0) = 1, C(n,n) = 1 for n ≥ 0

The value of C(n,k) can be computed by filling a table:

        0   1   2  . . .  k-1        k
 0      1
 1      1   1
 .
 .
 .
 n-1               C(n-1,k-1)  C(n-1,k)
 n                             C(n,k)
Computing C(n,k): pseudocode and analysis
Time efficiency: Θ(nk)
Space efficiency: Θ(nk)
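The pseudocode itself did not survive extraction, so here is a Python sketch of the table-filling idea (our own naming, following the recurrence above): fill Pascal’s triangle row by row, each row depending only on the previous one.

```python
def binomial(n, k):
    """C(n, k) by filling the table row by row.
    C[i][j] = C[i-1][j-1] + C[i-1][j], with 1s on the borders."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1  # C(i,0) = C(i,i) = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```

Each of the Θ(nk) cells is filled in constant time, matching the stated efficiency.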
Knapsack Problem by DP
Given n items of
integer weights: w1 w2 … wn
values:          v1 v2 … vn
and a knapsack of integer capacity W,
find the most valuable subset of the items that fits into the knapsack.

Consider the instance defined by the first i items and capacity j (j ≤ W).
Let V[i,j] be the optimal value of such an instance. Then

V[i,j] = max {V[i-1,j], vi + V[i-1,j-wi]}   if j - wi ≥ 0
V[i,j] = V[i-1,j]                           if j - wi < 0

Initial conditions: V[0,j] = 0 and V[i,0] = 0
Knapsack Problem by DP (example)
Example: knapsack of capacity W = 5

item  weight  value
 1      2     $12
 2      1     $10
 3      3     $20
 4      2     $15

                    capacity j
                 0   1   2   3   4   5
(no items)       0   0   0   0   0   0
w1 = 2, v1 = 12  0   0  12  12  12  12
w2 = 1, v2 = 10  0  10  12  22  22  22
w3 = 3, v3 = 20  0  10  12  22  30  32
w4 = 2, v4 = 15  0  10  15  25  30  37

Backtracing finds the actual optimal subset, i.e. the solution.
Knapsack Problem by DP (pseudocode)

Algorithm DPKnapsack(w[1..n], v[1..n], W)
var V[0..n, 0..W], P[1..n, 1..W]: int
for j := 0 to W do
    V[0,j] := 0
for i := 0 to n do
    V[i,0] := 0
for i := 1 to n do
    for j := 1 to W do
        if w[i] ≤ j and v[i] + V[i-1, j-w[i]] > V[i-1,j] then
            V[i,j] := v[i] + V[i-1, j-w[i]];  P[i,j] := j - w[i]
        else
            V[i,j] := V[i-1,j];  P[i,j] := j
return V[n,W] and the optimal subset by backtracing

Running time and space: O(nW).
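A runnable Python version of the same algorithm (0-based arrays instead of the slide’s 1-based ones; as a minor variation, the backtrace compares V[i][j] with V[i-1][j] instead of keeping the separate P table):

```python
def dp_knapsack(w, v, W):
    """V[i][j]: best value using the first i items with capacity j.
    Returns the optimal value and the 1-based indices of a best subset."""
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if w[i - 1] <= j and v[i - 1] + V[i - 1][j - w[i - 1]] > V[i - 1][j]:
                V[i][j] = v[i - 1] + V[i - 1][j - w[i - 1]]
            else:
                V[i][j] = V[i - 1][j]
    # Backtrace: item i is in the subset iff it changed the value in row i.
    subset, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            subset.append(i)
            j -= w[i - 1]
    return V[n][W], sorted(subset)
```

On the slide’s instance, dp_knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5) returns value 37 with items {1, 2, 4}, matching the filled table.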
Longest Common Subsequence (LCS)
A subsequence of a sequence/string S is obtained by deleting zero or more symbols from S. For example, the following are some subsequences of “president”: pred, sdn, predent. In other words, the letters of a subsequence of S appear in order in S, but they are not required to be consecutive.
The longest common subsequence problem is to find a maximum-length common subsequence between two sequences.
LCS
For instance,
Sequence 1: president
Sequence 2: providence
Its LCS is priden.
president
providence
LCS
Another example:
Sequence 1: algorithm
Sequence 2: alignment
One of its LCSs is algm.
a l g o r i t h m
a l i g n m e n t
How to compute LCS?
Let A = a1 a2 … am and B = b1 b2 … bn.
len(i, j): the length of an LCS between a1 a2 … ai and b1 b2 … bj
With proper initializations, len(i, j) can be computed as follows:

len(i, j) = 0                                  if i = 0 or j = 0
len(i, j) = len(i-1, j-1) + 1                  if i, j > 0 and ai = bj
len(i, j) = max(len(i, j-1), len(i-1, j))      if i, j > 0 and ai ≠ bj
procedure LCS-Length(A, B)
1. for i ← 0 to m do len(i,0) = 0
2. for j ← 1 to n do len(0,j) = 0
3. for i ← 1 to m do
4.   for j ← 1 to n do
5.     if ai = bj then len(i,j) = len(i-1,j-1) + 1; prev(i,j) = “↖”
6.     else if len(i-1,j) ≥ len(i,j-1)
7.       then len(i,j) = len(i-1,j); prev(i,j) = “↑”
8.     else len(i,j) = len(i,j-1); prev(i,j) = “←”
9. return len and prev
The len table for A = president (rows i) and B = providence (columns j):

 i\j      0   1   2   3   4   5   6   7   8   9  10
              p   r   o   v   i   d   e   n   c   e
 0        0   0   0   0   0   0   0   0   0   0   0
 1   p    0   1   1   1   1   1   1   1   1   1   1
 2   r    0   1   2   2   2   2   2   2   2   2   2
 3   e    0   1   2   2   2   2   2   3   3   3   3
 4   s    0   1   2   2   2   2   2   3   3   3   3
 5   i    0   1   2   2   2   3   3   3   3   3   3
 6   d    0   1   2   2   2   3   4   4   4   4   4
 7   e    0   1   2   2   2   3   4   5   5   5   5
 8   n    0   1   2   2   2   3   4   5   6   6   6
 9   t    0   1   2   2   2   3   4   5   6   6   6

Running time and memory: O(mn) and O(mn).
The backtracing algorithm

procedure Output-LCS(A, prev, i, j)
1. if i = 0 or j = 0 then return
2. if prev(i,j) = “↖” then Output-LCS(A, prev, i-1, j-1); print ai
3. else if prev(i,j) = “↑” then Output-LCS(A, prev, i-1, j)
4. else Output-LCS(A, prev, i, j-1)
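LCS-Length and Output-LCS can be combined into one runnable Python function. As a minor variation on the pseudocode above, the backtrace re-applies the same comparisons instead of storing a separate prev table:

```python
def lcs(A, B):
    """Fill the len table as in LCS-Length, then backtrace from (m, n)
    to recover one longest common subsequence."""
    m, n = len(A), len(B)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # Backtrace, mirroring Output-LCS: diagonal on match, else follow
    # the larger neighbor (ties go up, as in line 6 of the pseudocode).
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if A[i - 1] == B[j - 1]:
            out.append(A[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

On the slides’ example, lcs("president", "providence") returns "priden".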
(The len table is the same as on the previous slide; following the prev pointers back from entry (9, 10) spells out the common subsequence.)
Output: priden
Warshall’s Algorithm: Transitive Closure
• Computes the transitive closure of a relation
• Alternatively: existence of all nontrivial paths in a digraph
• Example of transitive closure (digraph on vertices 1–4; adjacency matrix on the left, transitive closure on the right):

0 0 1 0        0 0 1 0
1 0 0 1        1 1 1 1
0 0 0 0        0 0 0 0
0 1 0 0        1 1 1 1
Warshall’s Algorithm
Constructs the transitive closure T as the last matrix in the sequence of n-by-n matrices R(0), …, R(k), …, R(n), where
R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate.
Note that R(0) = A (adjacency matrix), R(n) = T (transitive closure).

R(0) =
0 0 1 0
1 0 0 1
0 0 0 0
0 1 0 0

R(1) =
0 0 1 0
1 0 1 1
0 0 0 0
0 1 0 0

R(2) =
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1

R(3) =
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1

R(4) =
0 0 1 0
1 1 1 1
0 0 0 0
1 1 1 1
Warshall’s Algorithm (recurrence)
On the k-th iteration, the algorithm determines for every pair of vertices i, j whether a path exists from i to j with just vertices 1,…,k allowed as intermediate:

R(k)[i,j] = R(k-1)[i,j]                          (path using just 1,…,k-1)
            or
            (R(k-1)[i,k] and R(k-1)[k,j])        (path from i to k and from k to j, each using just 1,…,k-1)

Initial condition?
Warshall’s Algorithm (matrix generation)
Recurrence relating elements of R(k) to elements of R(k-1):

R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])

It implies the following rules for generating R(k) from R(k-1):
Rule 1: If an element in row i and column j is 1 in R(k-1), it remains 1 in R(k).
Rule 2: If an element in row i and column j is 0 in R(k-1), it has to be changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1’s in R(k-1).
Warshall’s Algorithm (example)
For the digraph on vertices 1–4 with edges 1→3, 2→1, 2→4, 4→2:

R(0) =
0 0 1 0
1 0 0 1
0 0 0 0
0 1 0 0

R(1) =
0 0 1 0
1 0 1 1
0 0 0 0
0 1 0 0

R(2) =
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1

R(3) =
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1

R(4) =
0 0 1 0
1 1 1 1
0 0 0 0
1 1 1 1
Warshall’s Algorithm (pseudocode and analysis)
Time efficiency: Θ(n³)
Space efficiency: Matrices can be written over their predecessors (with some care), so it’s Θ(n²).
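A direct Python transcription of the recurrence (our own sketch), updating the matrix in place across the k iterations as the space note above suggests:

```python
def warshall(A):
    """Transitive closure of the digraph with 0/1 adjacency matrix A.
    R is updated in place: R(k) overwrites R(k-1) for k = 1..n."""
    n = len(A)
    R = [row[:] for row in A]  # copy so the input matrix is untouched
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Rule: a 0 becomes 1 iff R[i][k] and R[k][j] are both 1.
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R
```

Running it on the slides’ example adjacency matrix reproduces R(4) above.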
Floyd’s Algorithm: All pairs shortest paths
Problem: In a weighted (di)graph, find shortest paths between every pair of vertices.
Same idea: construct the solution through a series of matrices D(0), …, D(n) using increasing subsets of the vertices allowed as intermediate.
Example (weight matrix of a digraph on vertices 1–4; ∞ marks a missing edge):

0 ∞ 4 ∞
1 0 4 3
∞ ∞ 0 ∞
6 5 1 0
Floyd’s Algorithm (matrix generation)
On the k-th iteration, the algorithm determines shortest paths between every pair of vertices i, j that use only vertices among 1,…,k as intermediate:

D(k)[i,j] = min {D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j]}

Initial condition?
Floyd’s Algorithm (example)

D(0) =
0 ∞ 3 ∞
2 0 ∞ ∞
∞ 7 0 1
6 ∞ ∞ 0

D(1) =
0 ∞ 3 ∞
2 0 5 ∞
∞ 7 0 1
6 ∞ 9 0

D(2) =
0 ∞ 3 ∞
2 0 5 ∞
9 7 0 1
6 ∞ 9 0

D(3) =
0 10 3 4
2 0 5 6
9 7 0 1
6 16 9 0

D(4) =
0 10 3 4
2 0 5 6
7 7 0 1
6 16 9 0
Floyd’s Algorithm (pseudocode and analysis)
Time efficiency: Θ(n³)
Space efficiency: Matrices can be written over their predecessors, since the superscripts k or k-1 make no difference to D[i,k] and D[k,j].
Note: Works on graphs with negative edges but without negative cycles.
Shortest paths themselves can be found, too. How? Whenever D[i,k] + D[k,j] < D[i,j], record P[i,j] ← k.
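A Python sketch of Floyd’s algorithm (not from the slides) with the path-recording idea from the note above: P[i][j] remembers the last intermediate vertex k inserted on a shortest i-j path, from which the full path can be reconstructed recursively.

```python
INF = float('inf')

def floyd(W):
    """All-pairs shortest distances by Floyd's algorithm.
    D is updated in place over k; P[i][j] is the last intermediate
    vertex recorded for the i-j path (None means a direct edge)."""
    n = len(W)
    D = [row[:] for row in W]
    P = [[None] * n for _ in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = k  # path i..k..j is shorter
    return D, P
```

On the slides’ example weight matrix, the returned D matches D(4) above.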
Optimal Binary Search Trees
Problem: Given n keys a1 < … < an and probabilities p1, …, pn of searching for them, find a BST with a minimum average number of comparisons in a successful search.
Since the total number of BSTs with n nodes is given by C(2n,n)/(n+1), which grows exponentially, brute force is hopeless.
Example: What is an optimal BST for keys A, B, C, and D with search probabilities 0.1, 0.2, 0.4, and 0.3, respectively?
Consider the BST with C at the root, B and D as its children, and A as the left child of B:
Average # of comparisons = 1·0.4 + 2·(0.2+0.3) + 3·0.1 = 1.7
DP for Optimal BST Problem
Let C[i,j] be the minimum average number of comparisons made in T[i,j], the optimal BST for keys ai < … < aj, where 1 ≤ i ≤ j ≤ n.
Consider an optimal BST among all BSTs with some ak (i ≤ k ≤ j) as their root: the left subtree is an optimal BST for ai, …, ak-1 and the right subtree is an optimal BST for ak+1, …, aj; T[i,j] is the best among them.

C[i,j] = min over i ≤ k ≤ j of { pk · 1
         + Σ_{s=i}^{k-1} ps · (level of as in T[i,k-1] + 1)
         + Σ_{s=k+1}^{j} ps · (level of as in T[k+1,j] + 1) }
DP for Optimal BST Problem (cont.)
After simplifications, we obtain the recurrence for C[i,j]:

C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + Σ_{s=i}^{j} ps   for 1 ≤ i ≤ j ≤ n
C[i,i] = pi   for 1 ≤ i ≤ n

The C[i,j] table has rows i = 1,…,n+1 and columns j = 0,…,n: zeros on the diagonal C[i,i-1], the probabilities p1,…,pn on the main diagonal, and the goal C[1,n] in the upper right corner.

Example: key          A    B    C    D
         probability  0.1  0.2  0.4  0.3

The tables below are filled diagonal by diagonal: the left one using the recurrence above; the right one, for the trees’ roots, records the k’s values giving the minima.

C[i,j]:                           roots R[i,j]:
 i\j  0    1    2    3    4        i\j  1  2  3  4
 1    0   .1   .4  1.1  1.7        1    1  2  3  3
 2         0   .2   .8  1.4        2       2  3  3
 3              0   .4  1.0        3          3  3
 4                   0   .3        4             4
 5                        0        5

Optimal BST: root C; left subtree with root B and left child A; right child D.
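The diagonal-by-diagonal fill can be written out in Python (a sketch with our own naming, using 0 as the empty-range value C[i,i-1]); with the example probabilities it reproduces C[1,4] = 1.7 and root R[1,4] = 3, i.e. key C:

```python
def optimal_bst(p):
    """C[i][j]: min average comparisons for keys i..j (1-based);
    R[i][j]: index k of the root achieving the minimum."""
    n = len(p)
    prob = [0.0] + list(p)  # shift to 1-based indexing
    C = [[0.0] * (n + 2) for _ in range(n + 2)]  # C[i][i-1] = 0 by default
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = prob[i]
        R[i][i] = i
    for d in range(1, n):                 # fill diagonal by diagonal
        for i in range(1, n - d + 1):
            j = i + d
            total = sum(prob[i:j + 1])    # sum of p_s for s = i..j
            best, best_k = min(
                (C[i][k - 1] + C[k + 1][j], k) for k in range(i, j + 1))
            C[i][j] = best + total
            R[i][j] = best_k
    return C[1][n], R
```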
Optimal Binary Search Trees
Analysis of DP for Optimal BST Problem
Time efficiency: Θ(n³), but can be reduced to Θ(n²) by taking advantage of monotonicity of entries in the root table, i.e., R[i,j] is always in the range between R[i,j-1] and R[i+1,j]
Space efficiency: Θ(n²)
Method can be expanded to include unsuccessful searches
More Related Content

PPTX
sum of subset problem using Backtracking
PPT
Divide and Conquer
PPTX
NP completeness
PPTX
Activity selection problem
PPT
Dynamic programming
PDF
Matrix chain multiplication
PPT
finding Min and max element from given array using divide & conquer
PPTX
Greedy Algorithm - Knapsack Problem
sum of subset problem using Backtracking
Divide and Conquer
NP completeness
Activity selection problem
Dynamic programming
Matrix chain multiplication
finding Min and max element from given array using divide & conquer
Greedy Algorithm - Knapsack Problem

What's hot (20)

PDF
Shortest path algorithms
PPTX
Context free grammar
PPT
Lower bound
PDF
P, NP, NP-Complete, and NP-Hard
PPT
Randomizing quicksort algorith with example
PPTX
daa-unit-3-greedy method
PPTX
Kruskal Algorithm
PPTX
Mathematical Analysis of Recursive Algorithm.
PPTX
Knapsack Problem
PPTX
Fermat and euler theorem
PPTX
Stressen's matrix multiplication
PPT
SINGLE-SOURCE SHORTEST PATHS
PPTX
Introduction TO Finite Automata
PPT
Knapsack problem
PPTX
Analysis of algorithm
PPTX
Strassen's matrix multiplication
PPT
DESIGN AND ANALYSIS OF ALGORITHMS
PPTX
Single source Shortest path algorithm with example
PPT
PPTX
Computer Graphic - Lines, Circles and Ellipse
Shortest path algorithms
Context free grammar
Lower bound
P, NP, NP-Complete, and NP-Hard
Randomizing quicksort algorith with example
daa-unit-3-greedy method
Kruskal Algorithm
Mathematical Analysis of Recursive Algorithm.
Knapsack Problem
Fermat and euler theorem
Stressen's matrix multiplication
SINGLE-SOURCE SHORTEST PATHS
Introduction TO Finite Automata
Knapsack problem
Analysis of algorithm
Strassen's matrix multiplication
DESIGN AND ANALYSIS OF ALGORITHMS
Single source Shortest path algorithm with example
Computer Graphic - Lines, Circles and Ellipse
Ad

Similar to 5.3 dynamic programming (20)

PDF
Sienna 10 dynamic
PPTX
lec04.pptx
PDF
lec5_annotated.pdf ml csci 567 vatsal sharan
PPT
5.3 dynamic programming 03
PPT
Dynamic1
PPT
Dynamic Programming for 4th sem cse students
PDF
Distributed Resilient Interval Observers for Bounded-Error LTI Systems Subjec...
PPTX
Dynamic Programming.pptx
PPT
Learn about dynamic programming and how to design algorith
PDF
Open GL 04 linealgos
PPTX
Convolutional Neural Network (CNN) presentation from theory to code in Theano
PPTX
Signals and Systems Homework Help.pptx
PPT
daa_notes_of_backtracking_branchandbound.ppt
PDF
Embedding and np-Complete Problems for 3-Equitable Graphs
PPT
Free video lectures for mca
PPTX
Python Homework Help
PPT
ERK_SRU_ch08-2019-03-27.ppt discussion in class room
PDF
Lecture_DynamicProgramming test12345.pdf
PPTX
Arithmetic progressions - Poblem based Arithmetic progressions
PDF
Sienna 10 dynamic
lec04.pptx
lec5_annotated.pdf ml csci 567 vatsal sharan
5.3 dynamic programming 03
Dynamic1
Dynamic Programming for 4th sem cse students
Distributed Resilient Interval Observers for Bounded-Error LTI Systems Subjec...
Dynamic Programming.pptx
Learn about dynamic programming and how to design algorith
Open GL 04 linealgos
Convolutional Neural Network (CNN) presentation from theory to code in Theano
Signals and Systems Homework Help.pptx
daa_notes_of_backtracking_branchandbound.ppt
Embedding and np-Complete Problems for 3-Equitable Graphs
Free video lectures for mca
Python Homework Help
ERK_SRU_ch08-2019-03-27.ppt discussion in class room
Lecture_DynamicProgramming test12345.pdf
Arithmetic progressions - Poblem based Arithmetic progressions
Ad

More from Krish_ver2 (20)

PPT
5.5 back tracking
PPT
5.5 back track
PPT
5.5 back tracking 02
PPT
5.4 randomized datastructures
PPT
5.4 randomized datastructures
PPT
5.4 randamized algorithm
PPT
5.3 dyn algo-i
PPT
5.2 divede and conquer 03
PPT
5.2 divide and conquer
PPT
5.2 divede and conquer 03
PPT
5.1 greedyyy 02
PPT
5.1 greedy
PPT
5.1 greedy 03
PPT
4.4 hashing02
PPT
4.4 hashing
PPT
4.4 hashing ext
PPT
4.4 external hashing
PPT
4.2 bst
PPT
4.2 bst 03
PPT
4.2 bst 02
5.5 back tracking
5.5 back track
5.5 back tracking 02
5.4 randomized datastructures
5.4 randomized datastructures
5.4 randamized algorithm
5.3 dyn algo-i
5.2 divede and conquer 03
5.2 divide and conquer
5.2 divede and conquer 03
5.1 greedyyy 02
5.1 greedy
5.1 greedy 03
4.4 hashing02
4.4 hashing
4.4 hashing ext
4.4 external hashing
4.2 bst
4.2 bst 03
4.2 bst 02

Recently uploaded (20)

PPTX
Introduction-to-Literarature-and-Literary-Studies-week-Prelim-coverage.pptx
PPTX
Pharmacology of Heart Failure /Pharmacotherapy of CHF
PDF
GENETICS IN BIOLOGY IN SECONDARY LEVEL FORM 3
PPTX
master seminar digital applications in india
PDF
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
PPTX
Cell Types and Its function , kingdom of life
PPTX
Final Presentation General Medicine 03-08-2024.pptx
PDF
Abdominal Access Techniques with Prof. Dr. R K Mishra
PDF
STATICS OF THE RIGID BODIES Hibbelers.pdf
PDF
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
PPTX
Pharma ospi slides which help in ospi learning
PDF
Classroom Observation Tools for Teachers
PDF
VCE English Exam - Section C Student Revision Booklet
PPTX
Tissue processing ( HISTOPATHOLOGICAL TECHNIQUE
PDF
O7-L3 Supply Chain Operations - ICLT Program
PDF
Complications of Minimal Access Surgery at WLH
PDF
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
PPTX
human mycosis Human fungal infections are called human mycosis..pptx
PDF
Module 4: Burden of Disease Tutorial Slides S2 2025
PDF
O5-L3 Freight Transport Ops (International) V1.pdf
Introduction-to-Literarature-and-Literary-Studies-week-Prelim-coverage.pptx
Pharmacology of Heart Failure /Pharmacotherapy of CHF
GENETICS IN BIOLOGY IN SECONDARY LEVEL FORM 3
master seminar digital applications in india
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
Cell Types and Its function , kingdom of life
Final Presentation General Medicine 03-08-2024.pptx
Abdominal Access Techniques with Prof. Dr. R K Mishra
STATICS OF THE RIGID BODIES Hibbelers.pdf
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
Pharma ospi slides which help in ospi learning
Classroom Observation Tools for Teachers
VCE English Exam - Section C Student Revision Booklet
Tissue processing ( HISTOPATHOLOGICAL TECHNIQUE
O7-L3 Supply Chain Operations - ICLT Program
Complications of Minimal Access Surgery at WLH
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
human mycosis Human fungal infections are called human mycosis..pptx
Module 4: Burden of Disease Tutorial Slides S2 2025
O5-L3 Freight Transport Ops (International) V1.pdf

5.3 dynamic programming

  • 1. Chapter 8Chapter 8 Dynamic ProgrammingDynamic Programming Copyright © 2007 Pearson Addison-Wesley. All rights reserved.
  • 2. 8-2Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 8 Dynamic ProgrammingDynamic Programming DDynamic Programmingynamic Programming is a general algorithm design techniqueis a general algorithm design technique for solving problems defined by or formulated as recurrencesfor solving problems defined by or formulated as recurrences with overlapping subinstanceswith overlapping subinstances • Invented by American mathematician Richard Bellman in theInvented by American mathematician Richard Bellman in the 1950s to solve optimization problems and later assimilated by CS1950s to solve optimization problems and later assimilated by CS • ““Programming” here means “planning”Programming” here means “planning” • Main idea:Main idea: - set up a recurrence relating a solution to a larger instanceset up a recurrence relating a solution to a larger instance to solutions of some smaller instancesto solutions of some smaller instances - solve smaller instances once- solve smaller instances once - record solutions in a tablerecord solutions in a table - extract solution to the initial instance from that tableextract solution to the initial instance from that table
  • 3. 8-3Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 8 Example: Fibonacci numbersExample: Fibonacci numbers • Recall definition of Fibonacci numbers:Recall definition of Fibonacci numbers: FF((nn)) = F= F((nn-1)-1) + F+ F((nn-2)-2) FF(0)(0) == 00 FF(1)(1) == 11 • Computing theComputing the nnthth Fibonacci number recursively (top-down):Fibonacci number recursively (top-down): FF((nn)) FF((n-n-1)1) + F+ F((n-n-2)2) FF((n-n-2)2) + F+ F((n-n-3)3) FF((n-n-3)3) + F+ F((n-n-4)4) ......
  • 4. 8-4Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 8 Example: Fibonacci numbers (cont.)Example: Fibonacci numbers (cont.) Computing theComputing the nnthth Fibonacci number using bottom-up iteration andFibonacci number using bottom-up iteration and recording results:recording results: FF(0)(0) == 00 FF(1)(1) == 11 FF(2)(2) == 1+0 = 11+0 = 1 …… FF((nn-2) =-2) = FF((nn-1) =-1) = FF((nn) =) = FF((nn-1)-1) + F+ F((nn-2)-2) Efficiency:Efficiency: - time- time - space- space 0 1 1 . . . F(n-2) F(n-1) F(n) n n What if we solve it recursively?
  • 5. 8-5Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 8 Examples of DP algorithmsExamples of DP algorithms • Computing a binomial coefficientComputing a binomial coefficient • Longest common subsequenceLongest common subsequence • Warshall’s algorithm for transitive closureWarshall’s algorithm for transitive closure • Floyd’s algorithm for all-pairs shortest pathsFloyd’s algorithm for all-pairs shortest paths • Constructing an optimal binary search treeConstructing an optimal binary search tree • Some instances of difficult discrete optimization problems:Some instances of difficult discrete optimization problems: - traveling salesman- traveling salesman - knapsack- knapsack
  • 6. Computing a binomial coefficient by DP
    Binomial coefficients are the coefficients of the binomial formula:
      (a + b)^n = C(n,0) a^n b^0 + ... + C(n,k) a^(n-k) b^k + ... + C(n,n) a^0 b^n
    Recurrence:
      C(n,k) = C(n-1,k) + C(n-1,k-1)   for n > k > 0
      C(n,0) = 1,  C(n,n) = 1          for n ≥ 0
    The value of C(n,k) can be computed by filling a table:
             0   1   2   ...  k-1         k
      0      1
      1      1   1
      ...
      n-1                     C(n-1,k-1)  C(n-1,k)
      n                                   C(n,k)
  • 7. Computing C(n,k): pseudocode and analysis
    Time efficiency:  Θ(nk)
    Space efficiency: Θ(nk)
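The pseudocode itself is not reproduced in the transcript; the following is a plausible Python rendering of the table-filling approach the slide describes (the function name `binomial` is ours):

```python
def binomial(n, k):
    """Compute C(n,k) by filling the DP table row by row,
    using the recurrence C(n,k) = C(n-1,k-1) + C(n-1,k)."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1          # boundary: C(i,0) = C(i,i) = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```

The two loops are bounded by n and k respectively, giving the Θ(nk) time and space stated on the slide.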
  • 8. Knapsack Problem by DP
    Given n items of
      integer weights: w1  w2  ...  wn
      values:          v1  v2  ...  vn
    and a knapsack of integer capacity W, find the most valuable subset of the items that fits into the knapsack.
    Consider the instance defined by the first i items and capacity j (j ≤ W).
    Let V[i,j] be the optimal value of such an instance. Then
      V[i,j] = max {V[i-1,j], vi + V[i-1,j-wi]}   if j - wi ≥ 0
      V[i,j] = V[i-1,j]                           if j - wi < 0
    Initial conditions: V[0,j] = 0 and V[i,0] = 0
  • 9. Knapsack Problem by DP (example)
    Example: knapsack of capacity W = 5
      item  weight  value
      1     2       $12
      2     1       $10
      3     3       $20
      4     2       $15
    Table V[i,j], capacity j = 0..5:
                         0   1   2   3   4   5
      i = 0              0   0   0   0   0   0
      w1 = 2, v1 = 12    0   0  12  12  12  12
      w2 = 1, v2 = 10    0  10  12  22  22  22
      w3 = 3, v3 = 20    0  10  12  22  30  32
      w4 = 2, v4 = 15    0  10  15  25  30  37
    Backtracing finds the actual optimal subset, i.e., the solution.
  • 10. Knapsack Problem by DP (pseudocode)
    Algorithm DPKnapsack(w[1..n], v[1..n], W)
      var V[0..n, 0..W], P[1..n, 1..W]: int
      for j := 0 to W do
        V[0,j] := 0
      for i := 0 to n do
        V[i,0] := 0
      for i := 1 to n do
        for j := 1 to W do
          if w[i] ≤ j and v[i] + V[i-1, j-w[i]] > V[i-1, j] then
            V[i,j] := v[i] + V[i-1, j-w[i]];  P[i,j] := j - w[i]
          else
            V[i,j] := V[i-1, j];  P[i,j] := j
      return V[n,W] and the optimal subset by backtracing
    Running time and space: O(nW).
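A runnable Python version of the algorithm above (our translation; instead of the P table it backtraces by comparing V[i][j] with V[i-1][j], which recovers the same optimal subset):

```python
def knapsack(w, v, W):
    """0/1 knapsack by DP. Returns (optimal value, 1-based item numbers)."""
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n + 1)]   # V[0][j] = V[i][0] = 0
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            V[i][j] = V[i - 1][j]               # item i not taken
            if w[i - 1] <= j:                   # item i fits: try taking it
                V[i][j] = max(V[i][j], v[i - 1] + V[i - 1][j - w[i - 1]])
    # backtracing: item i is in the subset iff row i improved on row i-1
    items, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            items.append(i)
            j -= w[i - 1]
    return V[n][W], sorted(items)
```

On the slide's instance (weights 2, 1, 3, 2; values 12, 10, 20, 15; W = 5) this returns value 37 with items {1, 2, 4}.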
  • 11. Longest Common Subsequence (LCS)
    A subsequence of a sequence/string S is obtained by deleting zero or more symbols from S. For example, the following are some subsequences of “president”: pred, sdn, predent. In other words, the letters of a subsequence of S appear in order in S, but they are not required to be consecutive.
    The longest common subsequence problem is to find a maximum-length common subsequence between two sequences.
  • 12. LCS
    For instance,
      Sequence 1: president
      Sequence 2: providence
    Their LCS is priden.
  • 13. LCS
    Another example:
      Sequence 1: algorithm
      Sequence 2: alignment
    One of their LCSs is algm.
  • 14. How to compute the LCS?
    Let A = a1 a2 ... am and B = b1 b2 ... bn.
    len(i, j): the length of an LCS between a1 a2 ... ai and b1 b2 ... bj.
    With proper initializations, len(i, j) can be computed as follows:
      len(i, j) = 0                                if i = 0 or j = 0
      len(i, j) = len(i-1, j-1) + 1                if i, j > 0 and ai = bj
      len(i, j) = max(len(i, j-1), len(i-1, j))    if i, j > 0 and ai ≠ bj
  • 15. procedure LCS-Length(A, B)
    1. for i ← 0 to m do len(i,0) := 0
    2. for j ← 1 to n do len(0,j) := 0
    3. for i ← 1 to m do
    4.   for j ← 1 to n do
    5.     if ai = bj then len(i,j) := len(i-1,j-1) + 1;  prev(i,j) := "↖"
    6.     else if len(i-1,j) ≥ len(i,j-1)
    7.       then len(i,j) := len(i-1,j);  prev(i,j) := "↑"
    8.       else len(i,j) := len(i,j-1);  prev(i,j) := "←"
    9. return len and prev
  • 16. LCS table len(i, j) for “president” (rows) vs. “providence” (columns):
         j   0  1p  2r  3o  4v  5i  6d  7e  8n  9c  10e
      i  0   0   0   0   0   0   0   0   0   0   0   0
      1p     0   1   1   1   1   1   1   1   1   1   1
      2r     0   1   2   2   2   2   2   2   2   2   2
      3e     0   1   2   2   2   2   2   3   3   3   3
      4s     0   1   2   2   2   2   2   3   3   3   3
      5i     0   1   2   2   2   3   3   3   3   3   3
      6d     0   1   2   2   2   3   4   4   4   4   4
      7e     0   1   2   2   2   3   4   5   5   5   5
      8n     0   1   2   2   2   3   4   5   6   6   6
      9t     0   1   2   2   2   3   4   5   6   6   6
    Running time and memory: O(mn) and O(mn).
  • 17. The backtracing algorithm
    procedure Output-LCS(A, prev, i, j)
    1. if i = 0 or j = 0 then return
    2. if prev(i,j) = "↖" then { Output-LCS(A, prev, i-1, j-1); print ai }
    3. else if prev(i,j) = "↑" then Output-LCS(A, prev, i-1, j)
    4. else Output-LCS(A, prev, i, j-1)
  • 18. (The same len table for “president” vs. “providence”, with the backtracing path from len(9,10) highlighted on the slide.)
    Output: priden
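The two procedures can be combined into one runnable Python sketch (our translation; the backtrace walks the len table directly instead of storing prev arrows, but makes the same ≥ tie-breaking choice):

```python
def lcs(a, b):
    """Fill the len table for strings a, b, then backtrace one LCS."""
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # backtrace from (m, n), mirroring Output-LCS
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])          # diagonal move: a matched symbol
            i, j = i - 1, j - 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1                        # up move
        else:
            j -= 1                        # left move
    return "".join(reversed(out))
```

`lcs("president", "providence")` returns "priden", in agreement with the worked example.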
  • 19. Warshall’s Algorithm: Transitive Closure
    • Computes the transitive closure of a relation
    • Alternatively: existence of all nontrivial paths in a digraph
    • Example of transitive closure:
      Adjacency matrix A:    Transitive closure T:
        0 0 1 0                0 0 1 0
        1 0 0 1                1 1 1 1
        0 0 0 0                0 0 0 0
        0 1 0 0                1 1 1 1
  • 20. Warshall’s Algorithm
    Constructs the transitive closure T as the last matrix in the sequence of n-by-n matrices R(0), ..., R(k), ..., R(n), where
      R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate.
    Note that R(0) = A (adjacency matrix) and R(n) = T (transitive closure).
      R(0)       R(1)       R(2)       R(3)       R(4)
      0 0 1 0    0 0 1 0    0 0 1 0    0 0 1 0    0 0 1 0
      1 0 0 1    1 0 1 1    1 0 1 1    1 0 1 1    1 1 1 1
      0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0
      0 1 0 0    0 1 0 0    1 1 1 1    1 1 1 1    1 1 1 1
  • 21. Warshall’s Algorithm (recurrence)
    On the k-th iteration, the algorithm determines for every pair of vertices i, j whether a path exists from i to j with just vertices 1, ..., k allowed as intermediate:
      R(k)[i,j] = R(k-1)[i,j]                      (path using just 1, ..., k-1)
                  or
                  R(k-1)[i,k] and R(k-1)[k,j]      (path from i to k and from k to j, each using just 1, ..., k-1)
    Initial condition? R(0) = A, the adjacency matrix.
  • 22. Warshall’s Algorithm (matrix generation)
    The recurrence relating elements of R(k) to elements of R(k-1) is:
      R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
    It implies the following rules for generating R(k) from R(k-1):
    Rule 1  If an element in row i and column j is 1 in R(k-1), it remains 1 in R(k).
    Rule 2  If an element in row i and column j is 0 in R(k-1), it has to be changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1’s in R(k-1).
  • 23. Warshall’s Algorithm (example)
      R(0)       R(1)       R(2)       R(3)       R(4)
      0 0 1 0    0 0 1 0    0 0 1 0    0 0 1 0    0 0 1 0
      1 0 0 1    1 0 1 1    1 0 1 1    1 0 1 1    1 1 1 1
      0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0
      0 1 0 0    0 1 0 0    1 1 1 1    1 1 1 1    1 1 1 1
  • 24. Warshall’s Algorithm (pseudocode and analysis)
    Time efficiency:  Θ(n³)
    Space efficiency: matrices can be written over their predecessors (with some care), so it’s Θ(n²).
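Since the pseudocode is not captured in the transcript, here is a short Python sketch of the in-place version described above (our rendering):

```python
def warshall(A):
    """Transitive closure of the digraph with 0/1 adjacency matrix A.
    R(k) overwrites R(k-1) in place, so only one matrix is stored."""
    n = len(A)
    R = [row[:] for row in A]                   # R(0) = A
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Rule 2: a 0 becomes 1 iff R[i][k] and R[k][j] are both 1
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R                                    # R(n) = T
```

On the digraph of slide 19 this reproduces the closure matrix T shown there.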
  • 25. Floyd’s Algorithm: all-pairs shortest paths
    Problem: In a weighted (di)graph, find shortest paths between every pair of vertices.
    Same idea: construct the solution through a series of matrices D(0), ..., D(n) using increasing subsets of the vertices allowed as intermediate.
    Example (weight matrix):
      0  ∞  4  ∞
      1  0  4  3
      ∞  ∞  0  ∞
      6  5  1  0
  • 26. Floyd’s Algorithm (matrix generation)
    On the k-th iteration, the algorithm determines the shortest paths between every pair of vertices i, j that use only vertices among 1, ..., k as intermediate:
      D(k)[i,j] = min {D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j]}
    Initial condition? D(0) = W, the weight matrix.
  • 27. Floyd’s Algorithm (example)
      D(0)          D(1)          D(2)          D(3)          D(4)
      0  ∞  3  ∞    0  ∞  3  ∞    0  ∞  3  ∞    0 10  3  4    0 10  3  4
      2  0  ∞  ∞    2  0  5  ∞    2  0  5  ∞    2  0  5  6    2  0  5  6
      ∞  7  0  1    ∞  7  0  1    9  7  0  1    9  7  0  1    7  7  0  1
      6  ∞  ∞  0    6  ∞  9  0    6  ∞  9  0    6 16  9  0    6 16  9  0
  • 28. Floyd’s Algorithm (pseudocode and analysis)
    Time efficiency:  Θ(n³)
    Space efficiency: matrices can be written over their predecessors
    Note: works on graphs with negative edges but without negative cycles.
    The shortest paths themselves can be found, too. How?
      If D[i,k] + D[k,j] < D[i,j] then P[i,j] ← k
    (the superscripts k or k-1 make no difference to D[i,k] and D[k,j]).
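A Python sketch of Floyd's algorithm with the predecessor matrix P mentioned above (our rendering; P[i][j] records the last intermediate vertex k that improved the i-j path, which suffices to reconstruct paths recursively):

```python
INF = float("inf")

def floyd(W):
    """All-pairs shortest paths. D overwrites its predecessors in place;
    P[i][j] holds the last k that improved the i-j path, or None."""
    n = len(W)
    D = [row[:] for row in W]                   # D(0) = W
    P = [[None] * n for _ in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = k                 # shortest path goes through k
    return D, P
```

On the weight matrix of slide 27 this produces the final matrix D(4) shown there.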
  • 29. Optimal Binary Search Trees
    Problem: Given n keys a1 < ... < an and probabilities p1, ..., pn of searching for them, find a BST with a minimum average number of comparisons in a successful search.
    Since the total number of BSTs with n nodes is given by C(2n,n)/(n+1), which grows exponentially, brute force is hopeless.
    Example: What is an optimal BST for keys A, B, C, and D with search probabilities 0.1, 0.2, 0.4, and 0.3, respectively?
    For the BST with root C, children B and D, and A as the left child of B:
      Average # of comparisons = 1·0.4 + 2·(0.2 + 0.3) + 3·0.1 = 1.7
  • 30. DP for Optimal BST Problem
    Let C[i,j] be the minimum average number of comparisons made in T[i,j], an optimal BST for keys ai < ... < aj, where 1 ≤ i ≤ j ≤ n.
    Consider an optimal BST among all BSTs with some ak (i ≤ k ≤ j) as their root; T[i,j] is the best among them. Its left subtree is an optimal BST for ai, ..., ak-1 and its right subtree is an optimal BST for ak+1, ..., aj:
      C[i,j] = min over i ≤ k ≤ j of { pk · 1
                 + Σ (s = i to k-1) ps · (level of as in T[i,k-1] + 1)
                 + Σ (s = k+1 to j) ps · (level of as in T[k+1,j] + 1) }
  • 31. DP for Optimal BST Problem (cont.)
    After simplifications, we obtain the recurrence for C[i,j]:
      C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + Σ (s = i to j) ps   for 1 ≤ i ≤ j ≤ n
      C[i,i] = pi   for 1 ≤ i ≤ n
    The table C[i,j] is filled along its diagonals, from the main diagonal (entries p1, ..., pn) toward the upper right corner; the goal entry is C[1,n].
  • 32. Example: key          A    B    C    D
                probability  0.1  0.2  0.4  0.3
    The tables below are filled diagonal by diagonal: the left one using the recurrence
      C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + Σ (s = i to j) ps,   C[i,i] = pi;
    the right one, for the trees’ roots, records the values of k giving the minima.
      C[i,j]:                        Roots R[i,j]:
      i\j  0   1    2    3    4      i\j  1  2  3  4
      1    0   .1   .4   1.1  1.7    1    1  2  3  3
      2        0    .2   .8   1.4    2       2  3  3
      3             0    .4   1.0    3          3  3
      4                  0    .3     4             4
      5                       0
    Optimal BST: root C; left subtree rooted at B with left child A; right child D.
  • 33. Optimal Binary Search Trees
  • 34. Analysis of DP for Optimal BST Problem
    Time efficiency: Θ(n³), but can be reduced to Θ(n²) by taking advantage of the monotonicity of entries in the root table, i.e., R[i,j] is always in the range between R[i,j-1] and R[i+1,j].
    Space efficiency: Θ(n²)
    The method can be expanded to include unsuccessful searches.
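The recurrence of slide 31 and its diagonal-by-diagonal fill translate into the following Python sketch (our rendering; the function name `optimal_bst` and the returned root table are our conventions, and the Θ(n²) speed-up is not applied):

```python
def optimal_bst(p):
    """C[i][j]: min average comparisons for keys i..j (1-based probabilities p).
    R[i][j]: root k achieving the minimum. Filled diagonal by diagonal."""
    n = len(p)
    P = [0.0] + list(p)                          # shift to 1-based indexing
    C = [[0.0] * (n + 2) for _ in range(n + 2)]  # empty trees C[i][i-1] = 0
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = P[i]                           # single-key tree
        R[i][i] = i
    for d in range(1, n):                        # d = j - i, one diagonal at a time
        for i in range(1, n - d + 1):
            j = i + d
            best, root = min((C[i][k - 1] + C[k + 1][j], k)
                             for k in range(i, j + 1))
            C[i][j] = best + sum(P[i:j + 1])     # add p_i + ... + p_j
            R[i][j] = root
    return C[1][n], R
```

On the A/B/C/D example this yields cost 1.7 with R[1][4] = 3, i.e., the third key C at the root, matching the tables on slide 32.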