Analysis of Algorithms
CS 477/677
Dynamic Programming
Instructor: George Bebis
(Chapter 15)
2
Dynamic Programming
• An algorithm design technique (like divide and
conquer)
• Divide and conquer
– Partition the problem into independent subproblems
– Solve the subproblems recursively
– Combine the solutions to solve the original problem
3
Dynamic Programming
• Applicable when subproblems are not independent
– Subproblems share subsubproblems
E.g.: Combinations:
– A divide and conquer approach would repeatedly solve the
common subproblems
– Dynamic programming solves every subproblem just once and
stores the answer in a table
C(n, k) = C(n-1, k) + C(n-1, k-1),   with base cases C(n, n) = 1 and C(n, 1) = n
4
Example: Combinations
Recursion tree for Comb(6, 4), using Comb(n, k) = Comb(n-1, k) + Comb(n-1, k-1):

Comb(6, 4) = Comb(5, 4) + Comb(5, 3)
  Comb(5, 4) = Comb(4, 4) + Comb(4, 3)
  Comb(5, 3) = Comb(4, 3) + Comb(4, 2)
    Comb(4, 3) = Comb(3, 3) + Comb(3, 2)        (appears twice)
    Comb(4, 2) = Comb(3, 2) + Comb(3, 1)
      Comb(3, 2) = Comb(2, 2) + Comb(2, 1)      (appears three times)

with leaves Comb(n, n) = 1 and Comb(n, 1) = n. A divide-and-conquer approach
solves the shared subproblems (e.g., Comb(3, 2), Comb(4, 3)) over and over;
dynamic programming computes each of them only once.
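To make the contrast concrete, here is a minimal Python sketch (not from the slides; comb_naive and comb_memo are hypothetical names) of the two approaches, using the equivalent base cases C(n, 0) = C(n, n) = 1:

```python
from functools import lru_cache

def comb_naive(n, k):
    """Divide and conquer: recomputes shared subproblems such as Comb(3, 2)."""
    if k == 0 or k == n:
        return 1
    return comb_naive(n - 1, k) + comb_naive(n - 1, k - 1)

@lru_cache(maxsize=None)
def comb_memo(n, k):
    """Dynamic programming (memoized): each subproblem is solved exactly once."""
    if k == 0 or k == n:
        return 1
    return comb_memo(n - 1, k) + comb_memo(n - 1, k - 1)

print(comb_naive(6, 4), comb_memo(6, 4))   # both print 15
```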
5
Dynamic Programming
• Used for optimization problems
– A set of choices must be made to get an optimal
solution
– Find a solution with the optimal value (minimum or
maximum)
– There may be many solutions that lead to an optimal
value
– Our goal: find an optimal solution
6
Dynamic Programming Algorithm
1. Characterize the structure of an optimal
solution
2. Recursively define the value of an optimal
solution
3. Compute the value of an optimal solution in a
bottom-up fashion
4. Construct an optimal solution from computed
information (not always necessary)
7
Assembly Line Scheduling
• Automobile factory with two assembly lines
– Each line has n stations: S1,1, . . . , S1,n and S2,1, . . . , S2,n
– Corresponding stations S1, j and S2, j perform the same function
but can take different amounts of time a1, j and a2, j
– Entry times are: e1 and e2; exit times are: x1 and x2
8
Assembly Line Scheduling
• After going through a station, can either:
– stay on same line at no cost, or
– transfer to other line: cost after Si,j is ti,j , j = 1, . . . , n - 1
9
Assembly Line Scheduling
• Problem:
what stations should be chosen from line 1 and which
from line 2 in order to minimize the total time through the
factory for one car?
10
One Solution
• Brute force
– Enumerate all possibilities of selecting stations
– Compute how long it takes in each case and choose
the best one
• Solution:
– There are 2^n possible ways to choose stations: each choice corresponds to a
bit string b1 b2 … bn (e.g., 1 0 0 1 … 1), where bj = 1 if line 1 is chosen at
station j and bj = 0 if line 2 is chosen
– Infeasible when n is large!!
11
1. Structure of the Optimal Solution
• How do we compute the minimum time of going through
a station?
12
1. Structure of the Optimal Solution
• Let’s consider all possible ways to get from the
starting point through station S1,j
– We have two choices of how to get to S1, j:
• Through S1, j - 1, then directly to S1, j
• Through S2, j - 1, then transfer over to S1, j
[Figure: stations S1,j-1 and S2,j-1 (times a1,j-1, a2,j-1) on lines 1 and 2,
leading to S1,j (time a1,j), with transfer cost t2,j-1 from line 2 to line 1]
13
1. Structure of the Optimal Solution
• Suppose that the fastest way through S1, j is
through S1, j – 1
– We must have taken a fastest way from entry through S1, j – 1
– If there were a faster way through S1, j - 1, we would use it instead
• Similarly for S2, j – 1
[Figure: the same two-line diagram (S1,j-1, S2,j-1, S1,j with times a1,j-1,
a2,j-1, a1,j and transfer cost t2,j-1), annotated "Optimal Substructure"]
14
Optimal Substructure
• Generalization: an optimal solution to the
problem “find the fastest way through S1, j” contains
within it an optimal solution to subproblems: “find
the fastest way through S1, j - 1 or S2, j – 1”.
• This is referred to as the optimal substructure
property
• We use this property to construct an optimal
solution to a problem from optimal solutions to
subproblems
15
2. A Recursive Solution
• Define the value of an optimal solution in terms of the optimal
solution to subproblems
16
2. A Recursive Solution (cont.)
• Definitions:
– f* : the fastest time to get through the entire factory
– fi[j] : the fastest time to get from the starting point through station Si,j
f* = min (f1[n] + x1, f2[n] + x2)
17
2. A Recursive Solution (cont.)
• Base case: j = 1, i=1,2 (getting through station 1)
f1[1] = e1 + a1,1
f2[1] = e2 + a2,1
18
2. A Recursive Solution (cont.)
• General Case: j = 2, 3, …,n, and i = 1, 2
• Fastest way through S1, j is either:
– the way through S1, j - 1 then directly through S1, j, or
f1[j - 1] + a1,j
– the way through S2, j - 1, transfer from line 2 to line 1, then through S1, j
f2[j -1] + t2,j-1 + a1,j
f1[j] = min(f1[j - 1] + a1,j ,f2[j -1] + t2,j-1 + a1,j)
[Figure: stations S1,j-1, S2,j-1 (times a1,j-1, a2,j-1) and S1,j (time a1,j) on
lines 1 and 2, with transfer cost t2,j-1]
19
2. A Recursive Solution (cont.)
e1 + a1,1 if j = 1
f1[j] =
min(f1[j - 1] + a1,j ,f2[j -1] + t2,j-1 + a1,j) if j ≥ 2
e2 + a2,1 if j = 1
f2[j] =
min(f2[j - 1] + a2,j ,f1[j -1] + t1,j-1 + a2,j) if j ≥ 2
20
3. Computing the Optimal Solution
f* = min (f1[n] + x1, f2[n] + x2)
f1[j] = min(f1[j - 1] + a1,j ,f2[j -1] + t2,j-1 + a1,j)
f2[j] = min(f2[j - 1] + a2,j ,f1[j -1] + t1,j-1 + a2,j)
• Solving top-down would result in exponential
running time
[Figure: top-down call tree for f1(5), f2(5), …, f1(1), f2(1); the values for
station j - 1 are requested by both f1(j) and f2(j), so the number of
recomputations doubles at every level (2 times, 4 times, …)]
21
3. Computing the Optimal Solution
• For j ≥ 2, each value fi[j] depends only on the
values of f1[j – 1] and f2[j - 1]
• Idea: compute the values of fi[j] in increasing order of j
• Bottom-up approach
– First find optimal solutions to subproblems
– Find an optimal solution to the problem from the
subproblems
22
Example
e1 + a1,1, if j = 1
f1[j] = min(f1[j - 1] + a1,j ,f2[j -1] + t2,j-1 + a1,j) if j ≥ 2
        j:        1      2      3      4      5
f1[j] [l1[j]]:    9    18[1]  20[2]  24[1]  32[1]
f2[j] [l2[j]]:   12    16[1]  22[2]  25[1]  30[2]

f* = 35[1]   (fastest total time 35, exiting from line 1)
23
FASTEST-WAY(a, t, e, x, n)
1. f1[1] ← e1 + a1,1
2. f2[1] ← e2 + a2,1
3. for j ← 2 to n
4. do if f1[j - 1] + a1,j ≤ f2[j - 1] + t2, j-1 + a1, j
5. then f1[j] ← f1[j - 1] + a1, j
6. l1[j] ← 1
7. else f1[j] ← f2[j - 1] + t2, j-1 + a1, j
8. l1[j] ← 2
9. if f2[j - 1] + a2, j ≤ f1[j - 1] + t1, j-1 + a2, j
10. then f2[j] ← f2[j - 1] + a2, j
11. l2[j] ← 2
12. else f2[j] ← f1[j - 1] + t1, j-1 + a2, j
13. l2[j] ← 1
Compute initial values of f1 and f2
Compute the values of
f1[j] and l1[j]
Compute the values of
f2[j] and l2[j]
Running time: O(n)
24
FASTEST-WAY(a, t, e, x, n) (cont.)
14. if f1[n] + x1 ≤ f2[n] + x2
15. then f* = f1[n] + x1
16. l* = 1
17. else f* = f2[n] + x2
18. l* = 2
Compute the values of
the fastest time through the
entire factory
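A compact Python rendering of the same bottom-up computation may help. This is a sketch under the assumption of 0-based lists a[i][j], t[i][j], e, x; the function name fastest_way is illustrative, not from the slides:

```python
def fastest_way(a, t, e, x, n):
    """Bottom-up assembly-line DP (sketch).
    a[i][j]: time at station j of line i; t[i][j]: transfer cost after station j of line i;
    e[i], x[i]: entry/exit times.  All indices are 0-based."""
    f1 = [e[0] + a[0][0]]
    f2 = [e[1] + a[1][0]]
    l1, l2 = [None], [None]          # l_i[j] = line used just before station j on line i
    for j in range(1, n):
        stay1, switch1 = f1[j-1] + a[0][j], f2[j-1] + t[1][j-1] + a[0][j]
        f1.append(min(stay1, switch1)); l1.append(1 if stay1 <= switch1 else 2)
        stay2, switch2 = f2[j-1] + a[1][j], f1[j-1] + t[0][j-1] + a[1][j]
        f2.append(min(stay2, switch2)); l2.append(2 if stay2 <= switch2 else 1)
    if f1[n-1] + x[0] <= f2[n-1] + x[1]:
        return f1[n-1] + x[0], 1, l1, l2      # f*, l*, and the traceback tables
    return f2[n-1] + x[1], 2, l1, l2
```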
25
4. Construct an Optimal Solution
Alg.: PRINT-STATIONS(l, n)
i ← l*
print “line ” i “, station ” n
for j ← n downto 2
do i ←li[j]
print “line ” i “, station ” j - 1
        j:        1      2      3      4      5
f1[j] [l1[j]]:    9    18[1]  20[2]  24[1]  32[1]
f2[j] [l2[j]]:   12    16[1]  22[2]  25[1]  30[2]

l* = 1
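A matching Python sketch of the traceback, assuming the l1, l2 lists and exit line l* returned by the fastest_way sketch above (names are illustrative):

```python
def print_stations(l1, l2, l_star, n):
    """Trace back the chosen line at each station (cf. PRINT-STATIONS)."""
    i = l_star
    print("line", i, ", station", n)
    for s in range(n, 1, -1):                    # stations n, n-1, ..., 2
        i = l1[s - 1] if i == 1 else l2[s - 1]   # list index s-1 holds l_i for station s
        print("line", i, ", station", s - 1)
```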
26
Matrix-Chain Multiplication
Problem: given a sequence A1, A2, …, An,
compute the product:
A1 · A2 · … · An
• Matrix compatibility:
– C = A · B:  colA = rowB,  rowC = rowA,  colC = colB
– C = A1 · A2 · … · Ai · Ai+1 · … · An:  coli = rowi+1,  rowC = rowA1,  colC = colAn
27
MATRIX-MULTIPLY(A, B)
if columns[A] ≠ rows[B]
  then error “incompatible dimensions”
  else for i ← 1 to rows[A]
    do for j ← 1 to columns[B]
      do C[i, j] ← 0
        for k ← 1 to columns[A]
          do C[i, j] ← C[i, j] + A[i, k] · B[k, j]
[Figure: A (rows[A] × cols[A]) * B (rows[B] × cols[B]) = C (rows[A] × cols[B]);
row i of A and column j of B combine over index k to give C[i, j]]
Cost: rows[A] · cols[A] · cols[B] scalar multiplications
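In Python, the same routine and its multiplication count might look like the following sketch (not the textbook code):

```python
def matrix_multiply(A, B):
    """Naive matrix product; performs rows(A) * cols(A) * cols(B) scalar multiplications."""
    if len(A[0]) != len(B):
        raise ValueError("incompatible dimensions")
    C = [[0] * len(B[0]) for _ in range(len(A))]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(A[0])):
                C[i][j] += A[i][k] * B[k][j]
    return C
```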
28
Matrix-Chain Multiplication
• In what order should we multiply the matrices?
A1 · A2 · … · An
• Parenthesize the product to get the order in which
matrices are multiplied
• E.g.: A1 · A2 · A3 = ((A1 · A2) · A3)
= (A1 · (A2 · A3))
• Which one of these orderings should we choose?
– The order in which we multiply the matrices has a
significant impact on the cost of evaluating the product
29
Example
A1 · A2 · A3
• A1: 10 x 100
• A2: 100 x 5
• A3: 5 x 50
1. ((A1 · A2) · A3): A1 · A2 = 10 x 100 x 5 = 5,000 multiplications (result: 10 x 5)
((A1 · A2) · A3) = 10 x 5 x 50 = 2,500 multiplications
Total: 7,500 scalar multiplications
2. (A1 · (A2 · A3)): A2 · A3 = 100 x 5 x 50 = 25,000 multiplications (result: 100 x 50)
(A1 · (A2 · A3)) = 10 x 100 x 50 = 50,000 multiplications
Total: 75,000 scalar multiplications
one order of magnitude difference!!
30
Matrix-Chain Multiplication:
Problem Statement
• Given a chain of matrices A1, A2, …, An, where
Ai has dimensions pi-1 x pi, fully parenthesize the
product A1 · A2 · … · An in a way that minimizes the
number of scalar multiplications.
A1 · A2 · … · Ai · Ai+1 · … · An
p0 x p1   p1 x p2   …   pi-1 x pi   pi x pi+1   …   pn-1 x pn
31
What is the number of possible
parenthesizations?
• Exhaustively checking all possible
parenthesizations is not efficient!
• It can be shown that the number of
parenthesizations grows as Ω(4^n / n^(3/2))
(see page 333 in your textbook)
32
1. The Structure of an Optimal
Parenthesization
• Notation:
Ai…j = Ai · Ai+1 · … · Aj,   i ≤ j
• Suppose that an optimal parenthesization of Ai…j
splits the product between Ak and Ak+1, where
i ≤ k < j
Ai…j = Ai · Ai+1 · … · Aj
     = (Ai · Ai+1 · … · Ak) · (Ak+1 · … · Aj)
     = Ai…k · Ak+1…j
33
Optimal Substructure
Ai…j = Ai…k · Ak+1…j
• The parenthesization of the “prefix” Ai…k must be an
optimal parenthesization
• If there were a less costly way to parenthesize Ai…k, we
could substitute that one in the parenthesization of Ai…j
and produce a parenthesization with a lower cost than
the optimum ⇒ contradiction!
• An optimal solution to an instance of the matrix-chain
multiplication contains within it optimal solutions to
subproblems
34
2. A Recursive Solution
• Subproblem:
determine the minimum cost of parenthesizing
Ai…j = Ai · Ai+1 · … · Aj for 1 ≤ i ≤ j ≤ n
• Let m[i, j] = the minimum number of
multiplications needed to compute Ai…j
– full problem (A1..n): m[1, n]
– i = j: Ai…i = Ai ⇒ m[i, i] = 0, for i = 1, 2, …, n
35
2. A Recursive Solution
• Consider the subproblem of parenthesizing
Ai…j = Ai · Ai+1 · … · Aj for 1 ≤ i ≤ j ≤ n
     = Ai…k · Ak+1…j for i ≤ k < j
• Assume that the optimal parenthesization splits
the product Ai · Ai+1 · … · Aj at k (i ≤ k < j)
m[i, j] = m[i, k] + m[k+1, j] + pi-1 pk pj
where m[i, k] = min # of multiplications to compute Ai…k,
m[k+1, j] = min # of multiplications to compute Ak+1…j, and
pi-1 pk pj = # of multiplications to compute Ai…k · Ak+1…j
36
2. A Recursive Solution (cont.)
m[i, j] = m[i, k] + m[k+1, j] + pi-1pkpj
• We do not know the value of k
– There are j – i possible values for k: k = i, i+1, …, j-1
• Minimizing the cost of parenthesizing the product
Ai Ai+1  Aj becomes:
          0                                               if i = j
m[i, j] = min {m[i, k] + m[k+1, j] + pi-1 pk pj}          if i < j
          i ≤ k < j
37
3. Computing the Optimal Costs
          0                                               if i = j
m[i, j] = min {m[i, k] + m[k+1, j] + pi-1 pk pj}          if i < j
          i ≤ k < j
• Computing the optimal solution recursively takes
exponential time!
• How many subproblems?
– Parenthesize Ai…j for 1 ≤ i ≤ j ≤ n
– One problem for each choice of i and j ⇒ Θ(n^2) subproblems
[Figure: upper-triangular n × n table indexed by i (rows) and j (columns), one
entry per subproblem]
38
3. Computing the Optimal Costs (cont.)
          0                                               if i = j
m[i, j] = min {m[i, k] + m[k+1, j] + pi-1 pk pj}          if i < j
          i ≤ k < j
• How do we fill in the tables m[1..n, 1..n]?
– Determine which entries of the table are used in computing m[i, j]
Ai…j = Ai…k Ak+1…j
– Subproblems are strictly smaller (shorter chains) than the original problem
– Idea: fill in m such that it corresponds to solving problems of
increasing length
39
3. Computing the Optimal Costs (cont.)
          0                                               if i = j
m[i, j] = min {m[i, k] + m[k+1, j] + pi-1 pk pj}          if i < j
          i ≤ k < j
• Length = 1: i = j, i = 1, 2, …, n
• Length = 2: j = i + 1, i = 1, 2, …, n-1
[Figure: the m table; compute rows from bottom to top and, within a row, from
left to right; m[1, n] gives the optimal solution to the problem]
40
Example: min {m[i, k] + m[k+1, j] + pi-1pkpj}
              { m[2, 2] + m[3, 5] + p1 p2 p5     (k = 2)
m[2, 5] = min { m[2, 3] + m[4, 5] + p1 p3 p5     (k = 3)
              { m[2, 4] + m[5, 5] + p1 p4 p5     (k = 4)
• Values m[i, j] depend only on values that have been
previously computed
[Figure: 6 × 6 m table; m[2, 5] is computed from entries m[2, 2..4] in row 2
and m[3..5, 5] in column 5]
41
Example: min {m[i, k] + m[k+1, j] + pi-1 pk pj}
Compute A1 · A2 · A3
• A1: 10 x 100 (p0 x p1)
• A2: 100 x 5 (p1 x p2)
• A3: 5 x 50 (p2 x p3)
m[i, i] = 0 for i = 1, 2, 3
m[1, 2] = m[1, 1] + m[2, 2] + p0 p1 p2 = 0 + 0 + 10 * 100 * 5 = 5,000       (A1 A2)
m[2, 3] = m[2, 2] + m[3, 3] + p1 p2 p3 = 0 + 0 + 100 * 5 * 50 = 25,000      (A2 A3)
m[1, 3] = min { m[1, 1] + m[2, 3] + p0 p1 p3 = 75,000    (A1 (A2 A3))
              { m[1, 2] + m[3, 3] + p0 p2 p3 = 7,500     ((A1 A2) A3)
        = 7,500
Resulting m / s tables: m[1, 2] = 5,000 (s = 1), m[2, 3] = 25,000 (s = 2),
m[1, 3] = 7,500 (s = 2); diagonal entries m[i, i] = 0.
42
Matrix-Chain-Order(p)
[Figure: MATRIX-CHAIN-ORDER pseudocode]   Running time: O(n^3)
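Since the pseudocode itself appears only as a figure on this slide, here is a minimal Python sketch of the same bottom-up table-filling (matrix_chain_order is an illustrative name; m and s are 1-indexed, row/column 0 unused):

```python
import math

def matrix_chain_order(p):
    """Bottom-up matrix-chain DP (sketch).  p[0..n] are the dimensions:
    matrix A_i is p[i-1] x p[i].  Returns the cost table m and split table s."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):                 # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):                  # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

m, s = matrix_chain_order([10, 100, 5, 50])
print(m[1][3], s[1][3])    # 7500 2, matching the example above
```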
43
4. Construct the Optimal Solution
• In a similar matrix s we
keep the optimal
values of k
• s[i, j] = a value of k
such that an optimal
parenthesization of
Ai..j splits the product
between Ak and Ak+1
[Figure: the s table, indexed like m, storing for each subproblem the split
point k]
44
4. Construct the Optimal Solution
• s[1, n] is associated with
the entire product A1..n
– The final matrix
multiplication will be split
at k = s[1, n]
A1..n = A1..s[1, n] · As[1, n]+1..n
– For each subproduct
recursively find the
corresponding value of k
that results in an optimal
parenthesization
45
4. Construct the Optimal Solution
• s[i, j] = value of k such that the optimal
parenthesization of Ai · Ai+1 · … · Aj splits the
product between Ak and Ak+1

s table for the 6-matrix example (entry s[i, j]):

        i=1  i=2  i=3  i=4  i=5
j = 2:   1
j = 3:   1    2
j = 4:   3    3    3
j = 5:   3    3    3    4
j = 6:   3    3    3    5    5

• s[1, n] = 3 ⇒ A1..6 = A1..3 · A4..6
• s[1, 3] = 1 ⇒ A1..3 = A1..1 · A2..3
• s[4, 6] = 5 ⇒ A4..6 = A4..5 · A6..6
46
4. Construct the Optimal Solution (cont.)
[Same s table as on the previous slide]
PRINT-OPT-PARENS(s, i, j)
if i = j
then print “A”i
else print “(”
PRINT-OPT-PARENS(s, i, s[i, j])
PRINT-OPT-PARENS(s, s[i, j] + 1, j)
print “)”
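A direct Python transcription of this procedure (a sketch; s is assumed to be the 1-indexed split table from the matrix_chain_order sketch above):

```python
def print_opt_parens(s, i, j):
    """Recursively print an optimal parenthesization from the split table s."""
    if i == j:
        print("A" + str(i), end="")
    else:
        print("(", end="")
        print_opt_parens(s, i, s[i][j])
        print_opt_parens(s, s[i][j] + 1, j)
        print(")", end="")
```

With the s table of this example (s[1,6] = 3, s[1,3] = 1, s[2,3] = 2, s[4,6] = 5, s[4,5] = 4), print_opt_parens(s, 1, 6) prints ((A1(A2A3))((A4A5)A6)).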
47
Example: A1 · … · A6
[Same s table as before: s[1..6, 1..6]]
PRINT-OPT-PARENS(s, i, j)
if i = j
then print “A”i
else print “(”
PRINT-OPT-PARENS(s, i, s[i, j])
PRINT-OPT-PARENS(s, s[i, j] + 1, j)
print “)”
P-O-P(s, 1, 6)                        s[1, 6] = 3
i = 1, j = 6: “(”  P-O-P(s, 1, 3)         s[1, 3] = 1
  i = 1, j = 3: “(”  P-O-P(s, 1, 1)  ⇒ “A1”
                     P-O-P(s, 2, 3)       s[2, 3] = 2
    i = 2, j = 3: “(”  P-O-P(s, 2, 2)  ⇒ “A2”
                       P-O-P(s, 3, 3)  ⇒ “A3”
                      “)”
                “)”
                   P-O-P(s, 4, 6) …  ⇒ “( ( A4 A5 ) A6 )”
Final output: ( ( A1 ( A2 A3 ) ) ( ( A4 A5 ) A6 ) )
48
Memoization
• Top-down approach with the efficiency of the typical (bottom-up) dynamic
programming approach
• Maintain an entry in a table for the solution to each
subproblem
– i.e., memoize the inefficient recursive algorithm
• When a subproblem is first encountered, its solution is
computed and stored in that table
• Subsequent “calls” to the subproblem simply look up that
value
49
Memoized Matrix-Chain
Alg.: MEMOIZED-MATRIX-CHAIN(p)
1. n ← length[p] – 1
2. for i ← 1 to n
3.   do for j ← i to n
4.     do m[i, j] ← ∞
5. return LOOKUP-CHAIN(p, 1, n)
Initialize the m table with
large values that indicate
whether the values of m[i, j]
have been computed
Top-down approach
50
Memoized Matrix-Chain
Alg.: LOOKUP-CHAIN(p, i, j)
1. if m[i, j] < ∞
2.   then return m[i, j]
3. if i = j
4.   then m[i, j] ← 0
5.   else for k ← i to j – 1
6.          do q ← LOOKUP-CHAIN(p, i, k) +
                   LOOKUP-CHAIN(p, k+1, j) + pi-1 pk pj
7.             if q < m[i, j]
8.               then m[i, j] ← q
9. return m[i, j]
Running time is O(n3)
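The same top-down idea can be sketched in Python, using functools.lru_cache in place of the explicit m table (illustrative, not the slides' code):

```python
from functools import lru_cache

def memoized_matrix_chain(p):
    """Top-down memoized matrix-chain cost (sketch); A_i has dimensions p[i-1] x p[i]."""
    @lru_cache(maxsize=None)          # the cache plays the role of the m table
    def lookup_chain(i, j):
        if i == j:
            return 0
        return min(lookup_chain(i, k) + lookup_chain(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))
    return lookup_chain(1, len(p) - 1)

print(memoized_matrix_chain([10, 100, 5, 50]))    # 7500
```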
51
Dynamic Programming vs. Memoization
• Advantages of dynamic programming vs.
memoized algorithms
– No overhead for recursion, less overhead for
maintaining the table
– The regular pattern of table accesses may be used to
reduce time or space requirements
• Advantages of memoized algorithms vs.
dynamic programming
– Some subproblems do not need to be solved
53
Elements of Dynamic Programming
• Optimal Substructure
– An optimal solution to a problem contains within it an
optimal solution to subproblems
– Optimal solution to the entire problem is built in a
bottom-up manner from optimal solutions to
subproblems
• Overlapping Subproblems
– If a recursive algorithm revisits the same subproblems
over and over ⇒ the problem has overlapping
subproblems
54
Parameters of Optimal Substructure
• How many subproblems are used in an optimal
solution for the original problem?
– Assembly line: one subproblem (the line that gives the best time)
– Matrix multiplication: two subproblems (subproducts Ai..k, Ak+1..j)
• How many choices we have in determining
which subproblems to use in an optimal solution?
– Assembly line: two choices (line 1 or line 2)
– Matrix multiplication: j - i choices for k (splitting the product)
55
Parameters of Optimal Substructure
• Intuitively, the running time of a dynamic
programming algorithm depends on two factors:
– Number of subproblems overall
– How many choices we look at for each subproblem
• Assembly line
– Θ(n) subproblems (n stations)
– 2 choices for each subproblem
⇒ Θ(n) overall
• Matrix multiplication:
– Θ(n^2) subproblems (1 ≤ i ≤ j ≤ n)
– At most n-1 choices
⇒ Θ(n^3) overall
56
Longest Common Subsequence
• Given two sequences
X = x1, x2, …, xm
Y = y1, y2, …, yn
find a maximum length common subsequence
(LCS) of X and Y
• E.g.:
X = A, B, C, B, D, A, B
• Subsequences of X:
– A subset of the elements of the sequence, taken in order
e.g., ⟨A, B, D⟩, ⟨B, C, D, B⟩, etc.
57
Example
X = A, B, C, B, D, A, B X = A, B, C, B, D, A, B
Y = B, D, C, A, B, A Y = B, D, C, A, B, A
• B, C, B, A and B, D, A, B are longest common
subsequences of X and Y (length = 4)
• B, C, A, however is not a LCS of X and Y
58
Brute-Force Solution
• For every subsequence of X, check whether it’s
a subsequence of Y
• There are 2^m subsequences of X to check
• Each subsequence takes Θ(n) time to check
– scan Y for the first letter, from there scan for the second, and
so on
• Running time: Θ(n 2^m)
59
Making the choice
X = A, B, D, E
Y = Z, B, E
• Choice: include one element into the common
sequence (E) and solve the resulting
subproblem
X = A, B, D, G
Y = Z, B, D
• Choice: exclude an element from a string and
solve the resulting subproblem
60
Notations
• Given a sequence X = x1, x2, …, xm we define
the i-th prefix of X, for i = 0, 1, 2, …, m
Xi = x1, x2, …, xi
• c[i, j] = the length of a LCS of the sequences
Xi = x1, x2, …, xi and Yj = y1, y2, …, yj
61
A Recursive Solution
Case 1: xi = yj
e.g.: Xi = A, B, D, E
Yj = Z, B, E
– Append xi = yj to the LCS of Xi-1 and Yj-1
– Must find a LCS of Xi-1 and Yj-1 ⇒ optimal solution to a
problem includes optimal solutions to subproblems
c[i, j] = c[i - 1, j - 1] + 1
62
A Recursive Solution
Case 2: xi ≠ yj
e.g.: Xi = A, B, D, G
Yj = Z, B, D
– Must solve two problems
• find a LCS of Xi-1 and Yj: Xi-1 = A, B, D and Yj = Z, B, D
• find a LCS of Xi and Yj-1: Xi = A, B, D, G and Yj = Z, B
• Optimal solution to a problem includes optimal
solutions to subproblems
c[i, j] = max { c[i - 1, j], c[i, j-1] }
63
Overlapping Subproblems
• To find a LCS of X and Y
– we may need to find the LCS between X and Yn-1 and
that of Xm-1 and Y
– Both of the above subproblems have the subproblem of
finding the LCS of Xm-1 and Yn-1
• Subproblems share subsubproblems
64
3. Computing the Length of the LCS
          0                             if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1               if xi = yj
          max(c[i, j-1], c[i-1, j])     if xi ≠ yj
[Figure: (m+1) × (n+1) table c, with rows i = 0..m (labeled x1 … xm) and
columns j = 0..n (labeled y1 … yn); the first row and first column are
initialized to 0]
65
Additional Information
          0                             if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1               if xi = yj
          max(c[i, j-1], c[i-1, j])     if xi ≠ yj
A matrix b[i, j]:
• For a subproblem [i, j] it
tells us what choice was
made to obtain the
optimal value
• If xi = yj
     b[i, j] = “↖”
• Else, if c[i - 1, j] ≥ c[i, j-1]
     b[i, j] = “↑”
  else
     b[i, j] = “←”
[Figure: the b and c tables stored together; each entry holds c[i, j] and an
arrow pointing to the neighboring entry (c[i-1, j-1], c[i-1, j], or c[i, j-1])
from which it was computed]
c[i-1,j]
66
LCS-LENGTH(X, Y, m, n)
1. for i ← 1 to m
2. do c[i, 0] ← 0
3. for j ← 0 to n
4. do c[0, j] ← 0
5. for i ← 1 to m
6. do for j ← 1 to n
7. do if xi = yj
8. then c[i, j] ← c[i - 1, j - 1] + 1
9. b[i, j] ← “↖”
10. else if c[i - 1, j] ≥ c[i, j - 1]
11. then c[i, j] ← c[i - 1, j]
12. b[i, j] ← “↑”
13. else c[i, j] ← c[i, j - 1]
14. b[i, j] ← “←”
15.return c and b
The length of the LCS if one of the sequences
is empty is zero
Case 1: xi = yj
Case 2: xi ≠ yj
Running time: Θ(mn)
67
Example
X = A, B, C, B, D, A
Y = B, D, C, A, B, A
0 if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1 if xi = yj
max(c[i, j-1], c[i-1, j]) if xi  yj
0 1 2 6
3 4 5
yj B D A
C A B
5
1
2
0
3
4
6
7
D
A
B
xi
C
B
A
B
0 0 0
0 0 0
0
0
0
0
0
0
0
0

0

0

0 1 1 1
1 1 1

1 2 2

1

1 2 2

2

2
1

1

2

2 3 3

1 2

2

2

3

3

1

2

3

2 3 4
1

2

2

3 4

4
If xi = yj
b[i, j] = “ ”
Else if
c[i - 1, j] ≥ c[i, j-1]
b[i, j] = “  ”
else
b[i, j] = “  ”
68
4. Constructing a LCS
• Start at b[m, n] and follow the arrows
• When we encounter a “↖” in b[i, j] ⇒ xi = yj is an element
of the LCS
[Same c/b table as on the previous slide; following the arrows back from
b[7, 6] yields the LCS ⟨B, C, B, A⟩]
69
PRINT-LCS(b, X, i, j)
1. if i = 0 or j = 0
2. then return
3. if b[i, j] = “↖”
4. then PRINT-LCS(b, X, i - 1, j - 1)
5. print xi
6. elseif b[i, j] = “↑”
7. then PRINT-LCS(b, X, i - 1, j)
8. else PRINT-LCS(b, X, i, j - 1)
Initial call: PRINT-LCS(b, X, length[X], length[Y])
Running time: (m + n)
70
Improving the Code
• What can we say about how each entry c[i, j] is
computed?
– It depends only on c[i -1, j - 1], c[i - 1, j], and
c[i, j - 1]
– Eliminate table b and compute in O(1) which of the
three values was used to compute c[i, j]
– We save (mn) space from table b
– However, we do not asymptotically decrease the
auxiliary space requirements: still need table c
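Following that observation, a sketch of reconstructing an LCS from c alone, reusing the lcs_length sketch from above (illustrative names):

```python
def reconstruct_lcs(X, Y, c):
    """Rebuild an LCS from the c table alone, without the b table."""
    i, j, out = len(X), len(Y), []
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:          # the "diagonal" case: part of the LCS
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:  # came from above
            i -= 1
        else:                             # came from the left
            j -= 1
    return "".join(reversed(out))

# Example (with c from the lcs_length sketch above):
# reconstruct_lcs("ABCBDAB", "BDCABA", c)  ->  "BCBA"
```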
71
Improving the Code
• If we only need the length of the LCS
– LCS-LENGTH works only on two rows of c at a time
• The row being computed and the previous row
– We can reduce the asymptotic space requirements by
storing only these two rows
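A sketch of that two-row variant (length only; names are illustrative):

```python
def lcs_length_two_rows(X, Y):
    """Length-only LCS using two rows of the c table (previous and current)."""
    m, n = len(X), len(Y)
    prev = [0] * (n + 1)
    for i in range(1, m + 1):
        curr = [0] * (n + 1)
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[n]

print(lcs_length_two_rows("ABCBDAB", "BDCABA"))    # 4
```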