Combinatorial Optimization
CSE 301
Lecture 1
Dynamic Programming
2
Dynamic Programming
• An algorithm design technique (like divide and
conquer)
• Divide and conquer
– Partition the problem into independent subproblems
– Solve the subproblems recursively
– Combine the solutions to solve the original problem
3
DP - Two key ingredients
• Two key ingredients for an optimization problem
to be suitable for a dynamic-programming
solution:
1. Optimal substructure: each substructure is optimal (the principle of optimality).
2. Overlapping subproblems: subproblems are dependent (otherwise, a divide-and-conquer approach is the better choice).
4
Three basic components
• The development of a dynamic-programming
algorithm has three basic components:
– The recurrence relation (for defining the value of an
optimal solution);
– The tabular computation (for computing the value of
an optimal solution);
– The traceback (for delivering an optimal solution).
5
Fibonacci numbers
The Fibonacci numbers are defined by the
following recurrence:
F0 = 0, F1 = 1, and Fi = Fi-1 + Fi-2 for i > 1.
6
How to compute F10 ?
F10 = F9 + F8
F9 = F8 + F7
F8 = F7 + F6
……
The same subproblems (F8, F7, F6, …) appear over and over again in the recursion tree.
7
Dynamic Programming
• Applicable when subproblems are not independent
– Subproblems share subsubproblems
E.g.: Fibonacci numbers:
• Recurrence: F(n) = F(n-1) + F(n-2)
• Boundary conditions: F(1) = 0, F(2) = 1
• Compute: F(5) = 3, F(3) = 1, F(4) = 2
– A divide and conquer approach would repeatedly solve the
common subproblems
– Dynamic programming solves every subproblem just once and
stores the answer in a table
8
Tabular computation
• The tabular computation can avoid
recomputation.
F0 F1 F2 F3 F4 F5 F6 F7 F8 F9 F10
0 1 1 2 3 5 8 13 21 34 55
Result
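As a concrete illustration of the tabular computation, here is a minimal bottom-up sketch in Python (the function name fib and the driver line are ours, added for illustration, not part of the lecture):

def fib(n):
    # Table F[0..n]: F[0] = 0, F[1] = 1, F[i] = F[i-1] + F[i-2] for i > 1.
    if n == 0:
        return 0
    F = [0] * (n + 1)
    F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]   # each value is computed once, from the table
    return F[n]

print(fib(10))   # 55, matching the table above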
9
Dynamic Programming Algorithm
1. Characterize the structure of an optimal
solution
2. Recursively define the value of an optimal
solution
3. Compute the value of an optimal solution in a
bottom-up fashion
4. Construct an optimal solution from computed
information
10
Longest increasing subsequence (LIS)
• The longest increasing subsequence problem is to find
a longest increasing subsequence of a given
sequence of distinct integers a1a2…an.
e.g. 9 2 5 3 7 11 8 10 13 6
2 3 7 and 5 7 10 13 are increasing subsequences.
9 7 11 and 3 5 11 13 are not increasing subsequences.
We want to find a longest one.
11
A naive approach for LIS
• Let L[i] be the length of a longest increasing
subsequence ending at position i.
L[i] = 1 + max{ L[j] : 0 ≤ j ≤ i-1 and aj < ai }
(use a dummy a0 = minimum, and L[0] = 0)
Index   0   1   2   3   4   5   6   7   8   9  10
Input   0   9   2   5   3   7  11   8  10  13   6
Length  0   1   1   2   2   3   4   4   5   6   3
Prev   -1   0   0   2   2   4   5   5   7   8   4
The subsequence 2, 3, 7, 8, 10, 13 is a
longest increasing subsequence.
This method runs in O(n²) time.
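A minimal Python sketch of this O(n²) method, assuming the recurrence above (the function name lis_quadratic is illustrative; it skips the dummy a0 and traces the answer back through the Prev links):

def lis_quadratic(a):
    # L[i]: length of a longest increasing subsequence ending at a[i]
    # prev[i]: index of the previous element on that subsequence (-1 if none)
    n = len(a)
    L = [1] * n
    prev = [-1] * n
    for i in range(n):
        for j in range(i):
            if a[j] < a[i] and L[j] + 1 > L[i]:
                L[i] = L[j] + 1
                prev[i] = j
    # trace back from the position with the largest L value
    best = max(range(n), key=lambda i: L[i])
    seq = []
    while best != -1:
        seq.append(a[best])
        best = prev[best]
    return seq[::-1]

print(lis_quadratic([9, 2, 5, 3, 7, 11, 8, 10, 13, 6]))
# [2, 5, 7, 8, 10, 13], one LIS of length 6 (2, 3, 7, 8, 10, 13 is another)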
12
An O(n log n) method for LIS
• Define BestEnd[k] to be the smallest possible
ending value of an increasing subsequence of length k.
Processing the sequence 9 2 5 3 7 11 8 10 13 6 from left
to right, BestEnd[1], BestEnd[2], … evolve as follows:
after 9:   9
after 2:   2
after 5:   2 5
after 3:   2 3
after 7:   2 3 7
after 11:  2 3 7 11
after 8:   2 3 7 8
after 10:  2 3 7 8 10
after 13:  2 3 7 8 10 13
13
An O(n log n) method for LIS
• Define BestEnd[k] to be the smallest possible
ending value of an increasing subsequence of length k.
Processing the last element 6 of 9 2 5 3 7 11 8 10 13 6
replaces the 7, since 6 is a smaller ending value for length 3:
before 6:  2 3 7 8 10 13
after 6:   2 3 6 8 10 13
The length of the LIS is the largest k for which BestEnd[k] exists, here 6.
For each position, we perform
a binary search to update
BestEnd. Therefore, the
running time is O(n log n).
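A possible Python sketch of the O(n log n) method, assuming the BestEnd idea above and using Python's bisect module for the binary search (the name lis_length is ours):

from bisect import bisect_left

def lis_length(a):
    # best_end[k-1] = smallest possible ending value of an increasing
    # subsequence of length k found so far
    best_end = []
    for x in a:
        pos = bisect_left(best_end, x)   # binary search: first entry >= x
        if pos == len(best_end):
            best_end.append(x)           # x extends the longest subsequence so far
        else:
            best_end[pos] = x            # x is a smaller ending value for length pos+1
    return len(best_end)

print(lis_length([9, 2, 5, 3, 7, 11, 8, 10, 13, 6]))   # 6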
Sum of Subset Problem
• Problem:
– Suppose you are given N positive integers
A[1..N], and you are required to produce another number
K as the sum of a subset of the A[1..N] numbers. How can it be
done using a dynamic programming approach?
• Example:
N = 6, A[1..N] = {2, 5, 8, 12, 6, 14}, K = 19
Result: 2 + 5 + 12 = 19
14
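One way to realize the dynamic programming approach suggested here is a boolean reachability table over the sums 0..K. The sketch below is one such formulation, not necessarily the one intended in the lecture; the function name subset_sum is ours:

def subset_sum(a, k):
    # reachable[s] is True if some subset of the numbers seen so far sums to s
    reachable = [False] * (k + 1)
    reachable[0] = True
    for x in a:
        # scan sums from high to low so each number is used at most once
        for s in range(k, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[k]

print(subset_sum([2, 5, 8, 12, 6, 14], 19))   # True (2 + 5 + 12 = 19)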
Coin Change Problem
• Suppose you are given n types of coins, C1, C2,
…, Cn, and another number K.
• Is it possible to make K using above types of
coin?
– Number of each coin is infinite
– Number of each coin is finite
• Find the minimum number of coins required to
make K.
– Number of each coin is infinite
– Number of each coin is finite
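For the variant with an infinite number of each coin, a standard DP over the amounts 0..K answers both questions (feasibility, and the minimum number of coins). The sketch below is illustrative (the function name min_coins and the sample coins are ours); the finite-supply variants would additionally need to bound how often each coin is used, e.g. with a 0-1-knapsack-style loop:

def min_coins(coins, k):
    INF = float("inf")
    # need[v] = minimum number of coins summing to v (INF if v cannot be made)
    need = [0] + [INF] * k
    for v in range(1, k + 1):
        for c in coins:
            if c <= v and need[v - c] + 1 < need[v]:
                need[v] = need[v - c] + 1
    # making K is possible exactly when need[k] is finite
    return need[k] if need[k] != INF else -1

print(min_coins([1, 5, 7], 18))   # 4  (7 + 5 + 5 + 1)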
16
Maximum-sum interval
• Given a sequence of real numbers a1a2…an ,
find a consecutive subsequence with the
maximum sum.
9 –3 1 7 –15 2 3 –4 2 –7 6 –2 8 4 -9
For each position, we can compute the maximum-sum
interval starting at that position in O(n) time. Therefore, a
naive algorithm runs in O(n²) time.
Try Yourself
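A sketch of the naive O(n²) method just described (the names are illustrative); the faster dynamic-programming solution is left as the exercise:

def max_sum_interval_naive(a):
    # for each start position, scan to the right keeping a running sum (O(n) per start)
    best = a[0]
    best_range = (0, 0)
    for i in range(len(a)):
        running = 0
        for j in range(i, len(a)):
            running += a[j]
            if running > best:
                best, best_range = running, (i, j)
    return best, best_range

a = [9, -3, 1, 7, -15, 2, 3, -4, 2, -7, 6, -2, 8, 4, -9]
print(max_sum_interval_naive(a))   # (16, (10, 13)), i.e. the interval 6 -2 8 4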
17
The Knapsack Problem
• The 0-1 knapsack problem
– A thief robbing a store finds n items: the i-th item is
worth vi dollars and weighs wi pounds (vi, wi integers)
– The thief can only carry W pounds in his knapsack
– Items must be taken entirely or left behind
– Which items should the thief take to maximize the
value of his load?
• The fractional knapsack problem
– Similar to above
– The thief can take fractions of items
18
The 0-1 Knapsack Problem
• Thief has a knapsack of capacity W
• There are n items: the i-th item has value vi and
weight wi
• Goal:
– find xi ∈ {0, 1}, i = 1, 2, .., n, such that
Σ wixi ≤ W and
Σ xivi is maximized
19
0-1 Knapsack - Greedy Strategy
• E.g.: the knapsack capacity is W = 50 pounds.
Item 1 weighs 10 pounds and is worth $60 ($6/pound);
item 2 weighs 20 pounds and is worth $100 ($5/pound);
item 3 weighs 30 pounds and is worth $120 ($4/pound).
Taking items 1 and 2 gives $60 + $100 = $160, while
taking items 2 and 3 gives $100 + $120 = $220.
• None of the solutions involving the greedy
choice (item 1) leads to an optimal solution
– The greedy choice property does not hold
20
0-1 Knapsack - Dynamic Programming
• P(i, w) – the maximum profit that can be
obtained from items 1 to i, if the
knapsack has size w
• Case 1: thief takes item i
P(i, w) = vi + P(i - 1, w - wi)
• Case 2: thief does not take item i
P(i, w) = P(i - 1, w)
21
0-1 Knapsack - Dynamic Programming
The table P has rows i = 0..n (one row per item) and columns
w = 0..W (one column per capacity). Row 0 and column 0 are
filled with 0 first; every remaining entry is then computed from
the two entries P(i - 1, w) and P(i - 1, w - wi) in the previous row:
P(i, w) = max {vi + P(i - 1, w-wi), P(i - 1, w) }
          (item i was taken)       (item i was not taken)
22
P(i, w) = max {vi + P(i - 1, w-wi), P(i - 1, w) }
Example: W = 5
Item   Weight   Value
1      2        12
2      1        10
3      3        20
4      2        15

The filled table P(i, w), rows i = 0..4 and columns w = 0..5:
i\w   0    1    2    3    4    5
0     0    0    0    0    0    0
1     0    0   12   12   12   12
2     0   10   12   22   22   22
3     0   10   12   22   30   32
4     0   10   15   25   30   37

Row 1: P(1, 1) = P(0, 1) = 0; P(1, 2) = max{12+0, 0} = 12; P(1, 3) = max{12+0, 0} = 12; P(1, 4) = max{12+0, 0} = 12; P(1, 5) = max{12+0, 0} = 12
Row 2: P(2, 1) = max{10+0, 0} = 10; P(2, 2) = max{10+0, 12} = 12; P(2, 3) = max{10+12, 12} = 22; P(2, 4) = max{10+12, 12} = 22; P(2, 5) = max{10+12, 12} = 22
Row 3: P(3, 1) = P(2, 1) = 10; P(3, 2) = P(2, 2) = 12; P(3, 3) = max{20+0, 22} = 22; P(3, 4) = max{20+10, 22} = 30; P(3, 5) = max{20+12, 22} = 32
Row 4: P(4, 1) = P(3, 1) = 10; P(4, 2) = max{15+0, 12} = 15; P(4, 3) = max{15+10, 22} = 25; P(4, 4) = max{15+12, 30} = 30; P(4, 5) = max{15+22, 32} = 37
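A compact Python sketch of this table computation, including the traceback described on the next slide (the names knapsack_01, weights and values are ours):

def knapsack_01(weights, values, W):
    n = len(weights)
    # P[i][w] = maximum profit using items 1..i with knapsack capacity w
    P = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(W + 1):
            P[i][w] = P[i - 1][w]                        # item i not taken
            if wi <= w and vi + P[i - 1][w - wi] > P[i][w]:
                P[i][w] = vi + P[i - 1][w - wi]          # item i taken
    # trace back which items were taken
    taken, w = [], W
    for i in range(n, 0, -1):
        if P[i][w] != P[i - 1][w]:
            taken.append(i)
            w -= weights[i - 1]
    return P[n][W], sorted(taken)

print(knapsack_01([2, 1, 3, 2], [12, 10, 20, 15], 5))   # (37, [1, 2, 4])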
23
Reconstructing the Optimal Solution
i\w   0    1    2    3    4    5
0     0    0    0    0    0    0
1     0    0   12   12   12   12
2     0   10   12   22   22   22
3     0   10   12   22   30   32
4     0   10   15   25   30   37
• Start at P(n, W)
• When you go left-up, item i has been taken
• When you go straight up, item i has not been
taken
• Item 4
• Item 2
• Item 1
24
Overlapping Subproblems
The same recurrence is applied to every entry of the table:
P(i, w) = max {vi + P(i - 1, w-wi), P(i - 1, w) }
E.g.: several subproblems in row i (entries P(i, w') for different w') may
depend on the same entry P(i-1, w) of the previous row, so the subproblems overlap.
25
Longest Common Subsequence (LCS)
Application: comparison of two DNA strings
Ex: X= {A B C B D A B }, Y= {B D C A B A}
Longest Common Subsequence:
X = A B C B D A B
Y = B D C A B A
Brute force algorithm would compare each
subsequence of X with the symbols in Y
26
Longest Common Subsequence
• Given two sequences
X = x1, x2, …, xm
Y = y1, y2, …, yn
find a maximum length common subsequence
(LCS) of X and Y
• E.g.:
X = A, B, C, B, D, A, B
• Subsequences of X:
– A subset of elements in the sequence taken in order
e.g. A, B, D or B, C, D, B, etc.
27
Example
X = A, B, C, B, D, A, B X = A, B, C, B, D, A, B
Y = B, D, C, A, B, A Y = B, D, C, A, B, A
• B, C, B, A and B, D, A, B are longest common
subsequences of X and Y (length = 4)
• B, C, A, however, is not an LCS of X and Y
28
Brute-Force Solution
• For every subsequence of X, check whether it’s
a subsequence of Y
• There are 2^m subsequences of X to check
• Each subsequence takes O(n) time to check
– scan Y for first letter, from there scan for second, and
so on
• Running time: O(n · 2^m)
29
LCS Algorithm
• First we’ll find the length of LCS. Later we’ll modify
the algorithm to find LCS itself.
• Define Xi, Yj to be the prefixes of X and Y of length i
and j respectively
• Define c[i,j] to be the length of LCS of Xi and Yj
• Then the length of LCS of X and Y will be c[m,n]
c[i, j] =  c[i-1, j-1] + 1              if x[i] = y[j]
           max(c[i, j-1], c[i-1, j])    otherwise
30
LCS recursive solution
• We start with i = j = 0 (empty substrings of x and y)
• Since X0 and Y0 are empty strings, their LCS is
always empty (i.e. c[0,0] = 0)
• LCS of empty string and any other string is empty,
so for every i and j: c[0, j] = c[i,0] = 0
c[i, j] =  c[i-1, j-1] + 1              if x[i] = y[j]
           max(c[i, j-1], c[i-1, j])    otherwise
31
LCS recursive solution
• When we calculate c[i,j], we consider two cases:
• First case: x[i] = y[j]:
– one more symbol in strings X and Y matches, so the length
of the LCS of Xi and Yj equals the length of the LCS of the smaller
strings Xi-1 and Yj-1, plus 1
c[i, j] =  c[i-1, j-1] + 1              if x[i] = y[j]
           max(c[i, j-1], c[i-1, j])    otherwise
32
LCS recursive solution
• Second case: x[i] != y[j]
– As the symbols don't match, our solution is not improved, and
the length of LCS(Xi , Yj) is the same as before (i.e. the
maximum of LCS(Xi, Yj-1) and LCS(Xi-1, Yj))
c[i, j] =  c[i-1, j-1] + 1              if x[i] = y[j]
           max(c[i, j-1], c[i-1, j])    otherwise
Why not just take the length of LCS(Xi-1, Yj-1) ?
33
3. Computing the Length of the LCS
           0                            if i = 0 or j = 0
c[i, j] =  c[i-1, j-1] + 1              if xi = yj
           max(c[i, j-1], c[i-1, j])    if xi ≠ yj

The table c has rows i = 0..m (one row per prefix of X = x1x2…xm) and
columns j = 0..n (one column per prefix of Y = y1y2…yn). Row 0 and
column 0 are filled with 0 first; the remaining entries are then computed row by row.
34
Additional Information
           0                            if i = 0 or j = 0
c[i, j] =  c[i-1, j-1] + 1              if xi = yj
           max(c[i, j-1], c[i-1, j])    if xi ≠ yj

A matrix b[i, j]:
• For a subproblem [i, j] it
tells us what choice was
made to obtain the
optimal value
• If xi = yj
b[i, j] = “↖”
• Else, if
c[i - 1, j] ≥ c[i, j-1]
b[i, j] = “↑”
else
b[i, j] = “←”
(The arrow points to the entry, c[i-1, j-1], c[i-1, j] or c[i, j-1],
from which c[i, j] was obtained.)
35
LCS-LENGTH(X, Y, m, n)
1. for i ← 1 to m
2.     do c[i, 0] ← 0
3. for j ← 0 to n
4.     do c[0, j] ← 0
5. for i ← 1 to m
6.     do for j ← 1 to n
7.         do if xi = yj
8.             then c[i, j] ← c[i - 1, j - 1] + 1
9.                  b[i, j] ← “↖”
10.            else if c[i - 1, j] ≥ c[i, j - 1]
11.                then c[i, j] ← c[i - 1, j]
12.                     b[i, j] ← “↑”
13.                else c[i, j] ← c[i, j - 1]
14.                     b[i, j] ← “←”
15. return c and b

The length of the LCS is zero if one of the sequences
is empty (lines 1-4).
Case 1: xi = yj (lines 7-9). Case 2: xi ≠ yj (lines 10-14).
Running time: Θ(mn)
36
Example
X = A, B, C, B, D, A
Y = B, D, C, A, B, A
0 if i = 0 or
j = 0
c[i, j] = c[i-1, j-1] + 1 if xi = yj
max(c[i, j-1], c[i-1, j]) if xi  yj
0 1 2 6
3 4 5
yj B D A
C A B
5
1
2
0
3
4
6
7
D
A
B
xi
C
B
A
B
0 0 0
0 0 0
0
0
0
0
0
0
0
0

0

0

0 1 1 1
1 1 1

1 2 2

1

1 2 2

2

2
1

1

2

2 3 3

1 2

2

2

3

3

1

2

3

2 3 4
1

2

2

3 4

4
If xi = yj
b[i, j] = “ ”
Else if c[i -
1, j] ≥ c[i, j-1]
b[i, j] = “  ”
else
b[i, j] = “  ”
37
4. Constructing a LCS
• Start at b[m, n] and follow the arrows
• When we encounter a “↖” in b[i, j], xi = yj is an element
of the LCS
Tracing the arrows back from b[7, 6] in the table of the previous
slide visits “↖” entries for x6 = A, x4 = B, x3 = C and x2 = B,
giving the LCS B, C, B, A.
38
PRINT-LCS(b, X, i, j)
1. if i = 0 or j = 0
2.     then return
3. if b[i, j] = “↖”
4.     then PRINT-LCS(b, X, i - 1, j - 1)
5.          print xi
6. elseif b[i, j] = “↑”
7.     then PRINT-LCS(b, X, i - 1, j)
8. else PRINT-LCS(b, X, i, j - 1)

Initial call: PRINT-LCS(b, X, length[X], length[Y])
Running time: O(m + n)
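For reference, a runnable Python sketch that combines LCS-LENGTH with the traceback; it compares symbols directly instead of storing the b table, which reconstructs the same LCS. The name lcs is ours:

def lcs(X, Y):
    m, n = len(X), len(Y)
    # c[i][j] = length of an LCS of the prefixes X[:i] and Y[:j]
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    # traceback, mirroring PRINT-LCS
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))   # "BCBA", length 4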
39
Improving the Code
• If we only need the length of the LCS
– LCS-LENGTH works only on two rows of c at a time
• The row being computed and the previous row
– We can reduce the asymptotic space requirements by
storing only these two rows
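A short sketch of this two-row space optimization (the name lcs_length_two_rows is ours):

def lcs_length_two_rows(X, Y):
    n = len(Y)
    prev = [0] * (n + 1)           # row i-1 of c
    for xi in X:
        curr = [0] * (n + 1)       # row i of c, being computed
        for j in range(1, n + 1):
            if xi == Y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(curr[j - 1], prev[j])
        prev = curr
    return prev[n]

print(lcs_length_two_rows("ABCBDAB", "BDCABA"))   # 4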
40
LCS Algorithm Running Time
• LCS algorithm calculates the values of each entry of
the array c[m,n]
• So what is the running time?
O(m*n)
since each c[i,j] is calculated in constant
time, and there are m*n elements in the
array
Rock Climbing Problem
• A rock climber wants to get from
the bottom of a rock to the top
by the safest possible path.
• At every step, he reaches for
handholds above him; some
holds are safer than others.
• From every place, he can only
reach a few nearest handholds.
Rock climbing (cont)
At every step our climber can reach exactly three
handholds: above, above and to the right and
above and to the left.
Suppose we have a
wall instead of the rock.
There is a table of “danger ratings” provided. The
“Danger” of a path is the sum of danger ratings of
all handholds on the path.
Rock Climbing (cont)
•We represent the wall as a
table.
•Every cell of the table contains
the danger rating of the
corresponding block.
2 8 9 5 8
4 4 6 2 3
5 7 5 6 1
3 2 5 4 8
The obvious greedy algorithm does not give an
optimal solution.
The rating of this greedy path, which takes the holds rated
2, 5, 4 and 2 from the bottom up, is 13.
The rating of an optimal path, which takes the holds rated
4, 1, 2 and 5, is 12.
However, we can solve this problem by a
dynamic programming strategy in polynomial
time.
Idea: once we know the rating of a path to
every handhold on a layer, we can easily
compute the ratings of the paths to the
holds on the next layer.
For the top layer, that gives us an
answer to the problem itself.
For every handhold, there is only one
“path” rating. Once we have reached a
hold, we don’t need to know how we got
there to move to the next level.
This is called an “optimal substructure” property.
Once we know optimal solutions to
subproblems, we can compute an optimal
solution to the problem itself.
Recursive solution:
To find the best way to get to stone j in row
i, check the cost of getting to the stones
• (i-1,j-1),
• (i-1,j) and
• (i-1,j+1), and take the cheapest.
Problem: each recursion level makes three
calls for itself, making a total of 3^n calls –
too much!
Solution - memoization
We query the value of A(i,j) over and over
again.
Instead of computing it each time, we can
compute it once, and remember the value.
A simple recurrence allows us to compute
A(i,j) from values below.
Dynamic programming
• Step 1: Describe an array of values you want
to compute.
• Step 2: Give a recurrence for computing later
values from earlier (bottom-up).
• Step 3: Give a high-level program.
• Step 4: Show how to use values in the array
to compute an optimal solution.
Rock climbing: step 1.
• Step 1: Describe an array of values you want
to compute.
• For 1 ≤ i ≤ n and 1 ≤ j ≤ m, define A(i,j) to
be the cumulative rating of the least dangerous
path from the bottom to the hold (i,j).
• The rating of the best path to the top will be the
minimal value in the last row of the array.
Rock climbing: step 2.
• Step 2: Give a recurrence for computing later values from earlier
(bottom-up).
• Let C(i,j) be the rating of the hold (i,j). There are three cases for
A(i,j):
• Left (j=1): C(i,j)+min{A(i-1,j),A(i-1,j+1)}
• Right (j=m): C(i,j)+min{A(i-1,j-1),A(i-1,j)}
• Middle: C(i,j)+min{A(i-1,j-1),A(i-1,j),A(i-1,j+1)}
• For the first row (i=1), A(i,j)=C(i,j).
Rock climbing: simpler step 2
• Add initialization row: A(0,j)=0. No danger to stand on
the ground.
• Add two initialization columns:
A(i,0) = A(i,m+1) = ∞. It is infinitely dangerous to try to
hold on to the air where the wall ends.
• Now the recurrence becomes, for every i,j:
A(i,j) = C(i,j)+min{A(i-1,j-1),A(i-1,j),A(i-1,j+1)}
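A short Python sketch of steps 1-3 under the initialization just described (the names safest_path_rating and wall_bottom_up are ours; the wall is listed bottom row first, matching the C(i,j) table used in the example that follows):

def safest_path_rating(C):
    # C[i][j] = danger rating of hold (i, j); C[0] is the bottom layer
    INF = float("inf")
    n, m = len(C), len(C[0])
    # A[i][j] = least cumulative danger of a path from the ground to hold (i, j),
    # with a ground row of zeros and infinitely dangerous border columns
    A = [[0] * (m + 2)] + [[INF] * (m + 2) for _ in range(n)]
    A[0][0] = A[0][m + 1] = INF
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            A[i][j] = C[i - 1][j - 1] + min(A[i - 1][j - 1], A[i - 1][j], A[i - 1][j + 1])
    return min(A[n][1:m + 1])   # best rating of a path reaching the top layer

wall_bottom_up = [
    [3, 2, 5, 4, 8],
    [5, 7, 5, 6, 1],
    [4, 4, 6, 2, 3],
    [2, 8, 9, 5, 8],
]
print(safest_path_rating(wall_bottom_up))   # 12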
Rock climbing: example
C(i,j): A(i,j):
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   ·   ·   ·   ·   ·   ∞
2     ∞   ·   ·   ·   ·   ·   ∞
3     ∞   ·   ·   ·   ·   ·   ∞
4     ∞   ·   ·   ·   ·   ·   ∞
Initialization: A(i,0) = A(i,m+1) = ∞, A(0,j) = 0
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
Rock climbing: example
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   ·   ·   ·   ·   ·   ∞
3     ∞   ·   ·   ·   ·   ·   ∞
4     ∞   ·   ·   ·   ·   ·   ∞
The values in the first row are the same as C(i,j).
Rock climbing: example
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
A(2,1) = 5 + min{∞, 3, 2} = 7.
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   7   ·   ·   ·   ·   ∞
3     ∞   ·   ·   ·   ·   ·   ∞
4     ∞   ·   ·   ·   ·   ·   ∞
Rock climbing: example
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
A(2,1) = 5 + min{∞, 3, 2} = 7.  A(2,2) = 7 + min{3, 2, 5} = 9.
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   7   9   ·   ·   ·   ∞
3     ∞   ·   ·   ·   ·   ·   ∞
4     ∞   ·   ·   ·   ·   ·   ∞
Rock climbing: example
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
A(2,1) = 5 + min{∞, 3, 2} = 7.  A(2,2) = 7 + min{3, 2, 5} = 9.
A(2,3) = 5 + min{2, 5, 4} = 7.
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   7   9   7   ·   ·   ∞
3     ∞   ·   ·   ·   ·   ·   ∞
4     ∞   ·   ·   ·   ·   ·   ∞
Rock climbing: example
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
The best cumulative rating on the second row is 5.
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   7   9   7  10   5   ∞
3     ∞   ·   ·   ·   ·   ·   ∞
4     ∞   ·   ·   ·   ·   ·   ∞
Rock climbing: example
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
The best cumulative rating on the third row is 7.
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   7   9   7  10   5   ∞
3     ∞  11  11  13   7   8   ∞
4     ∞   ·   ·   ·   ·   ·   ∞
Rock climbing: example
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
The best cumulative rating on the last row is 12.
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   7   9   7  10   5   ∞
3     ∞  11  11  13   7   8   ∞
4     ∞  13  19  16  12  15   ∞
Rock climbing: example
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
The best cumulative rating on the last row is 12.
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   7   9   7  10   5   ∞
3     ∞  11  11  13   7   8   ∞
4     ∞  13  19  16  12  15   ∞
So the rating of the best path to the top
is 12.
Rock climbing example: step 4
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   7   9   7  10   5   ∞
3     ∞  11  11  13   7   8   ∞
4     ∞  13  19  16  12  15   ∞
To find the actual path we need to retrace backwards
the decisions made during the calculation of A(i,j).
Rock climbing example: step 4
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   7   9   7  10   5   ∞
3     ∞  11  11  13   7   8   ∞
4     ∞  13  19  16  12  15   ∞
The last hold was (4,4).
To find the actual path we need to retrace backwards
the decisions made during the calculation of A(i,j).
Rock climbing example: step 4
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   7   9   7  10   5   ∞
3     ∞  11  11  13   7   8   ∞
4     ∞  13  19  16  12  15   ∞
The hold before the last
was (3,4), since
min{13,7,8} was 7.
To find the actual path we need to retrace backwards
the decisions made during the calculation of A(i,j).
Rock climbing example: step 4
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
To find the actual path we need to retrace backwards
the decisions made during the calculation of A(i,j).
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   7   9   7  10   5   ∞
3     ∞  11  11  13   7   8   ∞
4     ∞  13  19  16  12  15   ∞
The hold before that
was (2,5), since
min{7,10,5} was 5.
Rock climbing example: step 4
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
To find the actual path we need to retrace backwards
the decisions made during the calculation of A(i,j).
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   7   9   7  10   5   ∞
3     ∞  11  11  13   7   8   ∞
4     ∞  13  19  16  12  15   ∞
Finally, the first hold
was (1,4), since
min{5,4,8} was 4.
Rock climbing example: step 4
3 2 5 4 8
5 7 5 6 1
4 4 6 2 3
2 8 9 5 8
C(i,j): A(i,j):
We are done!
i\j   0   1   2   3   4   5   6
0     ∞   0   0   0   0   0   ∞
1     ∞   3   2   5   4   8   ∞
2     ∞   7   9   7  10   5   ∞
3     ∞  11  11  13   7   8   ∞
4     ∞  13  19  16  12  15   ∞
Printing out the solution recursively
PrintBest(A,i,j) // Printing the best path ending at (i,j)
if (i==0) OR (j==0) OR (j==m+1)
return;
if (A[i-1,j-1]<=A[i-1,j]) AND (A[i-1,j-1]<=A[i-1,j+1])
PrintBest(A,i-1,j-1);
elseif (A[i-1,j]<=A[i-1,j-1]) AND (A[i-1,j]<=A[i-1,j+1])
PrintBest(A,i-1,j);
elseif (A[i-1,j+1]<=A[i-1,j-1]) AND (A[i-1,j+1]<=A[i-1,j])
PrintBest(A,i-1,j+1);
printf(i,j)
  • 67. Printing out the solution recursively PrintBest(A,i,j) // Printing the best path ending at (i,j) if (i==0) OR (j=0) OR (j=m+1) return; if (A[i-1,j-1]<=A[i-1,j]) AND (A[i-1,j-1]<=A[i-1,j+1]) PrintBest(A,i-1,j-1); elseif (A[i-1,j]<=A[i-1,j-1]) AND (A[i-1,j]<=A[i-1,j+1]) PrintBest(A,i-1,j); elseif (A[i-1,j+1]<=A[i-1,j-1]) AND (A[i-1,j+1]<=A[i-1,j]) PrintBest(A,i-1,j+1); printf(i,j)