Introduction
CSE 221: Algorithms
Dynamic Programming
Mumit Khan
Fatema Tuz Zohora
Computer Science and Engineering
BRAC University
References
1 Jon Kleinberg and Éva Tardos, Algorithm Design. Pearson Education, 2006.
2 T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, Second Edition.
The MIT Press, September 2001.
Last modified: November 27, 2012
This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
Licensed under CSE 221: Algorithms 1 / 53
Introduction Memoization Dynamic programming Weighted interval sched
Contents
1 Introduction
Memoization
Dynamic programming
Weighted interval scheduling problem
0/1 Knapsack problem
Coin changing problem
What problems can be solved by DP?
Conclusion
Dynamic Programming (DP)
Build up the solution by computing solutions to the subproblems.
Don’t solve the same subproblem twice; instead, save the solution so it can be re-used later on.
Often used for a large class of optimization problems.
Unlike greedy algorithms, DP implicitly solves all subproblems.
We motivate the case for DP with Memoization, a top-down technique, and then move on to Dynamic Programming proper, a bottom-up technique.
“Greedy is evil, Dynamic Programming is good.” – Prof. Jeff Erickson, University of Illinois, Urbana-Champaign.
Recursive solution to Fibonacci numbers
Definition (Fibonacci numbers)
The Fibonacci numbers are given by the following sequence:
⟨0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, . . .⟩
and described by the following recurrence:
Fib(n) = n if n = 0 or 1
Fib(n) = Fib(n − 1) + Fib(n − 2) if n ≥ 2
Straightforward recursive algorithm
Fibonacci(n) ▷ n ≥ 0
1 if n = 0 or n = 1
2 then return n
3 else return Fibonacci(n − 1) + Fibonacci(n − 2)
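As a concrete illustration (our own code, not part of the original slides), the pseudocode above translates directly into Python:

```python
def fibonacci(n):
    """Naive recursive Fibonacci, mirroring the slide's pseudocode."""
    if n == 0 or n == 1:  # base cases: Fib(0) = 0, Fib(1) = 1
        return n
    # each call spawns two further calls, which is the source of the
    # exponential running time discussed on the next slide
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # → 55
```

Even moderate inputs (say, n around 40) already take noticeable time with this version.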
Recursion tree
Complexity
This recursive algorithm for Fibonacci numbers has exponential running time!
To be precise, T(n) = O(ϕⁿ), where ϕ = (1 + √5)/2 is the golden ratio.
Redundant computations
Note how fib(n − 2) and fib(n − 3) are each being computed twice. In fact, computing fib(n − 2) involves computing a whole subtree, and likewise for computing fib(n − 3).
Observations
Spectacular redundancy in computation – how many times are we computing fib(n − 2)? fib(n − 3)?
What if we compute and save the result of fib(i) for i ∈ {2, 3, . . . , n} the first time, and then re-use it each time afterward?
Ah, we’ve just (re)discovered Memo(r)ization!
Memoization
Definition (Memoization)
The process of saving solutions to subproblems so that they can be re-used later without redundant computation.
Basic idea
Typically, the solutions to subproblems (i.e., the intermediate solutions) are saved in a global array, where they are later looked up and re-used as needed.
1 At each step of the computation, first see if the solution to the subproblem has already been found and saved.
2 If so, simply return the saved solution.
3 If not, compute the solution, and save it before returning it.
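As an aside (not in the original slides), modern Python ships exactly this bookkeeping as a standard-library decorator, functools.lru_cache; the fib function below is our own illustration of the idea:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # remember every result, keyed by the argument
def fib(n):
    if n <= 1:
        return n  # base cases
    # each distinct subproblem is computed once; later calls hit the cache
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # → 832040
```

The decorator implements steps 1–3 above: look up the saved solution, return it if present, otherwise compute and save it.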
Memoized recursive algorithm for Fibonacci numbers
M-Fibonacci(n) ▷ n ≥ 0, global F = [0 . . n]
1 if n = 0 or n = 1
2 then return n ▷ Our base conditions.
3 if F[n] is empty ▷ No saved solution found for n.
4 then F[n] ← M-Fibonacci(n − 1) + M-Fibonacci(n − 2)
5 return F[n]
Questions
What is this global array F? It is used to store the values of the intermediate results, and it must be initialized by the caller to all empty.
What is an appropriate sentinel to indicate that F[i], 0 ≤ i ≤ n, has not been solved yet (i.e., is empty)? Use −1, which can never be a valid Fibonacci value.
Memoized . . . Fibonacci numbers (continued)
Fibonacci(n) ▷ n ≥ 0
▷ Allocate an array F[0 . . n] to save results (length[F] = n + 1).
1 for i ← 0 to n
2 do F[i] ← −1 ▷ No solution computed for i yet (sentinel).
3 return M-Fibonacci(F, n)
M-Fibonacci(F, n) ▷ n ≥ 0, F = [0 . . n]
1 if n ≤ 1
2 then return n
3 if F[n] = −1 ▷ No saved solution found for n.
4 then F[n] ← M-Fibonacci(F, n − 1) + M-Fibonacci(F, n − 2)
5 return F[n]
Running time
Each element F[2] . . . F[n] is filled in just once, in Θ(1) time each, so T(n) = Θ(n).
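A direct transliteration of the two procedures above into Python might look like this (a sketch, with names matching the pseudocode rather than any particular library):

```python
def fibonacci(n):
    """Driver: allocate the table, seeded with the -1 sentinel."""
    F = [-1] * (n + 1)  # -1 means "no solution computed for i yet"
    return m_fibonacci(F, n)

def m_fibonacci(F, n):
    if n <= 1:
        return n          # base conditions
    if F[n] == -1:        # no saved solution found for n
        F[n] = m_fibonacci(F, n - 1) + m_fibonacci(F, n - 2)
    return F[n]

print(fibonacci(50))  # → 12586269025
```

Note how the table F is threaded through the calls explicitly, playing the role of the slides' global array.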
Memoization highlights
The idea is to re-use saved solutions, trading off space for time.
Any recursive algorithm can be memoized, but it only helps if there is redundancy in computing solutions to subproblems (in other words, if there are overlapping subproblems).
For any recursive algorithm where redundant solutions are computed, memoization is an appropriate technique.
Often called Top-down Dynamic Programming.
Questions to ask (and remember)
What are the drawbacks, if any, of memoization?
Would all recursive algorithms benefit from memoization? For example, would the recursive algorithm to compute the factorial of a number benefit from memoization?
Dynamic programming
Note how the recursive algorithm computes the Fibonacci number n top down by computing (and saving) solutions for smaller values.
Idea: why not build up the solution bottom-up, starting from the base case(s) all the way to n?
This bottom-up construction gives us our first Dynamic Programming algorithm.
Dynamic programming algorithm for Fibonacci numbers
Fibonacci(n) ▷ n ≥ 0
1 if n ≤ 1 ▷ Guard the base cases, so F[1] is never written when n = 0.
2 then return n
3 F[0] ← 0
4 F[1] ← 1
5 for i ← 2 to n
6 do F[i] ← F[i − 1] + F[i − 2]
7 return F[n]
T(n) = Θ(n)
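In Python, the bottom-up table fill above might be sketched as follows (our own code, mirroring the pseudocode):

```python
def fibonacci(n):
    """Bottom-up DP: build the table from the base cases upward."""
    if n <= 1:
        return n               # base cases, no table needed
    F = [0] * (n + 1)
    F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]  # each entry filled exactly once
    return F[n]

print(fibonacci(50))  # → 12586269025
```

Since each step reads only the previous two entries, the table can be replaced by two scalars, reducing the space from Θ(n) to Θ(1) while keeping the Θ(n) running time.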
Dynamic programming (continued)
The pattern
1 Formulate the problem recursively. Write a formula for the whole problem as a simple combination of the answers to smaller subproblems.
2 Build solutions to the recurrence from the bottom up. Write an algorithm that starts with the base case, and works its way up to the final solution by considering the subproblems in the correct order.
Observations
1 Must ensure that the recurrence is correct, of course!
2 Need a “place” to store the solutions to subproblems, and need to look these solutions up when needed. Typically, but not always, a multi-dimensional table is used as storage.
Weighted interval scheduling problem
Definition (Weighted interval scheduling problem)
Given a set of schedules I = {Iᵢ}, with associated weights W = {wᵢ}, find A ⊆ I such that the members of A are non-conflicting and the total weight ∑ᵢ∈A wᵢ is maximized.
Example (using an optimal strategy)
|A| = 1, ∑ᵢ∈A wᵢ = 3.
What now?
The first step is to formulate a recursive solution, but first we need to figure out what the subproblems are.
Developing a recursive solution
Let W be an instance of a weighted interval problem.
As in the greedy approach, we sort the intervals according to finish times such that fᵢ ≤ fⱼ for i < j (“a natural order of the subproblems”).
Let ϑ be an optimal solution (even if we have no idea what it is yet).
All we can say about ϑ is the following: interval n (the last interval) either belongs to ϑ, or it doesn’t.
If n ∈ ϑ: then clearly all intervals that conflict with n are not members of ϑ. ϑ then contains n, plus an optimal solution to all intervals that do not conflict with n. We now need a quick way of computing the list of intervals that conflict with n.
If n ∉ ϑ: then ϑ contains an optimal solution for the intervals {i₁, i₂, . . . , iₙ₋₁}.
Developing a recursive solution (continued)
Example (an instance of a weighted interval problem)
For each interval i, compute p(i), the rightmost interval among the non-conflicting preceding intervals of i. Define p(j) = 0 if no interval i < j is disjoint from j.
For a given interval i, p(i) means that intervals {p(i) + 1, p(i) + 2, . . . , i − 1} overlap with it. For example, p(6) = 3, which means that intervals {4, 5} overlap interval 6.
Alternatively, intervals {1, 2, . . . , p(i)} do not overlap interval i. For example, p(6) = 3 means that intervals {1, 2, 3} do not overlap interval 6.
Developing a recursive solution (continued)
If n ∈ ϑ, then ϑ must include, in addition to interval n, an
optimal solution to the subproblem consisting of intervals
{1, 2, . . . , p(n)}. If ϑ(n) is an optimal solution to the
subproblem for intervals {1, 2, . . . , n}, then:
 ϑ(n) = wn + ϑ(p(n))
If n /
∈ ϑ, then ϑ simply contains an optimal solution to the
subproblem consisting of the intervals {1, 2, . . . , n − 1}.
 ϑ(n) = ϑ(n − 1)
Since an optimal solution must maximize the sum of the
weights in the intervals it contains, we accept the larger of the
two.
 ϑ(n) = max(wn + ϑ(p(n)), ϑ(n − 1))
Licensed under CSE 221: Algorithms 19 / 53
Developing a recursive solution (continued)
Recursive algorithm for an optimal value
If OPT(j) is the value of an optimal solution to the subproblem for
intervals {1, 2, . . . , j}, for any j ∈ {1, 2, . . . , n}, then:
OPT(j) = max(wj + OPT(p(j)), OPT(j − 1))
Extracting the intervals in an optimal solution
The interval j is in an optimal solution if and only if the
first of the two options is at least as large as the second.
That is, interval j belongs to an optimal solution on the set
{1, 2, . . . , j} if and only if
wj + OPT(p(j)) ≥ OPT(j − 1)
A recursive algorithm
WIS(j)
1 if j = 0
2 then return 0
3 else return max(wj + WIS(p(j)), WIS(j − 1))
The initial call is WIS(n) for intervals {1, 2, . . . , n} sorted in
non-decreasing order of the finishing times.
The recursion tree grows very rapidly, leading to exponential
running time. The tree when p(j) = j − 2 for all j shows how
quickly it grows.
There are many overlapping subproblems, so the obvious
choice is to memoize the recursion.
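As a concrete sketch, WIS can be transcribed directly into Python. The weights w and the p(j) values below are a small hypothetical instance, not the one from the slides; both arrays are 1-indexed via a dummy entry at index 0.

```python
# Hypothetical instance: 4 intervals sorted by finish time.
# w[j] is the weight of interval j; p[j] is the last interval
# that finishes before interval j starts (index 0 is a dummy).
w = [0, 2, 4, 4, 7]
p = [0, 0, 0, 1, 2]

def wis(j):
    """Optimal value for intervals 1..j (exponential time: the same
    subproblems are recomputed over and over)."""
    if j == 0:
        return 0
    return max(w[j] + wis(p[j]), wis(j - 1))

print(wis(4))  # → 11 for this instance
```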
Memoizing the recursion
M-WIS(j)
1 if j = 0
2 then return 0
3 elseif M[j] is empty
4 then M[j] ← max(wj + M-WIS(p(j)), M-WIS(j − 1))
5 return M[j]
Each entry M[j] gets filled in only once, in Θ(1) time, and
there are n + 1 entries, so M-WIS(n) takes Θ(n) time.
Of course, sorting the intervals by the finish times takes
Θ(n lg n) time.
This memoized algorithm plus sorting the intervals takes
Θ(n lg n) + Θ(n) = Θ(n lg n) time.
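A minimal memoized version in Python, on the same hypothetical instance as before (w and p are assumptions, not the slides' data); the list M plays the role of the table M[0 . . n]:

```python
w = [0, 2, 4, 4, 7]   # hypothetical weights, 1-indexed
p = [0, 0, 0, 1, 2]   # hypothetical p(j) values
n = 4
M = [None] * (n + 1)  # M[j] is "empty" until computed

def m_wis(j):
    """Memoized optimal value for intervals 1..j: each M[j] is
    computed once, so the total work is Θ(n)."""
    if j == 0:
        return 0
    if M[j] is None:
        M[j] = max(w[j] + m_wis(p[j]), m_wis(j - 1))
    return M[j]

print(m_wis(n))  # → 11, the same value as the plain recursion
```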
Computing a solution in addition to its value
The memoized algorithm only computes the optimal value,
but does not extract the intervals that make up the solution.
The key to extracting the solution is to note that interval j is in
ϑ if and only if wj + M[p(j)] ≥ M[j − 1]. This provides two
ways of extracting the intervals in the optimal solution:
1 Trace back from M[n] and extract the solution by checking
which choice was made – p(j) or j − 1 – when each M[j] was
computed.
2 Whenever a choice is made between the two options, save in
pred[j], the predecessor pointer, the choice that was made
between j − 1 and p(j).
Computing a solution in addition to its value (continued)
The first way recursively extracts an optimal set of intervals
for a problem of size 1 ≤ j ≤ n.
Calling WIS-find-solution(n) extracts all the intervals in
the optimal solution.
WIS-find-solution(j)
1 if j = 0
2 then Output nothing
3 else
4 if wj + M[p(j)] ≥ M[j − 1]
5 then Output j
6 WIS-find-solution(p(j))
7 else WIS-find-solution(j − 1)
Computing a solution in addition to its value (continued)
The second way requires that M-WIS use an auxiliary array
pred[0 . . n] to save the predecessor of each interval in the
solution.
Initialize pred[j] = 0 for all 0 ≤ j ≤ n.
M-WIS(j)
1 if j = 0
2 then return 0
3 elseif M[j] is empty
4 then if wj + M-WIS(p(j)) > M-WIS(j − 1)
5 then M[j] ← wj + M-WIS(p(j))
6 pred[j] ← p(j)
7 else M[j] ← M-WIS(j − 1)
8 pred[j] ← j − 1
9 return M[j]
Computing a solution in addition to its value (continued)
Now that we have pred[j] filled in, we start from M[n] and work
backwards.
1 If pred[j] = p(j), then we did add the jth interval to the final
solution, and we continue with j ← p(j).
2 If pred[j] ≠ p(j), then we did not add the jth interval to the
final solution, and we continue with j ← j − 1.
WIS-find-solution(j)
1 if j = 0
2 then Output nothing
3 else
4 if pred[j] = p(j)
5 then Output j
6 WIS-find-solution(p(j))
7 else WIS-find-solution(j − 1)
Can you come up with an iterative version?
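One possible iterative traceback (a sketch answering the question above), again on a small hypothetical instance rather than the slides' data:

```python
w = [0, 2, 4, 4, 7]   # hypothetical weights, 1-indexed
p = [0, 0, 0, 1, 2]   # hypothetical p(j) values
n = 4

# Fill M bottom-up first.
M = [0] * (n + 1)
for j in range(1, n + 1):
    M[j] = max(w[j] + M[p[j]], M[j - 1])

def find_solution():
    """Walk back from j = n, keeping interval j whenever taking it
    is at least as good as skipping it."""
    solution, j = [], n
    while j > 0:
        if w[j] + M[p[j]] >= M[j - 1]:
            solution.append(j)
            j = p[j]
        else:
            j -= 1
    return sorted(solution)

print(M[n], find_solution())  # → 11 [2, 4]
```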
Developing a Dynamic Programming algorithm
The value of an optimal solution OPT(j) for any
j ∈ {1, 2, 3, . . . , n} depends on the values of OPT(p(j)) and
OPT(j − 1).
We can build the table M[j] bottom-up, starting from the
base case of j = 0, up to n, using the recursive
formulation: M[j] = max(wj + M[p(j)], M[j − 1]).
Dynamic programming algorithm
WIS(n)
1 M[0] ← 0
2 for j ← 1 to n
3 do M[j] ← max(wj + M[p(j)], M[j − 1])
4 return M[n]
T(n) = Θ(n)
Computing a solution in addition to its value
WIS(n)
1 M[0] ← 0
2 for j ← 1 to n
3 do if wj + M[p(j)] > M[j − 1]
4 then M[j] ← wj + M[p(j)]
5 pred[j] ← p(j)
6 else M[j] ← M[j − 1]
7 pred[j] ← j − 1
8 return M[n]
WIS-find-solution(n)
1 j ← n
2 while j > 0
3 do if pred[j] = p(j)
4 then Output j
5 j ← pred[j]
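Putting the bottom-up table fill and the pred-based traceback together, a Python sketch (with a hypothetical instance, not the slides' figure). Note that the pred[j] = p(j) test is ambiguous when p(j) = j − 1; a separate boolean "taken" flag per entry would avoid that.

```python
def wis_dp(w, p):
    """Bottom-up WIS with predecessor pointers. w and p are
    1-indexed lists with a dummy entry at index 0."""
    n = len(w) - 1
    M = [0] * (n + 1)
    pred = [0] * (n + 1)
    for j in range(1, n + 1):
        if w[j] + M[p[j]] > M[j - 1]:
            M[j] = w[j] + M[p[j]]
            pred[j] = p[j]       # interval j was taken
        else:
            M[j] = M[j - 1]
            pred[j] = j - 1      # interval j was skipped
    solution, j = [], n
    while j > 0:
        if pred[j] == p[j]:
            solution.append(j)
        j = pred[j]
    return M[n], sorted(solution)

# Hypothetical instance (not the slides' figure):
print(wis_dp([0, 2, 4, 4, 7], [0, 0, 0, 1, 2]))  # → (11, [2, 4])
```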
Weighted Interval Scheduling DP algorithm in action
Optimal value: 8
Optimal solution: {1, 3, 5}
So, you think you understand Dynamic Programming now?
Answer the following questions
1 Instead of sorting the intervals by finish time, what if we
sorted the requests by start time?
2 What if we didn’t sort the requests at all? Would it still work?
3 If all the weights are the same, what does this problem
become? Can you solve it using DP?
Contents
1 Introduction
Memoization
Dynamic programming
Weighted interval scheduling problem
0/1 Knapsack problem
Coin changing problem
What problems can be solved by DP?
Conclusion
0/1 knapsack problem
Definition (0/1 knapsack problem)
Given a set S of n items, such that each item i has a positive
benefit vi and a positive weight wi, the goal is to find the
maximum-benefit subset that does not exceed a given weight W.
Formally, we wish to determine a subset T ⊆ S that maximizes
∑i∈T vi, subject to ∑i∈T wi ≤ W.
Maximum weight: W = 4 kg
Optimal solution: items B and C. Benefit: 370
Developing a recursive solution
Let S be an instance of a 0/1 Knapsack problem, and ϑ be an
optimal solution (even if we have no idea what it is yet).
Note that the presence of an item i in ϑ does not preclude
any other item j ≠ i from being in ϑ.
If item n weighs more than the maximum allowed weight, it
will not be in ϑ.
Otherwise, all we can say about ϑ is the following: item n
(the last one) either belongs to ϑ, or it doesn’t.
If n ∈ ϑ Then the optimal solution contains n, plus an
optimal solution for the other n − 1 items, but
with a reduced maximum weight of W − wn.
If n ∉ ϑ Then ϑ simply contains an optimal solution for
the first n − 1 items, with the maximum allowed
weight W remaining unchanged.
We have two parameters for each subproblem – the items S,
and the maximum allowed weight W.
Developing a recursive solution (continued)
wn > W =⇒ n ∉ ϑ.
 ϑ(n, W) = ϑ(n − 1, W)
Otherwise, n is either ∈ ϑ or ∉ ϑ.
If n ∈ ϑ, then ϑ(n, W) is the value of an optimal solution to the
subproblem for items {1, 2, . . . , n}:
 ϑ(n, W) = vn + ϑ(n − 1, W − wn)
If n ∉ ϑ, then ϑ(n, W) simply contains an optimal solution to
the subproblem consisting of the items {1, 2, . . . , n − 1}:
 ϑ(n, W) = ϑ(n − 1, W)
Since an optimal solution must maximize the sum of the
benefits of the items it contains, we take the larger of the two:
 ϑ(n, W) = max(vn + ϑ(n − 1, W − wn), ϑ(n − 1, W))
Developing a recursive solution (continued)
Recursive algorithm for an optimal value
If OPT(j, w) is the value of an optimal solution to the subproblem
for items {1, 2, . . . , j}, for any j ∈ {1, 2, . . . , n}, and with a
maximum allowed weight of w, then:
OPT(j, w) = OPT(j − 1, w) if wj > w,
OPT(j, w) = max(vj + OPT(j − 1, w − wj), OPT(j − 1, w)) otherwise.
Extracting the items in an optimal solution
The item j is in an optimal solution OPT(j, w) if and only if the
first of the two options is at least as large as the second:
vj + OPT(j − 1, w − wj) ≥ OPT(j − 1, w)
A recursive algorithm
Knapsack(j, w)
1 if j = 0 or w = 0
2 then return 0
3 elseif wj > w
4 then return Knapsack(j − 1, w)
5 else return max(vj + Knapsack(j − 1, w − wj),
Knapsack(j − 1, w))
The initial call is Knapsack(n, W).
The recursion tree grows very rapidly, leading to exponential
running time.
There are many overlapping subproblems, so the obvious
choice is to memoize the recursion.
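The recursive Knapsack translates directly into Python; the data here is the Alsuwaiyel instance used later in the slides (W = 9, weights {2, 3, 4, 5}, benefits {3, 4, 5, 7}):

```python
wts = [0, 2, 3, 4, 5]   # 1-indexed weights (index 0 is a dummy)
vals = [0, 3, 4, 5, 7]  # 1-indexed benefits

def knapsack(j, cap):
    """Optimal benefit using items 1..j with capacity cap
    (exponential time without memoization)."""
    if j == 0 or cap == 0:
        return 0
    if wts[j] > cap:
        return knapsack(j - 1, cap)
    return max(vals[j] + knapsack(j - 1, cap - wts[j]),
               knapsack(j - 1, cap))

print(knapsack(4, 9))  # → 12, achieved e.g. by items {1, 2, 3} or {3, 4}
```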
Memoizing the recursion
M-Knapsack(j, w)
1 if j = 0 or w = 0
2 then return 0
3 elseif M[j, w] is empty
4 then if wj > w
5 then M[j, w] ← M-Knapsack(j − 1, w)
6 else M[j, w] ← max(vj + M-Knapsack(j − 1, w − wj),
M-Knapsack(j − 1, w))
7 return M[j, w]
Each entry M[j, w] gets filled in only once, in Θ(1) time,
and there are (n + 1) × (W + 1) entries, so M-Knapsack(n, W)
takes Θ(nW) time.
Is this a linear-time algorithm?
No – this is a pseudo-polynomial-time algorithm, since the
running time depends on the numeric value of W, which is not
polynomial in the size of the input.
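A Python sketch of the memoized version, using a dictionary keyed by (j, cap) for the table M and keeping the wj > w guard from the plain recursion; same Alsuwaiyel data:

```python
wts = [0, 2, 3, 4, 5]   # 1-indexed weights
vals = [0, 3, 4, 5, 7]  # 1-indexed benefits
M = {}                  # M[(j, cap)] is "empty" until computed

def m_knapsack(j, cap):
    """Memoized 0/1 knapsack: Θ(nW) table entries, Θ(1) work each."""
    if j == 0 or cap == 0:
        return 0
    if (j, cap) not in M:
        if wts[j] > cap:
            M[(j, cap)] = m_knapsack(j - 1, cap)
        else:
            M[(j, cap)] = max(vals[j] + m_knapsack(j - 1, cap - wts[j]),
                              m_knapsack(j - 1, cap))
    return M[(j, cap)]

print(m_knapsack(4, 9))  # → 12
```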
Developing a Dynamic Programming algorithm
Knapsack(n, W)
1 for i ← 0 to n  // no remaining capacity
2 do M[i, 0] ← 0
3 for w ← 0 to W  // no item to choose from
4 do M[0, w] ← 0
5 for j ← 1 to n
6 do for w ← 1 to W
7 do if wj > w  // we cannot take item j
8 then M[j, w] ← M[j − 1, w]
9 else M[j, w] ← max(vj + M[j − 1, w − wj],
M[j − 1, w])
10 return M[n, W]
0/1 Knapsack recursive algorithm in action
Given the following (from M. H. Alsuwaiyel, ex. 7.6):
W = 9
wi = {2, 3, 4, 5}
vi = {3, 4, 5, 7}
0/1 Knapsack DP algorithm in action
Given the following (from M. H. Alsuwaiyel, ex. 7.6):
W = 9
wi = {2, 3, 4, 5}
vi = {3, 4, 5, 7}
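The table fill can be checked in Python against this instance; a traceback (take item j exactly when M[j, w] ≠ M[j − 1, w]) also recovers one optimal subset. This is a sketch, not a reproduction of the slides' table:

```python
def knapsack_dp(wts, vals, W):
    """Bottom-up 0/1 knapsack. wts and vals are 1-indexed lists
    with a dummy entry at index 0. Returns (value, items)."""
    n = len(wts) - 1
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        for cap in range(1, W + 1):
            if wts[j] > cap:              # cannot take item j
                M[j][cap] = M[j - 1][cap]
            else:
                M[j][cap] = max(vals[j] + M[j - 1][cap - wts[j]],
                                M[j - 1][cap])
    # Trace back: item j is in the solution iff it changed the entry.
    items, cap = [], W
    for j in range(n, 0, -1):
        if M[j][cap] != M[j - 1][cap]:
            items.append(j)
            cap -= wts[j]
    return M[n][W], sorted(items)

print(knapsack_dp([0, 2, 3, 4, 5], [0, 3, 4, 5, 7], 9))  # → (12, [1, 2, 3])
```

Note that the optimal value 12 is also achieved by items {3, 4} (weight 4 + 5 = 9); the traceback just returns one of the optimal subsets.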
Introduction Memoization Dynamic programming Weighted interval sched
Related problem: Subset Sums problem
Definition (Subset Sums problem)
Given a set S of n items, such that each item i has a positive
weight wi , the goal is to find the maximum-weight subset that
does not exceed a given weight W .
Licensed under CSE 221: Algorithms 41 / 53
Introduction Memoization Dynamic programming Weighted interval sched
Related problem: Subset Sums problem
Definition (Subset Sums problem)
Given a set S of n items, such that each item i has a positive
weight wi , the goal is to find the maximum-weight subset that
does not exceed a given weight W .
Formally, we wish to determine a subset of S that maximizes
P
i∈S wi , subject to
P
i∈S wi ≤ W .
Licensed under CSE 221: Algorithms 41 / 53
Introduction Memoization Dynamic programming Weighted interval sched
Related problem: Subset Sums problem
Definition (Subset Sums problem)
Given a set S of n items, such that each item i has a positive
weight wi , the goal is to find the maximum-weight subset that
does not exceed a given weight W .
Formally, we wish to determine a subset of S that maximizes
P
i∈S wi , subject to
P
i∈S wi ≤ W .
How is this similar to the 0/1 Knapsack problem?
Licensed under CSE 221: Algorithms 41 / 53
Introduction Memoization Dynamic programming Weighted interval sched
Related problem: Subset Sums problem
Definition (Subset Sums problem)
Given a set S of n items, such that each item i has a positive
weight wi , the goal is to find the maximum-weight subset that
does not exceed a given weight W .
Formally, we wish to determine a subset T ⊆ S that maximizes
Σi∈T wi , subject to Σi∈T wi ≤ W.
How is this similar to the 0/1 Knapsack problem?
Can you solve this using the same algorithm?
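It is the 0/1 Knapsack problem with each item's value equal to its weight (vi = wi), so the same algorithm applies. A minimal reachability sketch (Python for illustration; names are our own):

```python
def max_subset_weight(weights, W):
    """Largest total weight ≤ W achievable by some subset.
    reachable[c] is True iff some subset sums to exactly c."""
    reachable = [True] + [False] * W
    for w in weights:
        # iterate capacities downwards so each item is used at most once
        for c in range(W, w - 1, -1):
            if reachable[c - w]:
                reachable[c] = True
    return max(c for c in range(W + 1) if reachable[c])

print(max_subset_weight([2, 3, 4, 5], 9))  # → 9 (e.g. 2 + 3 + 4)
```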
Contents
1 Introduction
Memoization
Dynamic programming
Weighted interval scheduling problem
0/1 Knapsack problem
Coin changing problem
What problems can be solved by DP?
Conclusion
Coin changing problem
Definition
Given coin denominations in C = {ci }, make change for a given
amount A with the minimum number of coins.
Example
Coin denominations C = {12, 5, 1}; amount to change A = 15
1 Choose 0 twelve-paisa coins, so the remaining amount is 15.
2 Choose 3 five-paisa coins, so the remaining amount is 15 − 3 × 5 = 0.
Solution: 3 coins. (A greedy strategy would instead pick 12 + 1 + 1 + 1, using 4 coins.)
Questions
What is the natural search space? Does this problem have a
Dynamic Programming solution? If so, how do we develop it?
Developing a recursive solution
Coin denominations C = {12, 5, 1}; amount to change A = 15
The best combination of coins for 15 paisa must be one of the
following:
1 Best combination for 15 − 12 = 3 paisa, plus a 12 paisa coin.
2 Best combination for 15 − 5 = 10 paisa, plus a 5 paisa coin.
3 Best combination for 15 − 1 = 14 paisa, plus a 1 paisa coin.
Since we’re minimizing the number of coins, the best
combination would be the minimum of these three choices.
By recursively solving for the best combination, this can be
generalized to |C| denominations to make change for any
amount A.
What are the subproblems?
Developing a recursive solution (continued)
If OPT(p) is the minimum number of coins needed to make change
for amount p with denominations C = {c1, c2, . . . , ck}, then:
The coin ci chosen at any step must be no larger than p, the
amount left at that point.
Once we choose ci ≤ p, OPT(p) = 1 + OPT(p − ci ), since we
have to find the best combination for the remaining amount
(picking a coin smaller than the amount at each step).
Since we don’t know which coin would be chosen, we have to
search all |C| denominations and find the minimum.
The number of coins for 0 amount is 0.
Recurrence
OPT(p) = 0                                   if p = 0
OPT(p) = min i : ci ≤ p {1 + OPT(p − ci )}   if p > 0
A recursive algorithm
Change(n, C)
1 if n = 0
2 then return 0
3 else min ← ∞
4 for i ← 1 to |C|
5 do if ci ≤ n and 1 + Change(n − ci , C) < min
6 then min ← 1 + Change(n − ci , C)
7 return min
The initial call is Change(A, C).
The tree grows very rapidly, leading to exponential running
time.
There are many overlapping subproblems, so the obvious
choice is to memoize the recursion.
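A direct transcription of the recurrence in Python (illustrative only; it works for small amounts, but the running time grows exponentially, as noted above):

```python
import math

def change(n, C):
    """Minimum number of coins from C needed to make change for n.
    Direct recursion over the recurrence; exponential running time."""
    if n == 0:
        return 0
    best = math.inf
    for c in C:
        if c <= n:
            best = min(best, 1 + change(n - c, C))
    return best

print(change(8, [12, 5, 1]))  # → 4 (5 + 1 + 1 + 1)
```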
Memoizing the recursion
M-Change(n, C)
1 if n = 0
2 then return 0
3 else if M[n] is empty
4 then min ← ∞
5 for i ← 1 to |C|
6 do if ci ≤ n and
1 + M-Change(n − ci , C) < min
7 then min ← 1 + M-Change(n − ci , C)
8 M[n] ← min
9 return M[n]
Each entry M[p] gets filled in only once, in Θ(|C|) time,
and there are n + 1 entries, so M-Change(n, C) takes
Θ(n|C|) time.
Another pseudo-polynomial-time algorithm!
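The memoized version in Python (a sketch; a dict plays the role of the table M, and the names are our own):

```python
import math

def m_change(n, C, M=None):
    """Memoized minimum-coin count: each amount is solved at most once."""
    if M is None:
        M = {}          # the memo table, keyed by remaining amount
    if n == 0:
        return 0
    if n not in M:      # "if M[n] is empty"
        best = math.inf
        for c in C:
            if c <= n:
                best = min(best, 1 + m_change(n - c, C, M))
        M[n] = best
    return M[n]

print(m_change(15, [12, 5, 1]))  # → 3 (5 + 5 + 5)
```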
Developing a Dynamic Programming algorithm
Change(n, C)
▷ M = [0 . . n], S = [0 . . n]
1 M[0] ← 0 ▷ no amount to change
2 for p ← 1 to n
3 do min ← ∞
4 for i ← 1 to |C|
5 do if ci ≤ p and 1 + M[p − ci ] < min
6 then min ← 1 + M[p − ci ]
7 coin ← i
8 M[p] ← min
9 S[p] ← coin
10 return M and S
M[p] for all 0 ≤ p ≤ n – the minimum number of coins needed to
make change for p paisa.
S[p] for all 0 ≤ p ≤ n – the index of the coin chosen in computing
an optimal solution for making change for p paisa.
Computing a solution in addition to its value
The S array in the algorithm “remembers” the coin we
use when computing an optimal value for a given amount.
We go backwards using S[n] until n = 0, recovering the coin that
was added at each step.
Coins(S, C, n)
1 while n > 0
2 do Output S[n]
3 n ← n − C[S[n]]
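The bottom-up algorithm and the reconstruction step translate directly to Python (a sketch; M and S mirror the arrays above):

```python
import math

def change(n, C):
    """Bottom-up table fill: M[p] = minimum coins for amount p,
    S[p] = index of a coin used in an optimal solution for p."""
    M = [0] + [math.inf] * n
    S = [0] * (n + 1)
    for p in range(1, n + 1):
        for i, c in enumerate(C):
            if c <= p and 1 + M[p - c] < M[p]:
                M[p] = 1 + M[p - c]
                S[p] = i
    return M, S

def coins(S, C, n):
    """Walk backwards through S to list the coins actually used."""
    used = []
    while n > 0:
        used.append(C[S[n]])
        n -= C[S[n]]
    return used

M, S = change(15, [12, 5, 1])
print(M[15], coins(S, [12, 5, 1], 15))  # → 3 [5, 5, 5]
```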
Contents
1 Introduction
Memoization
Dynamic programming
Weighted interval scheduling problem
0/1 Knapsack problem
Coin changing problem
What problems can be solved by DP?
Conclusion
Problem types solved by Dynamic Programming
The most important part of DP is to set up the subproblem
structure.
DP is not applicable to all optimization problems.
If a problem has the following properties, then it’s likely to
have a dynamic programming solution.
Polynomially many subproblems The total number of
subproblems should be polynomial in the input size,
or else DP may not provide an efficient solution.
Subproblem optimality If the optimal solution to the entire
problem contains optimal solutions to the
subproblems, then the problem has the subproblem
optimality property. This is also called the principle of
optimality.
Dynamic Programming highlights
Dynamic Programming, just like Memoization, avoids
computing solutions to overlapping subproblems by saving
intermediate results, and thus both require space for the
“table”.
Dynamic Programming is a bottom-up technique: it finds
the solution by starting from the base case(s) and working its
way upwards.
Developing a Dynamic Programming solution often requires
some thought about the subproblems, especially about finding
the natural order in which to solve them.
Unlike Memoization, which solves only the needed
subproblems, DP solves all the subproblems, because it does it
bottom-up.
Dynamic Programming, on the other hand, may be much more
efficient because it is iterative, whereas Memoization must pay
the (often significant) overhead of recursion.
Conclusion
Memoization is a top-down technique, and dynamic
programming is a bottom-up technique.
The key to Dynamic programming is in “intelligent” recursion
(the hard part), not in filling up the table (the easy part).
Dynamic Programming has the potential to transform
exponential-time brute-force solutions into polynomial-time
algorithms.
Greed does not pay, Dynamic Programming does!
Licensed under CSE 221: Algorithms 53 / 53
Ad

Recommended

Dynamic programming in Design Analysis and Algorithms
Dynamic programming in Design Analysis and Algorithms
NikunjGoyal20
 
Annotaed slides for dynamic programming algorithm
Annotaed slides for dynamic programming algorithm
johnathangamal27
 
L21_L27_Unit_5_Dynamic_Programming Computer Science
L21_L27_Unit_5_Dynamic_Programming Computer Science
priyanshukumarbt23cs
 
dynamic-programming unit 3 power point presentation
dynamic-programming unit 3 power point presentation
Shrinivasa6
 
5617723.pptx
5617723.pptx
MatthewMhlongo
 
dinosourrrrrrrrrrrrrrrrrrrrrr formula .pptx
dinosourrrrrrrrrrrrrrrrrrrrrr formula .pptx
ShohidulIslamSovon
 
Dynamic pgmming
Dynamic pgmming
Dr. C.V. Suresh Babu
 
DynamicProgramming.pptx
DynamicProgramming.pptx
SaimaShaheen14
 
Dynamicpgmming
Dynamicpgmming
Muhammad Wasif
 
Dynamic Programming.pptx
Dynamic Programming.pptx
MuktarHossain13
 
Module 2ppt.pptx divid and conquer method
Module 2ppt.pptx divid and conquer method
JyoReddy9
 
Elements of Dynamic Programming
Elements of Dynamic Programming
Vishwajeet Shabadi
 
What Is Dynamic Programming? | Dynamic Programming Explained | Programming Fo...
What Is Dynamic Programming? | Dynamic Programming Explained | Programming Fo...
Simplilearn
 
W8L1 Introduction & Fibonacci Numbers part 1.pptx
W8L1 Introduction & Fibonacci Numbers part 1.pptx
sakibahmed181234
 
Dynamic programing
Dynamic programing
AniketSingh609353
 
Dynamic Programing.pptx good for understanding
Dynamic Programing.pptx good for understanding
HUSNAINAHMAD39
 
algorithm_6dynamic_programming.pdf
algorithm_6dynamic_programming.pdf
HsuChi Chen
 
Dynamic Programming Algorithm CSI-504.pdf
Dynamic Programming Algorithm CSI-504.pdf
dinemma1
 
ADA Unit 2.pptx
ADA Unit 2.pptx
AmanKumar879992
 
Design and Analysis of Algorithm-Lecture.pptx
Design and Analysis of Algorithm-Lecture.pptx
bani30122004
 
Dynamic programming
Dynamic programming
Nguyễn Anh
 
Dynamic programming
Dynamic programming
Jay Nagar
 
Dynamic programming - fundamentals review
Dynamic programming - fundamentals review
ElifTech
 
Dynamic programming prasintation eaisy
Dynamic programming prasintation eaisy
ahmed51236
 
Chapter 5.pptx
Chapter 5.pptx
Tekle12
 
Dynamic Programming.pptx
Dynamic Programming.pptx
Thanga Ramya S
 
dynamic programming complete by Mumtaz Ali (03154103173)
dynamic programming complete by Mumtaz Ali (03154103173)
Mumtaz Ali
 
Introduction to dynamic programming
Introduction to dynamic programming
Amisha Narsingani
 
LDMMIA GRAD Student Check-in Orientation Sampler
LDMMIA GRAD Student Check-in Orientation Sampler
LDM & Mia eStudios
 
Nice Dream.pdf /
Nice Dream.pdf /
ErinUsher3
 

More Related Content

Similar to Dynamic Programming: Memoization, Introduction to ALgorithms (20)

Dynamicpgmming
Dynamicpgmming
Muhammad Wasif
 
Dynamic Programming.pptx
Dynamic Programming.pptx
MuktarHossain13
 
Module 2ppt.pptx divid and conquer method
Module 2ppt.pptx divid and conquer method
JyoReddy9
 
Elements of Dynamic Programming
Elements of Dynamic Programming
Vishwajeet Shabadi
 
What Is Dynamic Programming? | Dynamic Programming Explained | Programming Fo...
What Is Dynamic Programming? | Dynamic Programming Explained | Programming Fo...
Simplilearn
 
W8L1 Introduction & Fibonacci Numbers part 1.pptx
W8L1 Introduction & Fibonacci Numbers part 1.pptx
sakibahmed181234
 
Dynamic programing
Dynamic programing
AniketSingh609353
 
Dynamic Programing.pptx good for understanding
Dynamic Programing.pptx good for understanding
HUSNAINAHMAD39
 
algorithm_6dynamic_programming.pdf
algorithm_6dynamic_programming.pdf
HsuChi Chen
 
Dynamic Programming Algorithm CSI-504.pdf
Dynamic Programming Algorithm CSI-504.pdf
dinemma1
 
ADA Unit 2.pptx
ADA Unit 2.pptx
AmanKumar879992
 
Design and Analysis of Algorithm-Lecture.pptx
Design and Analysis of Algorithm-Lecture.pptx
bani30122004
 
Dynamic programming
Dynamic programming
Nguyễn Anh
 
Dynamic programming
Dynamic programming
Jay Nagar
 
Dynamic programming - fundamentals review
Dynamic programming - fundamentals review
ElifTech
 
Dynamic programming prasintation eaisy
Dynamic programming prasintation eaisy
ahmed51236
 
Chapter 5.pptx
Chapter 5.pptx
Tekle12
 
Dynamic Programming.pptx
Dynamic Programming.pptx
Thanga Ramya S
 
dynamic programming complete by Mumtaz Ali (03154103173)
dynamic programming complete by Mumtaz Ali (03154103173)
Mumtaz Ali
 
Introduction to dynamic programming
Introduction to dynamic programming
Amisha Narsingani
 
Dynamic Programming.pptx
Dynamic Programming.pptx
MuktarHossain13
 
Module 2ppt.pptx divid and conquer method
Module 2ppt.pptx divid and conquer method
JyoReddy9
 
Elements of Dynamic Programming
Elements of Dynamic Programming
Vishwajeet Shabadi
 
What Is Dynamic Programming? | Dynamic Programming Explained | Programming Fo...
What Is Dynamic Programming? | Dynamic Programming Explained | Programming Fo...
Simplilearn
 
W8L1 Introduction & Fibonacci Numbers part 1.pptx
W8L1 Introduction & Fibonacci Numbers part 1.pptx
sakibahmed181234
 
Dynamic Programing.pptx good for understanding
Dynamic Programing.pptx good for understanding
HUSNAINAHMAD39
 
algorithm_6dynamic_programming.pdf
algorithm_6dynamic_programming.pdf
HsuChi Chen
 
Dynamic Programming Algorithm CSI-504.pdf
Dynamic Programming Algorithm CSI-504.pdf
dinemma1
 
Design and Analysis of Algorithm-Lecture.pptx
Design and Analysis of Algorithm-Lecture.pptx
bani30122004
 
Dynamic programming
Dynamic programming
Nguyễn Anh
 
Dynamic programming
Dynamic programming
Jay Nagar
 
Dynamic programming - fundamentals review
Dynamic programming - fundamentals review
ElifTech
 
Dynamic programming prasintation eaisy
Dynamic programming prasintation eaisy
ahmed51236
 
Chapter 5.pptx
Chapter 5.pptx
Tekle12
 
Dynamic Programming.pptx
Dynamic Programming.pptx
Thanga Ramya S
 
dynamic programming complete by Mumtaz Ali (03154103173)
dynamic programming complete by Mumtaz Ali (03154103173)
Mumtaz Ali
 
Introduction to dynamic programming
Introduction to dynamic programming
Amisha Narsingani
 

Recently uploaded (20)

LDMMIA GRAD Student Check-in Orientation Sampler
LDMMIA GRAD Student Check-in Orientation Sampler
LDM & Mia eStudios
 
Nice Dream.pdf /
Nice Dream.pdf /
ErinUsher3
 
june 10 2025 ppt for madden on art science is over.pptx
june 10 2025 ppt for madden on art science is over.pptx
roger malina
 
Non-Communicable Diseases and National Health Programs – Unit 10 | B.Sc Nursi...
Non-Communicable Diseases and National Health Programs – Unit 10 | B.Sc Nursi...
RAKESH SAJJAN
 
FIRST DAY HIGH orientation for mapeh subject in grade 10.pptx
FIRST DAY HIGH orientation for mapeh subject in grade 10.pptx
GlysdiEelesor1
 
The Man In The Back – Exceptional Delaware.pdf
The Man In The Back – Exceptional Delaware.pdf
dennisongomezk
 
Energy Balances Of Oecd Countries 2011 Iea Statistics 1st Edition Oecd
Energy Balances Of Oecd Countries 2011 Iea Statistics 1st Edition Oecd
razelitouali
 
ROLE PLAY: FIRST AID -CPR & RECOVERY POSITION.pptx
ROLE PLAY: FIRST AID -CPR & RECOVERY POSITION.pptx
Belicia R.S
 
Capitol Doctoral Presentation -June 2025.pptx
Capitol Doctoral Presentation -June 2025.pptx
CapitolTechU
 
Ray Dalio How Countries go Broke the Big Cycle
Ray Dalio How Countries go Broke the Big Cycle
Dadang Solihin
 
How to Create an Event in Odoo 18 - Odoo 18 Slides
How to Create an Event in Odoo 18 - Odoo 18 Slides
Celine George
 
Paper 108 | Thoreau’s Influence on Gandhi: The Evolution of Civil Disobedience
Paper 108 | Thoreau’s Influence on Gandhi: The Evolution of Civil Disobedience
Rajdeep Bavaliya
 
FEBA Sofia Univercity final diplian v3 GSDG 5.2025.pdf
FEBA Sofia Univercity final diplian v3 GSDG 5.2025.pdf
ChristinaFortunova
 
SPENT QUIZ NQL JR FEST 5.0 BY SOURAV.pptx
SPENT QUIZ NQL JR FEST 5.0 BY SOURAV.pptx
Sourav Kr Podder
 
Assisting Individuals and Families to Promote and Maintain Health – Unit 7 | ...
Assisting Individuals and Families to Promote and Maintain Health – Unit 7 | ...
RAKESH SAJJAN
 
2025 June Year 9 Presentation: Subject selection.pptx
2025 June Year 9 Presentation: Subject selection.pptx
mansk2
 
How to Manage Inventory Movement in Odoo 18 POS
How to Manage Inventory Movement in Odoo 18 POS
Celine George
 
JHS SHS Back to School 2024-2025 .pptx
JHS SHS Back to School 2024-2025 .pptx
melvinapay78
 
GEOGRAPHY-Study Material [ Class 10th] .pdf
GEOGRAPHY-Study Material [ Class 10th] .pdf
SHERAZ AHMAD LONE
 
Revista digital preescolar en transformación
Revista digital preescolar en transformación
guerragallardo26
 
LDMMIA GRAD Student Check-in Orientation Sampler
LDMMIA GRAD Student Check-in Orientation Sampler
LDM & Mia eStudios
 
Nice Dream.pdf /
Nice Dream.pdf /
ErinUsher3
 
june 10 2025 ppt for madden on art science is over.pptx
june 10 2025 ppt for madden on art science is over.pptx
roger malina
 
Non-Communicable Diseases and National Health Programs – Unit 10 | B.Sc Nursi...
Non-Communicable Diseases and National Health Programs – Unit 10 | B.Sc Nursi...
RAKESH SAJJAN
 
FIRST DAY HIGH orientation for mapeh subject in grade 10.pptx
FIRST DAY HIGH orientation for mapeh subject in grade 10.pptx
GlysdiEelesor1
 
The Man In The Back – Exceptional Delaware.pdf
The Man In The Back – Exceptional Delaware.pdf
dennisongomezk
 
Energy Balances Of Oecd Countries 2011 Iea Statistics 1st Edition Oecd
Energy Balances Of Oecd Countries 2011 Iea Statistics 1st Edition Oecd
razelitouali
 
ROLE PLAY: FIRST AID -CPR & RECOVERY POSITION.pptx
ROLE PLAY: FIRST AID -CPR & RECOVERY POSITION.pptx
Belicia R.S
 
Capitol Doctoral Presentation -June 2025.pptx
Capitol Doctoral Presentation -June 2025.pptx
CapitolTechU
 
Ray Dalio How Countries go Broke the Big Cycle
Ray Dalio How Countries go Broke the Big Cycle
Dadang Solihin
 
How to Create an Event in Odoo 18 - Odoo 18 Slides
How to Create an Event in Odoo 18 - Odoo 18 Slides
Celine George
 
Paper 108 | Thoreau’s Influence on Gandhi: The Evolution of Civil Disobedience
Paper 108 | Thoreau’s Influence on Gandhi: The Evolution of Civil Disobedience
Rajdeep Bavaliya
 
FEBA Sofia Univercity final diplian v3 GSDG 5.2025.pdf
FEBA Sofia Univercity final diplian v3 GSDG 5.2025.pdf
ChristinaFortunova
 
SPENT QUIZ NQL JR FEST 5.0 BY SOURAV.pptx
SPENT QUIZ NQL JR FEST 5.0 BY SOURAV.pptx
Sourav Kr Podder
 
Assisting Individuals and Families to Promote and Maintain Health – Unit 7 | ...
Assisting Individuals and Families to Promote and Maintain Health – Unit 7 | ...
RAKESH SAJJAN
 
2025 June Year 9 Presentation: Subject selection.pptx
2025 June Year 9 Presentation: Subject selection.pptx
mansk2
 
How to Manage Inventory Movement in Odoo 18 POS
How to Manage Inventory Movement in Odoo 18 POS
Celine George
 
JHS SHS Back to School 2024-2025 .pptx
JHS SHS Back to School 2024-2025 .pptx
melvinapay78
 
GEOGRAPHY-Study Material [ Class 10th] .pdf
GEOGRAPHY-Study Material [ Class 10th] .pdf
SHERAZ AHMAD LONE
 
Revista digital preescolar en transformación
Revista digital preescolar en transformación
guerragallardo26
 
Ad

Dynamic Programming: Memoization, Introduction to ALgorithms

  • 1. Introduction CSE 221: Algorithms Dynamic Programming Mumit Khan Fatema Tuz Zohora Computer Science and Engineering BRAC University References 1 Jon Kleinberg and Éva Tardos, Algorithm Design. Pearson Education, 2006. 2 T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, Second Edition. The MIT Press, September 2001. Last modified: November 27, 2012 This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License. Licensed under CSE 221: Algorithms 1 / 53
  • 2. Contents 1 Introduction Memoization Dynamic programming Weighted interval scheduling problem 0/1 Knapsack problem Coin changing problem What problems can be solved by DP? Conclusion
  • 3. Dynamic Programming (DP) Build up the solution by computing solutions to the subproblems. Don’t solve the same subproblem twice, but rather save the solution so it can be re-used later on. Often used for a large class of optimization problems. Unlike Greedy algorithms, DP implicitly solves all subproblems. We motivate the case for DP with Memoization – a top-down technique – and then move on to Dynamic Programming – a bottom-up technique. “Greedy is evil, Dynamic Programming is good.” – Prof. Jeff Erickson, University of Illinois, Urbana-Champaign.
  • 10. Recursive solution to Fibonacci numbers
    Definition (Fibonacci numbers): The Fibonacci numbers are given by the sequence ⟨0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, . . .⟩ and described by the recurrence
    Fib(n) = n                          if n = 0 or n = 1
    Fib(n) = Fib(n − 1) + Fib(n − 2)    if n ≥ 2
    Straightforward recursive algorithm:
    Fibonacci(n)  ▷ n ≥ 0
    1 if n = 0 or n = 1
    2   then return n
    3   else return Fibonacci(n − 1) + Fibonacci(n − 2)
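The recursive algorithm above translates almost verbatim; a minimal Python sketch (fine for small n, but exponential-time, as the next slides show):

```python
def fibonacci(n):
    """Naive recursive Fibonacci, mirroring the pseudocode above."""
    if n == 0 or n == 1:
        return n                      # base conditions
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # → 55
```

Small inputs return instantly; try fibonacci(35) to feel the exponential blow-up discussed next.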
  • 13. Recursion tree [figure: the recursion tree for Fibonacci(n)] Complexity: This recursive algorithm for Fibonacci numbers has exponential running time! To be precise, T(n) = O(φ^n), where φ = (1 + √5)/2 is the golden ratio.
  • 19. Redundant computations [figure: the recursion tree again] Note how fib(n − 2) and fib(n − 3) are each being computed twice; in fact, computing each of them involves recomputing a whole subtree. Observations: Spectacular redundancy in computation – how many times are we computing fib(n − 2)? fib(n − 3)? What if we compute and save the result of fib(i) for i ∈ {2, 3, . . . , n} the first time, and then re-use it each time afterward? Ah, we’ve just (re)discovered Memo(r)ization!
  • 27. Memoization Definition (Memoization): The process of saving solutions to subproblems so that they can be re-used later without redundant computation. Basic idea: Typically, the solutions to subproblems (i.e., the intermediate solutions) are saved in a global array, and are later looked up and re-used as needed. 1 At each step of the computation, first see if the solution to the subproblem has already been found and saved. 2 If so, simply return the saved solution. 3 If not, compute the solution, and save it before returning it.
  • 32. Memoized recursive algorithm for Fibonacci numbers
    M-Fibonacci(n)  ▷ n ≥ 0, global F = [0 . . n]
    1 if n = 0 or n = 1
    2   then return n  ▷ Our base conditions.
    3 if F[n] is empty  ▷ No saved solution found for n.
    4   then F[n] ← M-Fibonacci(n − 1) + M-Fibonacci(n − 2)
    5 return F[n]
    Questions: What is this global array F? It’s used to store the values of the intermediate results, and must be initialized by the caller to all empty. What is an appropriate sentinel to indicate that F[i], 0 ≤ i ≤ n, has not been solved yet (i.e., is empty)? Use −1, which is guaranteed to be an invalid value.
  • 37. Memoized . . . Fibonacci numbers (continued)
    Fibonacci(n)  ▷ n ≥ 0; allocate an array F[0 . . n] to save results (length[F] = n + 1)
    1 for i ← 0 to n
    2   do F[i] ← −1  ▷ No solution computed for i yet (sentinel)
    3 return M-Fibonacci(F, n)
    M-Fibonacci(F, n)  ▷ n ≥ 0, F = [0 . . n]
    1 if n ≤ 1
    2   then return n
    3 if F[n] = −1  ▷ No saved solution found for n.
    4   then F[n] ← M-Fibonacci(F, n − 1) + M-Fibonacci(F, n − 2)
    5 return F[n]
    Running time: Each element F[2] . . F[n] is filled in just once, in Θ(1) time each, so T(n) = Θ(n).
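The two-procedure pseudocode above maps directly to Python; a sketch of the memoized version, with the same −1 sentinel:

```python
def fibonacci(n):
    """Wrapper: allocates F[0..n], filled with the -1 sentinel."""
    F = [-1] * (n + 1)               # -1 means "no solution computed yet"
    return m_fibonacci(F, n)

def m_fibonacci(F, n):
    if n <= 1:
        return n                     # base conditions
    if F[n] == -1:                   # no saved solution found for n
        F[n] = m_fibonacci(F, n - 1) + m_fibonacci(F, n - 2)
    return F[n]

print(fibonacci(50))  # → 12586269025, now in linear time
```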
  • 40. Memoization highlights Idea is to re-use saved solutions, trading off space for time. Any recursive algorithm can be memoized, but memoization only helps if there is redundancy in computing solutions to subproblems (in other words, if there are overlapping subproblems). For any recursive algorithm where redundant solutions are computed, Memoization is an appropriate technique. Often called Top-down Dynamic Programming. Questions to ask (and remember): What are the drawbacks, if any, of memoization? Would all recursive algorithms benefit from memoization? For example, would the recursive algorithm to compute the factorial of a number benefit from memoization?
  • 49. Dynamic programming Note how the recursive algorithm computes the Fibonacci number n top down by computing (and saving) solutions for smaller values. Idea: why not build up the solution bottom-up, starting from the base case(s) all the way to n? This bottom-up construction gives us our first Dynamic Programming algorithm.
    Dynamic programming algorithm for Fibonacci numbers:
    Fibonacci(n)  ▷ n ≥ 0
    1 F[0] ← 0
    2 F[1] ← 1
    3 for i ← 2 to n
    4   do F[i] ← F[i − 1] + F[i − 2]
    5 return F[n]
    T(n) = Θ(n)
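The bottom-up loop above, as a Python sketch:

```python
def fibonacci(n):
    """Bottom-up DP Fibonacci: Theta(n) time, Theta(n) space for the table."""
    if n <= 1:
        return n
    F = [0] * (n + 1)                # F[0] = 0 already
    F[1] = 1
    for i in range(2, n + 1):        # fill the table in subproblem order
        F[i] = F[i - 1] + F[i - 2]
    return F[n]
```

Since each step reads only F[i − 1] and F[i − 2], the table could be replaced by two rolling variables for Θ(1) space; the full table is kept here to match the pseudocode.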
  • 54. Dynamic programming (continued) The pattern: 1 Formulate the problem recursively. Write a formula for the whole problem as a simple combination of the answers to smaller subproblems. 2 Build solutions to the recurrence from the bottom up. Write an algorithm that starts with the base case, and works its way up to the final solution by considering the subproblems in the correct order. Observations: 1 Must ensure that the recurrence is correct, of course! 2 Need a “place” to store the solutions to subproblems, and need to look these solutions up when needed. Typically, but not always, a multi-dimensional table is used as storage.
  • 62. Weighted interval scheduling problem Definition (Weighted interval scheduling problem): Given a set of schedules I = {Ii} with associated weights W = {wi}, find A ⊆ I such that the members of A are non-conflicting and the total weight Σ_{i∈A} wi is maximized. Example: for the instance shown, an optimal strategy gives |A| = 1, Σ_{i∈A} wi = 3. What now? The first step is to formulate a recursive solution, but first we need to figure out what the subproblems are.
  • 65. Developing a recursive solution Let W be an instance of a weighted interval problem. As in the greedy approach, we sort the intervals according to finish times such that fi ≤ fj for i < j (“a natural order of the subproblems”). Let ϑ be an optimal solution (even if we have no idea what it is yet). All we can say about ϑ is the following: interval n (the last interval) either belongs to ϑ, or it doesn’t. If n ∈ ϑ: then clearly all intervals that conflict with n are not members of ϑ; ϑ then contains n, plus an optimal solution to all intervals that do not conflict with n. We now need a quick way of computing the list of conflicting intervals for n. If n ∉ ϑ: then ϑ contains an optimal solution for the intervals {i1, i2, . . . , in−1}.
  • 71. Developing a recursive solution (continued) Example (an instance of a weighted interval problem): For each interval i, compute p(i), the rightmost interval among the non-conflicting preceding intervals of i. Define p(j) = 0 if no request i < j is disjoint from j. For a given interval i, p(i) means that intervals {p(i) + 1, p(i) + 2, . . . , i − 1} overlap with it; for example, p(6) = 3 means that intervals {4, 5} overlap interval 6. Alternatively, intervals {1, 2, . . . , p(i)} do not overlap interval i; for example, p(6) = 3 means that intervals {1, 2, 3} do not overlap interval 6.
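The slides do not prescribe how to compute p(·); one standard way is a sketch like the following, assuming intervals are given as (start, finish) pairs already sorted by finish time, and that fi ≤ sj counts as non-conflicting (the representation and function name are mine, not the slides’):

```python
import bisect

def compute_p(intervals):
    """intervals: (start, finish) pairs sorted by finish time.
    Returns p where p[j] (1-indexed) is the rightmost interval i < j
    whose finish time is <= interval j's start time, or 0 if none."""
    finishes = [f for (s, f) in intervals]
    p = [0] * (len(intervals) + 1)   # p[0] unused, by the p(j) = 0 convention
    for j, (s, f) in enumerate(intervals, start=1):
        # count of earlier intervals with finish <= s = 1-based index of
        # the rightmost compatible interval (0 if there is none)
        p[j] = bisect.bisect_right(finishes, s, 0, j - 1)
    return p
```

Each lookup is a binary search, so all n values of p(·) cost O(n log n) after the initial sort.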
• 79. Developing a recursive solution (continued)
If n ∈ ϑ, then ϑ must include, in addition to interval n, an optimal solution to the subproblem consisting of intervals {1, 2, . . . , p(n)}. So if ϑ(n) is the value of an optimal solution to the subproblem for intervals {1, 2, . . . , n}, then:
ϑ(n) = wn + ϑ(p(n))
If n ∉ ϑ, then ϑ simply contains an optimal solution to the subproblem consisting of the intervals {1, 2, . . . , n − 1}:
ϑ(n) = ϑ(n − 1)
Since an optimal solution must maximize the sum of the weights of the intervals it contains, we take the larger of the two:
ϑ(n) = max(wn + ϑ(p(n)), ϑ(n − 1))
• 82. Developing a recursive solution (continued)
Recursive formulation of the optimal value: if OPT(j) is the value of an optimal solution to the subproblem for intervals {1, 2, . . . , j}, for any j ∈ {1, 2, . . . , n}, then:
OPT(j) = max(wj + OPT(p(j)), OPT(j − 1))
Extracting the intervals in an optimal solution: interval j belongs to an optimal solution on the set {1, 2, . . . , j} if and only if the first of the two options is at least as large as the second, that is,
wj + OPT(p(j)) ≥ OPT(j − 1)
• 86. A recursive algorithm
WIS(j)
1 if j = 0
2   then return 0
3   else return max(wj + WIS(p(j)), WIS(j − 1))
The initial call is WIS(n) for intervals {1, 2, . . . , n} sorted in non-decreasing order of finish times. The recursion tree grows very rapidly, leading to exponential running time; the tree for p(j) = j − 2 (for all j) shows how quickly it grows. There are many overlapping subproblems, so the obvious choice is to memoize the recursion.
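The blow-up is easy to observe by counting calls. This is a sketch, not code from the slides: the instance (unit weights, p(j) = j − 2) is made up, and the counter is added only to tally the recursion.

```python
def wis_naive(j, w, p, counter):
    """Plain recursive weighted interval scheduling.
    counter[0] tallies calls to show the exponential blow-up."""
    counter[0] += 1
    if j == 0:
        return 0
    return max(w[j] + wis_naive(p[j], w, p, counter),
               wis_naive(j - 1, w, p, counter))
```

With p(j) = j − 2, the call count satisfies T(j) = 1 + T(j − 1) + T(j − 2), a Fibonacci-like recurrence: for n = 10 the function is called 287 times, even though there are only 11 distinct subproblems.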
• 90. Memoizing the recursion
M-WIS(j)
1 if j = 0
2   then return 0
3 elseif M[j] is empty
4   then M[j] ← max(wj + M-WIS(p(j)), M-WIS(j − 1))
5 return M[j]
Each entry M[j] gets filled in only once, in Θ(1) time, and there are n + 1 entries, so M-WIS(n) takes Θ(n) time. Of course, sorting the intervals by finish time takes Θ(n lg n) time, so the memoized algorithm plus sorting takes Θ(n lg n) + Θ(n) = Θ(n lg n) time.
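The memoized recursion can be sketched in Python, with `functools.lru_cache` playing the role of the table M. The instance used below is made up for illustration, not taken from the slides: w[1..n] are weights of intervals sorted by finish time, and p[j] is the last interval that does not overlap interval j (0 if none).

```python
from functools import lru_cache

def wis_memo(w, p):
    """Memoized weighted interval scheduling; index 0 of w and p is unused."""
    n = len(w) - 1

    @lru_cache(maxsize=None)   # caches opt(j), so each subproblem is solved once
    def opt(j):
        if j == 0:
            return 0
        return max(w[j] + opt(p[j]), opt(j - 1))

    return opt(n)
```

For example, with weights w = [_, 2, 4, 4, 7] and predecessors p = [_, 0, 0, 1, 0], the optimal value is 7 (take interval 4 alone).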
• 91. Computing a solution in addition to its value
The memoized algorithm only computes the optimal value; it does not extract the intervals that make up the solution. The key to extracting the solution is to note that interval j is in ϑ if and only if wj + M[p(j)] ≥ M[j − 1]. This gives two ways of extracting the intervals in an optimal solution:
1 Trace back from M[n], checking which choice (j − 1 or p(j)) was made when M[j] was computed.
2 Whenever a choice is made between the two options, save the choice that was made (j − 1 or p(j)) in a predecessor pointer pred[j].
• 93. Computing a solution in addition to its value (continued)
The first way recursively extracts an optimal set of intervals for a problem of size 1 ≤ j ≤ n. Calling WIS-find-solution(n) extracts all the intervals in the optimal solution.
WIS-find-solution(j)
1 if j = 0
2   then Output nothing
3 else
4   if wj + M[p(j)] ≥ M[j − 1]
5     then Output j
6          WIS-find-solution(p(j))
7     else WIS-find-solution(j − 1)
• 95. Computing a solution in addition to its value (continued)
The second way requires that M-WIS use an auxiliary array pred[0 . . n] to save the predecessor of each interval in the solution. Initialize pred[j] = 0 for all 0 ≤ j ≤ n.
M-WIS(j)
1 if j = 0
2   then return 0
3 elseif M[j] is empty
4   then if wj + M-WIS(p(j)) ≥ M-WIS(j − 1)
5          then M[j] ← wj + M-WIS(p(j))
6               pred[j] ← p(j)
7          else M[j] ← M-WIS(j − 1)
8               pred[j] ← j − 1
9 return M[j]
• 98. Computing a solution in addition to its value (continued)
Now that we have pred[j] filled in, we start from M[n] and work backwards:
1 If pred[j] = p(j), then we did add the jth interval to the final solution, and we continue from p(j).
2 If pred[j] ≠ p(j), then we did not add the jth interval to the final solution, and we continue from j − 1.
WIS-find-solution(j)
1 if j = 0
2   then Output nothing
3 else
4   if pred[j] = p(j)
5     then Output j
6          WIS-find-solution(p(j))
7     else WIS-find-solution(j − 1)
Can you come up with an iterative version?
• 102. Developing a Dynamic Programming algorithm
The value of an optimal solution OPT(j) for any j ∈ {1, 2, 3, . . . , n} depends only on the values OPT(p(j)) and OPT(j − 1). We can therefore build the table M[j] bottom-up, starting from the base case j = 0 and going up to n, using the memoized recursive formulation: M[j] = max(wj + M[p(j)], M[j − 1]).
Dynamic programming algorithm
WIS(n)
1 M[0] ← 0
2 for j ← 1 to n
3   do M[j] ← max(wj + M[p(j)], M[j − 1])
4 return M[n]
T(n) = Θ(n)
• 104. Computing a solution in addition to its value
WIS(n)
1 M[0] ← 0
2 for j ← 1 to n
3   do if wj + M[p(j)] ≥ M[j − 1]
4        then M[j] ← wj + M[p(j)]
5             pred[j] ← p(j)
6        else M[j] ← M[j − 1]
7             pred[j] ← j − 1
8 return M[n]
WIS-find-solution(n)
1 j ← n
2 while j > 0
3   do if pred[j] = p(j)
4        then Output j
5      j ← pred[j]
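The bottom-up table fill plus predecessor trace-back can be sketched as one function. This is an illustrative sketch: the test instance (weights and p-values) is made up, not the one pictured in the slides.

```python
def wis_dp(w, p):
    """Bottom-up weighted interval scheduling with solution extraction.
    w[1..n]: weights of intervals sorted by finish time; p[j]: last interval
    that finishes before interval j starts (0 if none); index 0 unused."""
    n = len(w) - 1
    M = [0] * (n + 1)
    pred = [0] * (n + 1)
    for j in range(1, n + 1):
        if w[j] + M[p[j]] >= M[j - 1]:   # taking j is at least as good
            M[j] = w[j] + M[p[j]]
            pred[j] = p[j]
        else:
            M[j] = M[j - 1]
            pred[j] = j - 1
    # Walk the predecessor pointers back from n.
    solution, j = [], n
    while j > 0:
        if pred[j] == p[j]:
            solution.append(j)
        j = pred[j]
    return M[n], sorted(solution)
```

On the same made-up instance as before (w = [_, 2, 4, 4, 7], p = [_, 0, 0, 1, 0]) this returns the value 7 and the solution {4}.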
• 112. Weighted Interval Scheduling DP algorithm in action
Optimal value: 8
Optimal solution: {1, 3, 5}
• 116. So, you think you understand Dynamic Programming now?
Answer the following questions:
1 Instead of sorting the intervals by finish time, what if we sorted the requests by start time?
2 What if we didn’t sort the requests at all? Would it still work?
3 If all the weights are the same, what does this problem become? Can you solve it using DP?
• 117. Contents
1 Introduction
  Memoization
  Dynamic programming
  Weighted interval scheduling problem
  0/1 Knapsack problem
  Coin changing problem
  What problems can be solved by DP?
  Conclusion
• 121. 0/1 knapsack problem
Definition (0/1 knapsack problem)
Given a set S of n items, such that each item i has a positive benefit vi and a positive weight wi, the goal is to find the maximum-benefit subset that does not exceed a given weight W. Formally, we wish to determine a subset T ⊆ S that maximizes ∑_{i∈T} vi, subject to ∑_{i∈T} wi ≤ W.
Example: maximum weight W = 4 kg; optimal solution: items B and C; benefit: 370.
• 128. Developing a recursive solution
Let S be an instance of a 0/1 Knapsack problem, and ϑ be an optimal solution (even if we have no idea what it is yet). Note that the presence of an item i in ϑ does not preclude any other item j ≠ i from being in ϑ. If item n weighs more than the maximum allowed weight, it will not be in ϑ. Otherwise, all we can say about ϑ is the following: item n (the last one) either belongs to ϑ, or it doesn’t.
If n ∈ ϑ: the optimal solution contains item n, plus an optimal solution for the other n − 1 items, but with a reduced maximum weight of W − wn.
If n ∉ ϑ: ϑ simply contains an optimal solution for the first n − 1 items, with the maximum allowed weight W unchanged.
We therefore have two parameters for each subproblem: the items considered, and the maximum allowed weight.
• 133. Developing a recursive solution (continued)
wn > W ⇒ n ∉ ϑ:
ϑ(n, W) = ϑ(n − 1, W)
Otherwise, n is either in ϑ or not. If n ∈ ϑ, then ϑ(n, W), the value of an optimal solution to the subproblem for items {1, 2, . . . , n}, satisfies:
ϑ(n, W) = vn + ϑ(n − 1, W − wn)
If n ∉ ϑ, then ϑ(n, W) is simply the value of an optimal solution to the subproblem consisting of the items {1, 2, . . . , n − 1}:
ϑ(n, W) = ϑ(n − 1, W)
Since an optimal solution must maximize the sum of the values of the items it contains, we take the larger of the two:
ϑ(n, W) = max(vn + ϑ(n − 1, W − wn), ϑ(n − 1, W))
• 135. Developing a recursive solution (continued)
Recursive formulation of the optimal value: if OPT(j, w) is the value of an optimal solution to the subproblem for items {1, 2, . . . , j}, for any j ∈ {1, 2, . . . , n}, with a maximum allowed weight of w, then:
OPT(j, w) = OPT(j − 1, w) if wj > w,
OPT(j, w) = max(vj + OPT(j − 1, w − wj), OPT(j − 1, w)) otherwise.
Extracting the items in an optimal solution: item j is in an optimal solution OPT(j, w) if and only if the first of the two options is at least as large as the second:
vj + OPT(j − 1, w − wj) ≥ OPT(j − 1, w)
• 139. A recursive algorithm
Knapsack(j, w)
1 if j = 0 or w = 0
2   then return 0
3 elseif wj > w
4   then return Knapsack(j − 1, w)
5   else return max(vj + Knapsack(j − 1, w − wj), Knapsack(j − 1, w))
The initial call is Knapsack(n, W). The recursion tree grows very rapidly, leading to exponential running time. There are many overlapping subproblems, so the obvious choice is to memoize the recursion.
• 143. Memoizing the recursion
M-Knapsack(j, w)
1 if j = 0 or w = 0
2   then return 0
3 elseif M[j, w] is empty
4   then if wj > w
5          then M[j, w] ← M-Knapsack(j − 1, w)
6          else M[j, w] ← max(vj + M-Knapsack(j − 1, w − wj), M-Knapsack(j − 1, w))
7 return M[j, w]
Each entry M[j, w] gets filled in only once, in Θ(1) time, and there are (n + 1) × (W + 1) entries, so M-Knapsack(n, W) takes Θ(nW) time. Is this a linear-time algorithm? No: this is an example of a pseudo-polynomial algorithm, since the running time depends on the numeric value of the parameter W, which is independent of the number of items and can be exponential in the number of bits needed to write W down.
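The memoized recursion, including the "item does not fit" branch, can be sketched in Python. The test values (W = 9, weights {2, 3, 4, 5}, benefits {3, 4, 5, 7}) are the Alsuwaiyel instance used later in these slides.

```python
from functools import lru_cache

def knapsack_memo(w, v, W):
    """Memoized 0/1 knapsack; w[1..n] and v[1..n], index 0 unused."""
    n = len(w) - 1

    @lru_cache(maxsize=None)   # plays the role of the table M[j, w]
    def opt(j, cap):
        if j == 0 or cap == 0:
            return 0
        if w[j] > cap:                     # item j does not fit
            return opt(j - 1, cap)
        return max(v[j] + opt(j - 1, cap - w[j]), opt(j - 1, cap))

    return opt(n, W)
```

On that instance the optimal benefit is 12, achieved for example by the items of weight 4 and 5.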
• 144. Developing a Dynamic Programming algorithm
Knapsack(n, W)
1 for i ← 0 to n
2   do M[i, 0] ← 0 // no remaining capacity
3 for w ← 0 to W
4   do M[0, w] ← 0 // no item to choose from
5 for j ← 1 to n
6   do for w ← 1 to W
7        do if wj > w // we cannot take item j
8             then M[j, w] ← M[j − 1, w]
9             else M[j, w] ← max(vj + M[j − 1, w − wj], M[j − 1, w])
10 return M[n, W]
• 146. 0/1 Knapsack recursive algorithm in action
Given the following (from M. H. Alsuwaiyel, ex. 7.6):
W = 9
wi = {2, 3, 4, 5}
vi = {3, 4, 5, 7}
• 148. 0/1 Knapsack DP algorithm in action
Given the following (from M. H. Alsuwaiyel, ex. 7.6):
W = 9
wi = {2, 3, 4, 5}
vi = {3, 4, 5, 7}
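The table the bottom-up algorithm builds for this instance can be reproduced with a short sketch (0-based Python lists rather than the slides' 1-based pseudocode arrays):

```python
def knapsack_dp(weights, values, W):
    """Bottom-up 0/1 knapsack; returns the optimal benefit M[n][W]."""
    n = len(weights)
    M = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 / column 0 are the base cases
    for j in range(1, n + 1):
        wj, vj = weights[j - 1], values[j - 1]
        for cap in range(1, W + 1):
            if wj > cap:                        # item j does not fit
                M[j][cap] = M[j - 1][cap]
            else:
                M[j][cap] = max(vj + M[j - 1][cap - wj], M[j - 1][cap])
    return M[n][W]
```

For the Alsuwaiyel instance, knapsack_dp([2, 3, 4, 5], [3, 4, 5, 7], 9) gives the optimal benefit 12.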
  • 152. Introduction Memoization Dynamic programming Weighted interval sched Related problem: Subset Sums problem Definition (Subset Sums problem) Given a set S of n items, such that each item i has a positive weight wi , the goal is to find the maximum-weight subset that does not exceed a given weight W . Formally, we wish to determine a subset of S that maximizes P i∈S wi , subject to P i∈S wi ≤ W . How is this similar to the 0/1 Knapsack problem? Can you solve this using the same algorithm? Licensed under CSE 221: Algorithms 41 / 53
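To make the connection concrete: Subset Sums is exactly 0/1 Knapsack with each item's value set equal to its weight. A minimal sketch (the function name subset_sum and the reachable-weights formulation are mine):

```python
def subset_sum(W, w):
    """Maximum total weight of a subset of w not exceeding W.
    Equivalent to 0/1 knapsack with value_i = weight_i."""
    reachable = {0}                  # subset weights achievable so far
    for wi in w:
        # each item is either skipped or added to every smaller subset
        reachable |= {s + wi for s in reachable if s + wi <= W}
    return max(reachable)

print(subset_sum(9, [2, 3, 4, 5]))  # -> 9 (e.g. {4, 5} or {2, 3, 4})
```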
Coin changing problem
Definition
Given coin denominations C = {ci}, make change for a given amount A using the minimum number of coins.
Example
Coin denominations, C = {12, 5, 1}
Amount to change, A = 15
1. Choose zero 12-paisa coins, so the remaining amount is 15.
2. Choose three 5-paisa coins, so the remaining amount is 15 − 3 · 5 = 0.
Solution: 3 coins.
Questions
What is the natural search space?
Does this problem have a Dynamic Programming solution? If so, how do we develop it?
Developing a recursive solution
Coin denominations, C = {12, 5, 1}
Amount to change, A = 15
The best combination of coins for 15 paisa must be one of the following:
1. Best combination for 15 − 12 = 3 paisa, plus a 12-paisa coin.
2. Best combination for 15 − 5 = 10 paisa, plus a 5-paisa coin.
3. Best combination for 15 − 1 = 14 paisa, plus a 1-paisa coin.
Since we are minimizing the number of coins, the best combination is the minimum of these three choices.
By recursively solving for the best combination, this generalizes to |C| denominations and any amount A.
What are the subproblems?
Developing a recursive solution (continued)
If OPT(p) is the minimum number of coins needed to make change for amount p with denominations C = {c1, c2, . . . , ck}, then:
The coin ci chosen at any step can be no larger than p, the amount left at that point.
Once we choose ci ≤ p, OPT(p) = 1 + OPT(p − ci), since we then have to find the best combination for the remaining amount (again picking only coins no larger than the remaining amount at each step).
Since we don't know which coin would be chosen, we search all |C| denominations and take the minimum.
The number of coins needed for amount 0 is 0.
Recurrence
OPT(p) = 0, if p = 0
OPT(p) = min i : ci ≤ p {1 + OPT(p − ci)}, if p > 0
A recursive algorithm

Change(n, C)
  if n = 0
    then return 0
  min ← ∞
  for i ← 1 to |C|
    do if ci ≤ n and 1 + Change(n − ci, C) < min
         then min ← 1 + Change(n − ci, C)
  return min

The initial call is Change(A, C).
The recursion tree grows very rapidly, leading to exponential running time.
There are many overlapping subproblems, so the obvious choice is to memoize the recursion.
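A direct Python transcription of the naive recursion above, still exponential-time; the recursive call's value is saved in best so it is evaluated only once per coin. The names change and best are mine, not the slides'.

```python
import math

def change(n, C):
    """Minimum number of coins from C needed for amount n (naive recursion)."""
    if n == 0:
        return 0
    best = math.inf
    for c in C:
        if c <= n:
            best = min(best, 1 + change(n - c, C))  # recurse on the remaining amount
    return best

print(change(15, [12, 5, 1]))  # -> 3 (three 5-paisa coins)
```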
Memoizing the recursion

M-Change(n, C)
  if n = 0
    then return 0
  else if M[n] is empty
    then min ← ∞
         for i ← 1 to |C|
           do if ci ≤ n and 1 + M-Change(n − ci, C) < min
                then min ← 1 + M-Change(n − ci, C)
         M[n] ← min
  return M[n]

Each entry of M is filled in only once, at Θ(|C|) cost per entry, and there are n + 1 entries, so M-Change(n, C) takes Θ(n|C|) time.
Another pseudo-polynomial-time algorithm!
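One possible Python rendering of the memoized recursion, with functools.lru_cache playing the role of the table M; the wrapper name m_change and inner name opt are mine.

```python
import math
from functools import lru_cache

def m_change(n, C):
    """Memoized coin change: each amount p is solved at most once."""
    C = tuple(C)

    @lru_cache(maxsize=None)         # stands in for the table M
    def opt(p):
        if p == 0:
            return 0
        # infinity if no coin fits, matching min <- infinity in the pseudocode
        return min((1 + opt(p - c) for c in C if c <= p), default=math.inf)

    return opt(n)

print(m_change(15, [12, 5, 1]))  # -> 3
```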
Developing a Dynamic Programming algorithm

Change(n, C)
  M = [0 . . n], S = [0 . . n]
  M[0] ← 0    (no amount to change)
  for p ← 1 to n
    do min ← ∞
       for i ← 1 to |C|
         do if ci ≤ p and 1 + M[p − ci] < min
              then min ← 1 + M[p − ci]
                   coin ← i
       M[p] ← min
       S[p] ← coin
  return M and S

M[p], for all 0 ≤ p ≤ n — the minimum number of coins needed to make change for p paisa.
S[p], for all 0 ≤ p ≤ n — the first coin chosen in computing an optimal solution for making change for p paisa.
Computing a solution in addition to its value
The S array in the algorithm "remembers" the first coin used when computing an optimal value for a given amount. We go backwards using S[n] until n = 0, recovering the coin that was added at each step.

Coins(S, C, n)
  while n > 0
    do output the coin cS[n]
       n ← n − cS[n]
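The bottom-up table fill and the backwards walk over S can be sketched together in Python. The array names M and S follow the slides; the function names and everything else are mine.

```python
import math

def change(A, C):
    """Bottom-up coin change. M[p] = fewest coins for amount p;
    S[p] = index into C of the first coin chosen for amount p."""
    M = [0] + [math.inf] * A
    S = [None] * (A + 1)
    for p in range(1, A + 1):
        for i, c in enumerate(C):
            if c <= p and 1 + M[p - c] < M[p]:
                M[p] = 1 + M[p - c]
                S[p] = i
    return M, S

def coins(S, C, n):
    """Walk S backwards from n to 0, recovering one optimal multiset of coins."""
    used = []
    while n > 0:
        used.append(C[S[n]])
        n -= C[S[n]]
    return used

M, S = change(15, [12, 5, 1])
print(M[15], coins(S, [12, 5, 1], 15))  # -> 3 [5, 5, 5]
```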
Problem types solved by Dynamic Programming
The most important part of DP is setting up the subproblem structure.
DP is not applicable to every optimization problem.
If a problem has the following properties, then it is likely to have a dynamic programming solution.
Polynomially many subproblems
The total number of subproblems should be polynomial, or else DP may not provide an efficient solution.
Subproblem optimality
If the optimal solution to the entire problem contains optimal solutions to its subproblems, then the problem has the subproblem optimality property. Also called the principle of optimality.
Dynamic Programming highlights
Dynamic Programming, just like Memoization, avoids recomputing solutions to overlapping subproblems by saving intermediate results, so both require space for the "table".
Dynamic Programming is a bottom-up technique: it finds the solution by starting from the base case(s) and working its way upwards.
Developing a Dynamic Programming solution often requires some thought about the subproblems, especially to find the natural order in which to solve them.
Unlike Memoization, which solves only the needed subproblems, DP solves all the subproblems, because it works bottom-up.
On the other hand, Dynamic Programming may be much more efficient because it is iterative, whereas Memoization must pay the (often significant) overhead of recursion.
Conclusion
Memoization is the top-down technique; dynamic programming is the bottom-up technique.
The key to Dynamic Programming is the "intelligent" recursion (the hard part), not filling up the table (the easy part).
Dynamic Programming has the potential to transform exponential-time brute-force solutions into polynomial-time algorithms.
Greed does not pay, Dynamic Programming does!