Dynamic Programming

• Well-known algorithm design techniques:
  – Divide-and-conquer algorithms
• Another strategy for designing algorithms is dynamic programming.
  – Used when the problem breaks down into recurring small subproblems.
• Dynamic programming is typically applied to optimization problems. In such problems there can be many solutions. Each solution has a value, and we wish to find a solution with the optimal value.
Divide-and-conquer
• Divide-and-conquer method for algorithm design:
• Divide: If the input size is too large to deal with in a straightforward manner, divide the problem into two or more disjoint subproblems.
• Conquer: Recursively solve the subproblems.
• Combine: Take the solutions to the subproblems and “merge” these solutions into a solution for the original problem.
Divide-and-conquer - Example
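The example on this slide is a figure; as an illustrative stand-in (a minimal sketch, not from the original deck), merge sort in Python shows the three steps: divide the list, conquer each half recursively, and combine by merging the sorted halves.

```python
def merge_sort(a):
    """Sort a list using divide-and-conquer."""
    if len(a) <= 1:                    # small input: solve directly
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])         # Divide + Conquer: sort each half recursively
    right = merge_sort(a[mid:])
    return merge(left, right)          # Combine: merge the two sorted halves

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3]))  # [1, 2, 3, 4, 5, 7]
```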
Dynamic Programming
Dynamic Programming is a general algorithm design technique for solving problems defined by recurrences with overlapping subproblems
• Invented by American mathematician Richard Bellman in the 1950s to solve optimization problems and later assimilated by CS
• “Programming” here means “planning”
• Main idea:
  - set up a recurrence relating a solution to a larger instance to solutions of some smaller instances
  - solve smaller instances once
  - record solutions in a table
  - extract solution to the initial instance from that table
Dynamic programming
• Dynamic programming is a way of improving on inefficient divide-and-conquer algorithms.
• By “inefficient”, we mean that the same recursive call is made over and over.
• If the same subproblem is solved several times, we can use a table to store the result of a subproblem the first time it is computed and thus never have to recompute it again (see the sketch below).
• Dynamic programming is applicable when the subproblems are dependent, that is, when subproblems share sub-subproblems.
• “Programming” refers to a tabular method
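A minimal sketch of that table idea in Python (illustrative only; the count_ways recurrence below is a hypothetical example, not from the slides): results are recorded in a dict the first time a subproblem is computed and reused afterwards.

```python
def memoize(f):
    """Wrap a recursive function so each subproblem is computed only once."""
    table = {}                              # subproblem -> stored result
    def wrapper(x):
        if x not in table:                  # first time this subproblem is seen
            table[x] = f(x)                 # compute it and record the result
        return table[x]                     # reuse the stored result thereafter
    return wrapper

@memoize
def count_ways(n):
    # Hypothetical recurring subproblem: ways to climb n stairs taking 1 or 2 steps.
    if n <= 1:
        return 1
    return count_ways(n - 1) + count_ways(n - 2)

print(count_ways(40))                       # fast: each subproblem is solved once
```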
Difference between DP and Divide-and-Conquer
• Using Divide-and-Conquer to solve these problems is inefficient because the same common subproblems have to be solved many times.
• DP will solve each of them only once, and their answers are stored in a table for future use.
Dynamic Programming vs. Recursion and Divide & Conquer
• In a recursive program, a problem of size n is solved by first solving a sub-problem of size n-1.
• In a divide & conquer program, you solve a problem of size n by first solving a sub-problem of size k and another of size n-k, where 1 ≤ k < n.
• In dynamic programming, you solve a problem of size n by first solving all sub-problems of all sizes k, where k < n.
Elements of Dynamic Programming (DP)
DP is used to solve problems with the following characteristics:
• Simple subproblems
  – We should be able to break the original problem into smaller subproblems that have the same structure.
• Optimal substructure of the problems
  – The optimal solution to the problem contains within it optimal solutions to its subproblems.
• Overlapping sub-problems
  – There exist some places where we solve the same subproblem more than once.
Steps to Designing a Dynamic Programming Algorithm
1. Characterize optimal substructure
2. Recursively define the value of an optimal solution
3. Compute the value bottom up
4. (if needed) Construct an optimal solution
Principle of Optimality
• Dynamic programming works on the principle of optimality.
• The principle of optimality states that in an optimal sequence of decisions or choices, each subsequence must also be optimal.
Example Applications of Dynamic Programming
• 0/1 Knapsack
• Optimal merge patterns
• Shortest path problems
• Matrix chain multiplication
• Longest common subsequence
• Mathematical optimization
Example 1: Fibonacci numbers
• Recall the definition of Fibonacci numbers:
  F(n) = F(n-1) + F(n-2)
  F(0) = 0
  F(1) = 1
• Computing the nth Fibonacci number recursively (top-down) expands into a tree of calls:
  F(n) = F(n-1) + F(n-2)
       = [F(n-2) + F(n-3)] + [F(n-3) + F(n-4)]
       = ...
Fibonacci Numbers
• F(n) = F(n-1) + F(n-2) for n ≥ 2
• F(0) = 0, F(1) = 1
• 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, …
• The straightforward recursive procedure is slow!
• Let’s draw the recursion tree (see the naive recursive sketch below).
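A direct transcription of the recursive definition (a minimal illustrative sketch, not from the deck) makes the slowness concrete: the same F(i) values are recomputed over and over.

```python
calls = 0

def fib_naive(n):
    """Naive top-down Fibonacci; repeatedly recomputes the same subproblems."""
    global calls
    calls += 1
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

print(fib_naive(30), calls)   # 832040, reached after roughly 2.7 million calls
```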
Fibonacci Numbers
• How many summations are there? Use the Golden Ratio.
• As you go farther and farther to the right in this sequence, the ratio of a term to the one before it gets closer and closer to the Golden Ratio (≈ 1.618).
• Our recursion tree has only 0s and 1s as leaves, thus we have roughly 1.6^n summations.
• The running time is exponential!
Fibonacci Numbers
• We can calculate F(n) in linear time by remembering solutions to the already-solved subproblems – dynamic programming.
• Compute the solution in a bottom-up fashion.
• In this case, only two values need to be remembered at any time (see the sketch below).
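A minimal bottom-up sketch of this (illustrative, not from the deck): only the two most recent values are carried along.

```python
def fib_bottom_up(n):
    """Compute F(n) iteratively, keeping only the last two Fibonacci values."""
    prev, curr = 0, 1              # F(0), F(1)
    for _ in range(n):             # n constant-time steps -> linear time
        prev, curr = curr, prev + curr
    return prev                    # after n steps, prev holds F(n)

print(fib_bottom_up(10))           # 55
```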
Matrix Chain Multiplication
• Given: a chain of matrices {A1, A2, …, An}.
• Once all pairs of matrices are parenthesized, they can be multiplied by using the standard algorithm as a subroutine.
• A product of matrices is fully parenthesized if it is either a single matrix or the product of two fully parenthesized matrix products, surrounded by parentheses. [Note: since matrix multiplication is associative, all parenthesizations yield the same product.]
Matrix Chain Multiplication cont.
• For example, if the chain of matrices is {A, B, C, D}, the product ABCD can be fully parenthesized in 5 distinct ways:
  (A ( B ( C D ))),
  (A (( B C ) D )),
  ((A B ) ( C D )),
  ((A ( B C )) D),
  ((( A B ) C ) D ).
• The way the chain is parenthesized can have a dramatic impact on the cost of evaluating the product.
Matrix Chain Multiplication: Optimal Parenthesization
• Example: A[30][35], B[35][15], C[15][5]; find the minimum cost of computing A*B*C.
  A*(B*C) = 30*35*5 + 35*15*5 = 7,875
  (A*B)*C = 30*35*15 + 30*15*5 = 18,000
  (verified in the quick check below)
• How to optimize:
  – Brute force – look at every possible way to parenthesize: Ω(4^n / n^(3/2))
  – Dynamic programming – time complexity of O(n^3) and space complexity of Θ(n^2).
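A throwaway check of these two costs (illustrative Python; matrix names as on the slide):

```python
# Scalar-multiplication counts for A(30x35), B(35x15), C(15x5).
cost_A_BC = 30 * 35 * 5 + 35 * 15 * 5    # A*(B*C): 5250 + 2625 = 7875
cost_AB_C = 30 * 35 * 15 + 30 * 15 * 5   # (A*B)*C: 15750 + 2250 = 18000
print(cost_A_BC, cost_AB_C)              # 7875 18000
```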
Matrix Chain Multiplication: Structure of an Optimal Parenthesization
• For n matrices, let Ai..j be the result of Ai Ai+1 … Aj.
• An optimal parenthesization of Ai Ai+1 … Aj splits the product between Ak and Ak+1, where i ≤ k < j.
• Example, k = 4:
  (A1 A2 A3 A4)(A5 A6)
  Total cost of A1..6 = cost of A1..4 plus cost of A5..6 plus the cost of multiplying the two resulting matrices together (the resulting recurrence is given below).
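Written out (a standard formulation, using the usual convention, not stated on the slide, that matrix Ai has dimensions p(i-1) × p(i)), this structure gives the recurrence for the minimum-cost table m:

```latex
m[i,j] =
\begin{cases}
0 & \text{if } i = j,\\[2pt]
\min\limits_{i \le k < j}\bigl(\, m[i,k] + m[k+1,j] + p_{i-1}\, p_k\, p_j \,\bigr) & \text{if } i < j.
\end{cases}
```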
Matrix Chain Multiplication: Overlapping Sub-Problems
• Exploiting the overlapping sub-problems helps in reducing the running time considerably:
  – Create a table M of minimum costs.
  – Create a table S that records the index k of the optimal split for each subproblem.
  – Fill table M in a manner that corresponds to solving the parenthesization problem on matrix chains of increasing length:
  – Compute the cost for chains of length 1 (this is 0).
  – Compute costs for chains of length 2: A1..2, A2..3, A3..4, …, An-1..n
  – …
  – Compute the cost for the chain of length n: A1..n
• Each level relies on smaller sub-chains (see the sketch below).
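A bottom-up sketch of this table-filling scheme (an illustrative Python version under the usual dimension-list convention; function and variable names are my own):

```python
import sys

def matrix_chain_order(p):
    """Fill cost table M and split table S bottom-up, by increasing chain length.

    p is the dimension list: matrix Ai is p[i-1] x p[i], for i = 1..n.
    M[i][j] = minimum number of scalar multiplications to compute Ai..j.
    S[i][j] = the index k of the optimal split of Ai..j.
    """
    n = len(p) - 1
    M = [[0] * (n + 1) for _ in range(n + 1)]   # chains of length 1 cost 0
    S = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):              # chains of increasing length
        for i in range(1, n - length + 2):
            j = i + length - 1
            M[i][j] = sys.maxsize
            for k in range(i, j):               # try every split (Ai..k)(Ak+1..j)
                cost = M[i][k] + M[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < M[i][j]:
                    M[i][j] = cost
                    S[i][j] = k
    return M, S

# The three-matrix example from the earlier slide: A(30x35), B(35x15), C(15x5).
M, S = matrix_chain_order([30, 35, 15, 5])
print(M[1][3])   # 7875 -> the A*(B*C) parenthesization
print(S[1][3])   # 1   -> optimal split after A1, i.e. (A)(BC)
```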
