Optimization Problems
• Problems in which a set of choices must be
made in order to arrive at an optimal (min/max)
solution, subject to some constraints. (There
may be several solutions that achieve the
optimal value.)
• Two common techniques:
– Dynamic Programming (global)
– Greedy Algorithms (local)
Dynamic Programming
• Like divide-and-conquer, DP breaks a
problem down into smaller sub-problems that
are solved recursively.
• In contrast to divide-and-conquer, DP applies
when the sub-problems are not independent,
i.e. when sub-problems share sub-sub-problems.
It solves every sub-sub-problem just once and
saves the result in a table to avoid duplicated
computation.
Elements of DP Algorithms
• Sub-structure: decompose problem into smaller sub-
problems. Express the solution of the original problem
in terms of solutions for smaller problems.
• Table-structure: Store the answers to the sub-problem
in a table, because sub-problem solutions may be used
many times.
• Bottom-up computation: combine solutions on smaller
sub-problems to solve larger sub-problems, and
eventually arrive at a solution to the complete problem.
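The three elements above can be seen in even a toy problem. The sketch below (a Python illustration, not taken from the slides) computes Fibonacci numbers bottom-up, storing each sub-problem answer in a table so it is computed only once:

```python
def fib(n):
    """Bottom-up DP: table[i] stores the answer to sub-problem i."""
    if n < 2:
        return n
    table = [0] * (n + 1)      # table-structure: one slot per sub-problem
    table[1] = 1
    for i in range(2, n + 1):  # bottom-up: smaller sub-problems first
        # sub-structure: answer for i expressed via smaller sub-problems
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(10))  # 55
```

Without the table, the naive recursion recomputes shared sub-sub-problems and takes exponential time; with it, the work is linear in n.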
Applicability to Optimization
Problems
• Optimal sub-structure (principle of optimality): for the global
problem to be solved optimally, each sub-problem must be
solved optimally. This is often violated due to sub-problem
overlaps: by being “less optimal” on one sub-problem, we
may achieve big savings on another sub-problem.
• Overlapping of sub-problems: Many NP-hard problems can
be formulated as DP problems, but these formulations are not
efficient, because the number of sub-problems is
exponentially large. Ideally, the number of sub-problems
should be at most a polynomial number.
Optimized Chain Operations
• Determine the optimal sequence for performing a series of
operations. (This general class of problem is important
in compiler design for code optimization and in databases
for query optimization.)
• For example: given a series of matrices: A1…An , we can
“parenthesize” this expression however we like, since matrix
multiplication is associative (but not commutative).
• Multiplying a p x q matrix A by a q x r matrix B yields a
p x r matrix C. (The # of columns of A must equal the # of
rows of B.)
Matrix Multiplication
• In particular, for 1 ≤ i ≤ p and 1 ≤ j ≤ r,
C[i, j] = Σk = 1 to q A[i, k] · B[k, j]
• Observe that there are pr total entries in C
and each takes O(q) time to compute; thus
the total time to multiply the 2 matrices is O(pqr).
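A direct implementation of this formula makes exactly p·q·r scalar multiplications (a sketch in Python; the slides give only the formula, so the function and its multiplication counter are illustrative):

```python
def mat_mult(A, B):
    """Multiply a p x q matrix A by a q x r matrix B.
    Returns (C, count) where count is the number of scalar multiplications."""
    p, q, r = len(A), len(B), len(B[0])
    assert len(A[0]) == q            # columns of A must equal rows of B
    C = [[0] * r for _ in range(p)]
    count = 0
    for i in range(p):
        for j in range(r):
            for k in range(q):       # each of the p*r entries costs q mults
                C[i][j] += A[i][k] * B[k][j]
                count += 1
    return C, count

# 2x3 times 3x2: exactly 2*3*2 = 12 multiplications
C, count = mat_mult([[1, 2, 3], [4, 5, 6]], [[1, 0], [0, 1], [1, 1]])
print(count)  # 12
```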
Chain Matrix Multiplication
• Given a sequence of matrices A1 A2…An , and
dimensions p0 p1…pn where Ai is of dimension pi-1
x pi , determine multiplication sequence that
minimizes the number of operations.
• This algorithm does not perform the
multiplication, it just figures out the best order in
which to perform the multiplication.
Example: CMM
• Consider 3 matrices: A1 be 5 x 4, A2 be 4 x 6,
and A3 be 6 x 2.
Mult[((A1 A2)A3)] = (5x4x6) + (5x6x2) = 180
Mult[(A1 (A2A3 ))] = (4x6x2) + (5x4x2) = 88
Even for this small example, considerable savings
can be achieved by reordering the evaluation
sequence.
DP Solution (I)
• Let Ai…j be the product of matrices i through j. Ai…j is a pi-1 x pj matrix. At the
highest level, we are multiplying two matrices together. That is, for any k, 1 ≤
k ≤ n-1,
A1…n = (A1…k)(Ak+1…n)
• The problem of determining the optimal sequence of multiplication is broken
up into 2 parts:
Q : How do we decide where to split the chain (what k)?
A : Consider all possible values of k.
Q : How do we parenthesize the subchains A1…k & Ak+1…n?
A : Solve by recursively applying the same scheme.
NOTE: this problem satisfies the “principle of optimality”.
• Next, we store the solutions to the sub-problems in a table and build the table
in a bottom-up manner.
DP Solution (II)
• For 1 ≤ i ≤ j ≤ n, let m[i, j] denote the minimum number
of multiplications needed to compute Ai…j .
• Example: Minimum number of multiplies for A3…7
• In terms of pi , the product A3…7 has
dimensions p2 x p7.
DP Solution (III)
• The optimal cost can be described as follows:
– i = j ⇒ the sequence contains only 1 matrix, so m[i, j] = 0.
– i < j ⇒ the product can be split by considering each k, i ≤ k < j,
as Ai…k (pi-1 x pk ) times Ak+1…j (pk x pj ).
• This suggests the following recursive rule for computing
m[i, j]:
m[i, i] = 0
m[i, j] = min_{i ≤ k < j} (m[i, k] + m[k+1, j] + pi-1 pk pj ) for i < j
Computing m[i, j]
• For a specific k,
(Ai …Ak)(Ak+1 …Aj)
= Ai…k(Ak+1 …Aj) (m[i, k] mults)
= Ai…k Ak+1…j (m[k+1, j] mults)
= Ai…j (pi-1 pk pj mults)
• For the solution, evaluate for all k and take the minimum:
m[i, j] = min_{i ≤ k < j} (m[i, k] + m[k+1, j] + pi-1 pk pj )
Matrix-Chain-Order(p)
n ← length[p] - 1
for i ← 1 to n                    // initialization: O(n) time
    m[i, i] ← 0
for L ← 2 to n                    // L = length of sub-chain
    for i ← 1 to n - L + 1
        j ← i + L - 1
        m[i, j] ← ∞
        for k ← i to j - 1
            q ← m[i, k] + m[k+1, j] + pi-1 pk pj
            if q < m[i, j]
                m[i, j] ← q
                s[i, j] ← k
return m and s
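The pseudocode above translates directly into Python (a sketch; tables are padded with an unused row/column 0 so the indices match the slides' 1-based notation):

```python
def matrix_chain_order(p):
    """p[0..n] is the dimension sequence; matrix Ai is p[i-1] x p[i].
    Returns (m, s): m[i][j] = min #mults for Ai..Aj, s[i][j] = best split k."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][i] = 0 by initialization
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for L in range(2, n + 1):                   # L = length of sub-chain
        for i in range(1, n - L + 2):
            j = i + L - 1
            m[i][j] = float("inf")
            for k in range(i, j):               # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

# Dimensions <5, 4, 6, 2, 7>: A1 is 5x4, A2 is 4x6, A3 is 6x2, A4 is 2x7
m, s = matrix_chain_order([5, 4, 6, 2, 7])
print(m[1][4])  # 158
```

For the earlier 3-matrix example, `matrix_chain_order([5, 4, 6, 2])` gives m[1][3] = 88, matching the hand computation.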
Extracting Optimum Sequence
• Leave a split marker indicating where the best split is (i.e.
the value of k leading to minimum values of m[i, j]). We
maintain a parallel array s[i, j] in which we store the value
of k providing the optimal split.
• If s[i, j] = k, the best way to multiply the sub-chain Ai…j is
to first multiply the sub-chain Ai…k and then the sub-chain
Ak+1…j , and finally multiply them together. Intuitively, s[i, j]
tells us what multiplication to perform last. We only need
to store s[i, j] when j > i, i.e. when the sub-chain contains
at least 2 matrices.
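Following the split markers recursively renders the optimal parenthesization. A sketch (the `s` dictionary below is hardcoded to the split table that the dimension sequence <5, 4, 6, 2, 7> produces, rather than computed in place):

```python
def print_parens(s, i, j):
    """Recursively render the optimal multiplication order recorded in s."""
    if i == j:
        return f"A{i}"                # a single matrix: no parentheses needed
    k = s[(i, j)]                     # multiply Ai..k and Ak+1..j last
    return "(" + print_parens(s, i, k) + print_parens(s, k + 1, j) + ")"

# Split table for dimensions <5, 4, 6, 2, 7>; s[(i, j)] = optimal k for Ai..Aj
s = {(1, 2): 1, (2, 3): 2, (3, 4): 3, (1, 3): 1, (2, 4): 3, (1, 4): 3}
print(print_parens(s, 1, 4))  # ((A1(A2A3))A4)
```

The output matches the optimal sequence (A1 (A2A3 )) A4 worked out for this dimension sequence later in the slides.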
Example: DP for CMM
• The initial set of dimensions are <5, 4, 6, 2, 7>: we are
multiplying A1 (5x4) times A2 (4x6) times A3 (6x2) times
A4 (2x7). Optimal sequence is (A1 (A2A3 )) A4.
Finding a Recursive Solution
• Figure out the “top-level” choice you
have to make (e.g., where to split the
list of matrices)
• List the options for that decision
• Each option should require smaller sub-
problems to be solved
• The recursive function is the minimum (or
max) over all the options
m[i, j] = min_{i ≤ k < j} (m[i, k] + m[k+1, j] + pi-1 pk pj )
Longest Common Subsequence
(LCS)
• Problem: Given sequences x[1..m] and
y[1..n], find a longest common
subsequence of both.
• Example: x=ABCBDAB and
y=BDCABA,
– BCA is a common subsequence and
– BCBA and BDAB are two LCSs
LCS
• Writing a recurrence equation
• The dynamic programming solution
Brute force solution
• Solution: For every subsequence of x,
check if it is a subsequence of y. This is
correct but takes exponential time, since
x has 2^m subsequences and each check
takes O(n) time.
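A sketch of this brute force (the helper names are illustrative, not from the slides). It tries subsequences of x from longest to shortest, so the first hit is a longest common subsequence:

```python
from itertools import combinations

def is_subseq(s, t):
    """True if s is a subsequence of t (characters in order, gaps allowed)."""
    it = iter(t)
    return all(ch in it for ch in s)   # each 'in' advances the iterator

def lcs_brute(x, y):
    """Try every subsequence of x, longest first; exponential in len(x)."""
    for length in range(min(len(x), len(y)), 0, -1):
        for idxs in combinations(range(len(x)), length):
            cand = "".join(x[i] for i in idxs)
            if is_subseq(cand, y):
                return cand
    return ""

print(len(lcs_brute("ABCBDAB", "BDCABA")))  # 4, matching the example LCSs
```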
Writing the recurrence
equation
• Let Xi denote the ith prefix x[1..i] of x[1..m],
and let Yj denote the prefix y[1..j] of y[1..n]
• X0 denotes the empty prefix
• We will first compute the length of an LCS of
Xm and Yn, LenLCS(m, n), and then use
information saved during the computation for
finding the actual subsequence
• We need a recursive formula for computing
LenLCS(i, j).
Writing the recurrence
equation
• If Xi and Yj end with the same character xi=yj,
the LCS must include that character. If it did
not, we could get a longer common subsequence
by appending the common character.
• If Xi and Yj do not end with the same
character there are two possibilities:
– either the LCS does not end with xi,
– or it does not end with yj
• Let Zk denote an LCS of Xi and Yj
Xi and Yj end with xi=yj
Xi: x1 x2 … xi-1 xi
Yj: y1 y2 … yj-1 yj=xi
Zk: z1 z2 … zk-1 zk=yj=xi
Zk is Zk-1 followed by zk = yj = xi , where
Zk-1 is an LCS of Xi-1 and Yj-1 , and so
LenLCS(i, j) = LenLCS(i-1, j-1) + 1
Xi and Yj end with xi ≠ yj
Xi: x1 x2 … xi-1 xi
Yj: y1 y2 … yj-1 yj
Zk: z1 z2 … zk-1 zk ≠ yj
Zk is an LCS of Xi and Yj-1
Xi: x1 x2 … xi-1 xi
Yj: y1 y2 … yj-1 yj
Zk: z1 z2 … zk-1 zk ≠ xi
Zk is an LCS of Xi-1 and Yj
LenLCS(i, j) = max{LenLCS(i, j-1), LenLCS(i-1, j)}
The recurrence equation
LenLCS(i, j) =
    0                                        if i = 0 or j = 0
    LenLCS(i-1, j-1) + 1                     if i, j > 0 and xi = yj
    max{ LenLCS(i, j-1), LenLCS(i-1, j) }    otherwise
The dynamic programming
solution
• Initialize the first row and the first column of
the matrix LenLCS to 0
• Calculate LenLCS (1, j) for j = 1,…, n
• Then the LenLCS (2, j) for j = 1,…, n, etc.
• Also store, for each cell, an arrow pointing to
the array element that was used in the
computation.
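The table-filling described above, as a Python sketch (arrows are stored as the strings 'diag', 'up', 'left'; the tie-breaking preference for 'up' is a choice, not dictated by the slides):

```python
def len_lcs_table(x, y):
    """Fill the LenLCS table row by row, per the recurrence, with arrows."""
    m, n = len(x), len(y)
    L = [[0] * (n + 1) for _ in range(m + 1)]       # row 0 / column 0 stay 0
    arrow = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:                # xi = yj: diagonal arrow
                L[i][j] = L[i - 1][j - 1] + 1
                arrow[i][j] = "diag"
            elif L[i - 1][j] >= L[i][j - 1]:
                L[i][j] = L[i - 1][j]
                arrow[i][j] = "up"
            else:
                L[i][j] = L[i][j - 1]
                arrow[i][j] = "left"
    return L, arrow

# The example from the next slide: x = ABCB, y = BDCA
L, arrow = len_lcs_table("ABCB", "BDCA")
print(L[4][4])  # 2
```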
Example
       yj:  B   D   C   A
 xi     0   0   0   0   0
 A      0   0   0   0   1
 B      0   1   1   1   1
 C      0   1   1   2   2
 B      0   1   1   2   2
To find an LCS, follow the arrows; for each
diagonal arrow there is a member of the LCS.
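Following the arrows back from the bottom-right cell recovers an actual LCS, emitting a character at each diagonal step. A self-contained sketch (the arrows are followed implicitly by re-testing the recurrence during the walk, with ties broken toward "up"):

```python
def lcs(x, y):
    """Build the LenLCS table, then follow the arrows back from (m, n)."""
    m, n = len(x), len(y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # Walk back: diagonal on a match, otherwise toward the larger neighbor.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])          # diagonal arrow: member of the LCS
            i, j = i - 1, j - 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1                        # up arrow
        else:
            j -= 1                        # left arrow
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # BCBA, one of the two length-4 LCSs
```

With the opposite tie-break the walk can yield the other LCS from the earlier example; both have the optimal length 4.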