Dynamic Programming in Design Analysis and Algorithms (NikunjGoyal20)
Dynamic programming is an optimization approach that breaks down problems into overlapping sub-problems whose results are reused (memoization). It contrasts with divide and conquer by relying on previous outputs to optimize larger sub-problems, with two main approaches: top-down (recursive with memoization) and bottom-up (iterative with tabulation). Key applications include the Fibonacci series, knapsack problem, and Bellman-Ford algorithm for shortest paths.
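The two approaches described above can be sketched side by side; this is a minimal illustration (function names and the test value 10 are my own, not from any of the summarized slides):

```python
from functools import lru_cache

# Top-down: plain recursion, with memoization caching each subproblem result.
@lru_cache(maxsize=None)
def fib_top_down(n: int) -> int:
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up: tabulation fills a table from the base cases upward.
def fib_bottom_up(n: int) -> int:
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_top_down(10), fib_bottom_up(10))  # both print 55
```

Both versions do O(n) work; the naive recursion they replace is exponential because the same subproblems are recomputed repeatedly.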
Annotated slides for dynamic programming algorithm (johnathangamal27)
The document discusses the analysis and design of algorithms, focusing on dynamic programming as a strategy for solving optimization problems efficiently. It outlines key concepts, such as optimal substructure and overlapping subproblems, and provides examples like the Fibonacci sequence and longest common subsequence. The material also highlights algorithm design techniques, comparisons between recursive and dynamic programming solutions, and factors influencing time-space trade-offs.
Dynamic Programming Unit 3 PowerPoint Presentation (Shrinivasa6)
Dynamic programming is an algorithm design method that solves problems as a sequence of decisions, using previous solutions to build up to the desired result. The document discusses various examples, including coin denominations, Fibonacci numbers, binomial coefficients, and the 0-1 knapsack problem, illustrating how dynamic programming can optimize solutions to complex problems. It emphasizes the principle of optimality, which states that optimal solutions to a problem encompass optimal solutions to subproblems.
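The principle of optimality mentioned above can be seen concretely in the coin-change problem: the optimal way to make an amount is built from optimal solutions for smaller amounts. A minimal sketch (the function name and the sample denominations are illustrative, not taken from the slides):

```python
# Minimum number of coins to make `amount` from `denominations`
# (each denomination assumed available in unlimited supply).
# best[a] holds the optimal answer for amount a, built from the
# optimal answers for smaller amounts: the principle of optimality.
def min_coins(denominations, amount):
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in denominations:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else -1  # -1: unreachable

print(min_coins([1, 3, 4], 6))  # prints 2 (3 + 3)
```

Note that a greedy strategy fails on this instance (4 + 1 + 1 uses three coins), which is why the problem needs dynamic programming rather than a locally greedy choice.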
Dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It has two approaches: bottom-up, which solves subproblems first before combining their solutions, and top-down (memoization), which caches results of functions to avoid repeating calculations. Examples covered include calculating the Fibonacci sequence, the 0-1 knapsack problem modeled with a dynamic programming matrix, and the coin change problem solved by considering all combinations of previous amounts.
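The 0-1 knapsack matrix described above can be sketched as follows; this is one common formulation (the item weights, values, and capacity are made-up sample data):

```python
# 0-1 knapsack via a DP matrix:
# dp[i][w] = best achievable value using the first i items at capacity w.
def knapsack(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]            # option 1: skip item i
            if weights[i - 1] <= w:            # option 2: take item i
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack([2, 3, 4], [3, 4, 5], 5))  # prints 7 (take weights 2 and 3)
```

Each cell is computed once from cells already filled in, so the run time is O(n x capacity) instead of the exponential cost of trying every subset.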
This document provides an overview of dynamic programming. It begins by explaining that dynamic programming is a technique for solving optimization problems by breaking them down into overlapping subproblems and storing the results of solved subproblems in a table to avoid recomputing them. It then provides examples of problems that can be solved using dynamic programming, including Fibonacci numbers, binomial coefficients, shortest paths, and optimal binary search trees. The key aspects of dynamic programming algorithms, including defining subproblems and combining their solutions, are also outlined.
Dynamic programming is a powerful technique for solving optimization problems by breaking them down into overlapping subproblems. It works by storing solutions to already solved subproblems and building up to a solution for the overall problem. Three key aspects are defining the subproblems, writing the recurrence relation, and solving base cases to build up solutions bottom-up rather than top-down. The principle of optimality must also hold for a problem to be suitable for a dynamic programming approach. Examples discussed include shortest paths, coin change, knapsack problems, and calculating Fibonacci numbers.
This document discusses dynamic programming and provides examples to illustrate the technique. It begins by defining dynamic programming as a bottom-up approach to problem solving where solutions to smaller subproblems are stored and built upon to solve larger problems. It then provides examples of dynamic programming algorithms for calculating Fibonacci numbers, binomial coefficients, and finding shortest paths using Floyd's algorithm. The key aspects of dynamic programming like avoiding recomputing solutions and storing intermediate results in tables are emphasized.
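Floyd's algorithm, mentioned above, is itself a textbook dynamic program: the shortest-path table is relaxed through each intermediate vertex in turn. A minimal sketch (the three-vertex sample graph is my own illustration):

```python
# Floyd's algorithm: dist starts as the adjacency matrix and is
# improved by allowing each vertex k in turn as an intermediate stop.
INF = float("inf")

def floyd(dist):
    n = len(dist)
    d = [row[:] for row in dist]  # work on a copy of the input matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

graph = [[0, 3, INF],
         [INF, 0, 1],
         [2, INF, 0]]
print(floyd(graph))  # path 0 -> 2 improves from INF to 4 (via vertex 1)
```

The intermediate results live in the table itself, so no path is ever recomputed; the cost is O(n^3) for n vertices.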
Dynamic programming is a technique that breaks problems into subproblems and saves results to optimize solutions without recomputing subproblems. It is commonly used in computer science, mathematics, economics, and operations research for problems like the Fibonacci series, knapsack problem, and traveling salesman problem. Dynamic programming improves efficiency by storing subproblem solutions and avoiding redundant calculations. It can find optimal and approximate solutions to large problems. For the Fibonacci series, a dynamic programming approach builds up the sequence by calculating each term from the previous two terms rather than recursively calculating all subproblems.
Module 2 ppt.pptx: Divide and Conquer Method (JyoReddy9)
This document discusses dynamic programming and provides examples of problems that can be solved using dynamic programming. It covers the following key points:
- Dynamic programming can be used to solve problems that exhibit optimal substructure and overlapping subproblems. It works by breaking problems down into subproblems and storing the results of subproblems to avoid recomputing them.
- Examples of problems discussed include matrix chain multiplication, all pairs shortest path, optimal binary search trees, 0/1 knapsack problem, traveling salesperson problem, and flow shop scheduling.
- The document provides pseudocode for algorithms to solve matrix chain multiplication and optimal binary search trees using dynamic programming. It also explains the basic steps and principles of dynamic programming algorithm design.
The document presents an overview of dynamic programming, highlighting its significance in solving optimization problems through a bottom-up approach. Key characteristics include optimal substructure and overlapping subproblems, illustrated with examples like the Fibonacci series and shortest path problems. It also explains techniques for storing results in memory, specifically memoization and tabulation, along with performance comparisons in calculating Fibonacci numbers.
What Is Dynamic Programming? | Dynamic Programming Explained | Programming Fo... (Simplilearn)
The document introduces dynamic programming as an algorithmic paradigm for solving complex problems by breaking them into subproblems and memoizing their outcomes. It explains key concepts such as optimal substructure and overlapping subproblems, illustrated using the Fibonacci series, and describes two strategies for implementing dynamic programming: memoization and tabulation. The document emphasizes the importance of learning from past decisions to avoid repeating work and provides examples of problems suitable for dynamic programming.
W8L1 Introduction & Fibonacci Numbers part 1.pptx (sakibahmed181234)
The document provides an introduction to dynamic programming, particularly focusing on solving the Fibonacci numbers problem. It discusses key components of dynamic programming, including optimal substructures, overlapping subproblems, and the importance of tabular computation to avoid redundant calculations. The algorithm's structure involves characterizing optimal solutions, defining their values recursively, and constructing solutions in a bottom-up manner.
The document discusses the technique of dynamic programming. It begins with an example of using dynamic programming to compute the Fibonacci numbers more efficiently than a naive recursive solution. This involves storing previously computed values in a table to avoid recomputing them. The document then presents the problem of finding the longest increasing subsequence in an array. It defines the problem and subproblems, derives a recurrence relation, and provides both recursive and iterative memoized algorithms to solve it in quadratic time using dynamic programming.
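The quadratic longest-increasing-subsequence algorithm described above can be sketched like this; the sample array is an illustration of my own:

```python
# Longest increasing subsequence in O(n^2):
# lis[i] = length of the longest increasing subsequence ending at index i.
def longest_increasing_subsequence(a):
    if not a:
        return 0
    lis = [1] * len(a)
    for i in range(1, len(a)):
        for j in range(i):
            if a[j] < a[i]:                     # a[i] can extend the run at j
                lis[i] = max(lis[i], lis[j] + 1)
    return max(lis)

print(longest_increasing_subsequence([10, 9, 2, 5, 3, 7, 101, 18]))  # prints 4
```

The subproblem "best subsequence ending at i" is solved once per index and reused by every later index, which is exactly the overlapping-subproblem structure the document describes.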
Dynamic Programing.pptx: good for understanding (HUSNAINAHMAD39)
Dynamic programming (DP) is a method for solving complex problems by breaking them into simpler, overlapping subproblems, and storing their results to avoid redundancy. Key principles include overlapping subproblems and optimal substructure, illustrated through examples like the Fibonacci sequence. DP can be approached using two methods: top-down (memoization) and bottom-up (tabulation), each facilitating efficient problem-solving.
The document outlines the principles of dynamic programming, emphasizing its application in optimization problems, including weighted interval scheduling, maze routing, and the traveling salesman problem. It discusses the methodology involving recursion and memoization to break down problems into manageable subproblems while also contrasting it with divide-and-conquer approaches. The document provides examples and emphasizes the importance of efficiently handling recursive calls to optimize performance.
Dynamic programming (DP) is a technique used to solve complex problems by breaking them into simpler subproblems, solving each only once, and storing their results to enhance efficiency. It can be applied through two main approaches: top-down (memoization) and bottom-up (tabulation), both allowing for optimal solutions without redundant computations. DP is utilized in various fields including optimization, computer science, and operations research.
The document discusses dynamic programming and how it can be used to calculate the 20th term of the Fibonacci sequence. Dynamic programming breaks problems down into overlapping subproblems, solves each subproblem once, and stores the results for future use. It explains that the Fibonacci sequence can be calculated recursively with each term equal to the sum of the previous two. To calculate the 20th term, dynamic programming would calculate each preceding term only once and store the results, building up the solution from previously solved subproblems until it reaches the 20th term.
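The 20th-term calculation described above can be done bottom-up with only the two most recent terms stored; a minimal sketch (using the common convention F(1) = F(2) = 1, which is an assumption on my part since the slide's indexing isn't stated):

```python
# Build the nth Fibonacci term bottom-up: each term is computed once
# from the two stored predecessors, with F(1) = F(2) = 1.
def fib_term(n):
    a, b = 1, 1  # F(1), F(2)
    for _ in range(n - 2):
        a, b = b, a + b
    return b if n > 1 else a

print(fib_term(20))  # prints 6765
```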
This document provides an overview of dynamic programming, including examples of 1-dimensional, 2-dimensional, interval, tree, and subset dynamic programming problems. It explains the general process of solving dynamic programming problems through defining subproblems, finding recurrences relating the subproblems, and solving base cases. Specific examples covered include the longest common subsequence problem, editing strings to palindromes, tree coloring, and the traveling salesman problem.
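The longest common subsequence problem mentioned above is the canonical 2-dimensional example; a minimal sketch (the sample strings are the classic textbook pair, used here only as an illustration):

```python
# Longest common subsequence length:
# dp[i][j] = LCS length of the prefixes x[:i] and y[:j].
def lcs_length(x, y):
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # characters match: extend
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one char
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # prints 4 (e.g. "BCBA")
```

The table has (m+1) x (n+1) cells and each is filled in constant time, giving O(mn) overall; the naive recursion over both prefixes is exponential.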
Dynamic programming is a method for solving complex problems by breaking them into simpler subproblems, solving each only once, and storing their solutions. It is applicable in optimization problems where optimal substructure and overlapping subproblems are present, allowing for efficient solutions compared to divide-and-conquer strategies. The process involves characterizing the optimal solution structure, defining the problem recursively, computing values systematically, and sometimes constructing the optimal solution based on these computed values.
Dynamic programming is an algorithmic paradigm for solving complex problems by breaking them down into overlapping subproblems and storing their solutions to avoid redundancy. Key properties include overlapping subproblems and optimal substructure, which allow for problems like Fibonacci sequences, longest increasing subsequences, longest common subsequences, and more to be solved efficiently. Techniques such as memoization and tabulation are used to store solutions, significantly improving performance over naive recursive methods.
Dynamic programming is a technique for solving complex problems by breaking them down into simpler sub-problems. It involves storing solutions to sub-problems for later use, avoiding recomputing them. Examples where it can be applied include matrix chain multiplication and calculating Fibonacci numbers. For matrix chains, dynamic programming finds the optimal order for multiplying matrices with minimum computations. For Fibonacci numbers, it calculates values in linear time by storing previous solutions rather than exponentially recomputing them through recursion.
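The matrix-chain example above can be sketched as an interval DP; this is one standard formulation (the sample dimensions are my own illustration, not taken from the slides):

```python
# Matrix chain order: dims[i-1] x dims[i] is the shape of matrix i.
# cost[i][j] = minimum scalar multiplications to compute the product
# of matrices i..j, minimized over every split point k.
def matrix_chain_cost(dims):
    n = len(dims) - 1  # number of matrices in the chain
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # solve shorter chains first
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]

# 10x20, 20x5, 5x40: (A1 A2) A3 costs 1000 + 2000 = 3000 multiplications,
# while A1 (A2 A3) would cost 12000.
print(matrix_chain_cost([10, 20, 5, 40]))  # prints 3000
```

Because shorter chains are solved before longer ones, every `cost[i][k]` and `cost[k+1][j]` needed at a split is already in the table.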
This document discusses advanced algorithm design and analysis techniques including dynamic programming, greedy algorithms, and amortized analysis. It provides examples of dynamic programming including matrix chain multiplication and longest common subsequence. Dynamic programming works by breaking problems down into overlapping subproblems and solving each subproblem only once. Greedy algorithms make locally optimal choices at each step to find a global optimum. Amortized analysis averages the costs of a sequence of operations to determine average-case performance.
The document describes several algorithms that use dynamic programming techniques. It discusses the coin changing problem, computing binomial coefficients, Floyd's algorithm for finding all-pairs shortest paths, optimal binary search trees, the knapsack problem, and multistage graphs. For each problem, it provides the key recurrence relation used to build the dynamic programming solution in a bottom-up manner, often using a table to store intermediate results. It also analyzes the time and space complexity of the different approaches.
Dynamic Programming Complete by Mumtaz Ali (03154103173) (Mumtaz Ali)
The document discusses dynamic programming, including its meaning, definition, uses, techniques, and examples. Dynamic programming refers to breaking large problems down into smaller subproblems, solving each subproblem only once, and storing the results for future use. This avoids recomputing the same subproblems repeatedly. Examples covered include matrix chain multiplication and the Fibonacci sequence, along with the concept of optimal substructure. The document provides details on formulating and solving dynamic programming problems through recursive definitions and storing results in tables.
Dynamic programming is an algorithm design technique for optimization problems that reduces time by increasing space usage. It works by breaking problems down into overlapping subproblems and storing the solutions to subproblems, rather than recomputing them, to build up the optimal solution. The key aspects are identifying the optimal substructure of problems and handling overlapping subproblems in a bottom-up manner using tables. Examples that can be solved with dynamic programming include the knapsack problem, shortest paths, and matrix chain multiplication.
Non-Communicable Diseases and National Health Programs – Unit 10 | B.Sc Nursi... (RAKESH SAJJAN)
This PowerPoint presentation is prepared for Unit 10 – Non-Communicable Diseases and National Health Programs, as per the 5th Semester B.Sc Nursing syllabus outlined by the Indian Nursing Council (INC) under the subject Community Health Nursing – I.
This unit focuses on equipping students with knowledge of the causes, prevention, and control of non-communicable diseases (NCDs), which are a major public health challenge in India. The presentation emphasizes the nurse’s role in early detection, screening, management, and referral services under national-level programs.
🔹 Key Topics Included:
Definition, burden, and impact of NCDs in India
Epidemiology, risk factors, signs/symptoms, prevention, and management of:
Diabetes Mellitus
Hypertension
Cardiovascular Diseases
Stroke & Obesity
Thyroid Disorders
Blindness
Deafness
Injuries and Accidents (incl. road traffic injuries and trauma guidelines)
NCD-2 Cancers:
Breast Cancer
Cervical Cancer
Oral Cancer
Risk factors, screening, diagnosis, early signs, referral & palliative care
Role of nurse in screening, referral, counseling, and continuum of care
National Programs:
National Program for Prevention and Control of Cancer, Diabetes, Cardiovascular Diseases and Stroke (NPCDCS)
National Program for Control of Blindness
National Program for Prevention and Control of Deafness
National Tobacco Control Program (NTCP)
Introduction to Universal Health Coverage and Ayushman Bharat
Use of standard treatment protocols and referral flowcharts
This presentation is ideal for:
Classroom lectures, field assignments, health education planning, and student projects
Preparing for university exams, class tests, and community field postings
Energy Balances Of Oecd Countries 2011 Iea Statistics 1st Edition Oecdrazelitouali
Energy Balances Of Oecd Countries 2011 Iea Statistics 1st Edition Oecd
Energy Balances Of Oecd Countries 2011 Iea Statistics 1st Edition Oecd
Energy Balances Of Oecd Countries 2011 Iea Statistics 1st Edition Oecd
ROLE PLAY: FIRST AID -CPR & RECOVERY POSITION.pptxBelicia R.S
Role play : First Aid- CPR, Recovery position and Hand hygiene.
Scene 1: Three friends are shopping in a mall
Scene 2: One of the friend becomes victim to electric shock.
Scene 3: Arrival of a first aider
Steps:
Safety First
Evaluate the victim‘s condition
Call for help
Perform CPR- Secure an open airway, Chest compression, Recuse breaths.
Put the victim in Recovery position if unconscious and breathing normally.
Slides from a Capitol Technology University presentation covering doctoral programs offered by the university. All programs are online, and regionally accredited. The presentation covers degree program details, tuition, financial aid and the application process.
Ray Dalio How Countries go Broke the Big CycleDadang Solihin
A complete and practical understanding of the Big Debt Cycle. A much more practical understanding of how supply and demand really work compared to the conventional economic thinking. A complete and practical understanding of the Overall Big Cycle, which is driven by the Big Debt Cycle and the other major cycles, including the big political cycle within countries that changes political orders and the big geopolitical cycle that changes world orders.
How to Create an Event in Odoo 18 - Odoo 18 SlidesCeline George
Creating an event in Odoo 18 is a straightforward process that allows you to manage various aspects of your event efficiently.
Odoo 18 Events Module is a powerful tool for organizing and managing events of all sizes, from conferences and workshops to webinars and meetups.
Paper 108 | Thoreau’s Influence on Gandhi: The Evolution of Civil DisobedienceRajdeep Bavaliya
Dive into the powerful journey from Thoreau’s 19th‑century essay to Gandhi’s mass movement, and discover how one man’s moral stand became the backbone of nonviolent resistance worldwide. Learn how conscience met strategy to spark revolutions, and why their legacy still inspires today’s social justice warriors. Uncover the evolution of civil disobedience. Don’t forget to like, share, and follow for more deep dives into the ideas that changed the world.
M.A. Sem - 2 | Presentation
Presentation Season - 2
Paper - 108: The American Literature
Submitted Date: April 2, 2025
Paper Name: The American Literature
Topic: Thoreau’s Influence on Gandhi: The Evolution of Civil Disobedience
[Please copy the link and paste it into any web browser to access the content.]
Video Link: https://youtu.be/HXeq6utg7iQ
For a more in-depth discussion of this presentation, please visit the full blog post at the following link: https://rajdeepbavaliya2.blogspot.com/2025/04/thoreau-s-influence-on-gandhi-the-evolution-of-civil-disobedience.html
Please visit this blog to explore additional presentations from this season:
Hashtags:
#CivilDisobedience #ThoreauToGandhi #NonviolentResistance #Satyagraha #Transcendentalism #SocialJustice #HistoryUncovered #GandhiLegacy #ThoreauInfluence #PeacefulProtest
Keyword Tags:
civil disobedience, Thoreau, Gandhi, Satyagraha, nonviolent protest, transcendentalism, moral resistance, Gandhi Thoreau connection, social change, political philosophy
Assisting Individuals and Families to Promote and Maintain Health – Unit 7 | ...RAKESH SAJJAN
This PowerPoint presentation is based on Unit 7 – Assisting Individuals and Families to Promote and Maintain Their Health, a core topic in Community Health Nursing – I for 5th Semester B.Sc Nursing students, as per the Indian Nursing Council (INC) guidelines.
The unit emphasizes the nurse’s role in family-centered care, early detection of health problems, health promotion, and appropriate referrals, especially in the context of home visits and community outreach. It also strengthens the student’s understanding of nursing responsibilities in real-life community settings.
📘 Key Topics Covered in the Presentation:
Introduction to family health care: needs, principles, and objectives
Assessment of health needs of individuals, families, and groups
Observation and documentation during home visits and field assessments
Identifying risk factors: environmental, behavioral, genetic, and social
Conducting growth and development monitoring in infants and children
Recording and observing:
Milestones of development
Menstrual health and reproductive cycle
Temperature, blood pressure, and vital signs
General physical appearance and personal hygiene
Social assessment: understanding family dynamics, occupation, income, living conditions
Health education and counseling for individuals and families
Guidelines for early detection and referral of communicable and non-communicable diseases
Maintenance of family health records and individual health cards
Assisting families with:
Maternal and child care
Elderly and chronic disease management
Hygiene and nutrition guidance
Utilization of community resources – referral linkages, support services, and local health programs
Role of nurse in coordinating care, advocating for vulnerable individuals, and empowering families
Promoting self-care and family participation in disease prevention and health maintenance
This presentation is highly useful for:
Nursing students preparing for internal exams, university theory papers, or community postings
Health educators conducting family teaching sessions
Students conducting fieldwork and project work during community postings
Public health nurses and outreach workers dealing with preventive, promotive, and rehabilitative care
It’s structured in a step-by-step format, featuring tables, case examples, and simplified explanations tailored for easy understanding and classroom delivery.
How to Manage Inventory Movement in Odoo 18 POSCeline George
Inventory management in the Odoo 18 Point of Sale system is tightly integrated with the inventory module, offering a solution to businesses to manage sales and stock in one united system.
"Geography Study Material for Class 10th" provides a comprehensive and easy-to-understand resource for key topics like Resources & Development, Water Resources, Agriculture, Minerals & Energy, Manufacturing Industries, and Lifelines of the National Economy. Designed as per the latest NCERT/JKBOSE syllabus, it includes notes, maps, diagrams, and MODEL question Paper to help students excel in exams. Whether revising for exams or strengthening conceptual clarity, this material ensures effective learning and high scores. Perfect for last-minute revisions and structured study sessions.
Dynamic Programming: Memoization, Introduction to Algorithms
1. Introduction
CSE 221: Algorithms
Dynamic Programming
Mumit Khan
Fatema Tuz Zohora
Computer Science and Engineering
BRAC University
References
1 Jon Kleinberg and Éva Tardos, Algorithm Design. Pearson Education, 2006.
2 T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, Second Edition. The MIT Press, September 2001.
Last modified: November 27, 2012
This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
Licensed under CSE 221: Algorithms 1 / 53
2. Contents
1 Introduction
Memoization
Dynamic programming
Weighted interval scheduling problem
0/1 Knapsack problem
Coin changing problem
What problems can be solved by DP?
Conclusion
3. Dynamic Programming (DP)
Build up the solution by computing solutions to the subproblems.
Don’t solve the same subproblem twice, but rather save the solution so it can be re-used later on.
Often used for a large class of optimization problems.
Unlike Greedy algorithms, implicitly solves all subproblems.
Motivating the case for DP with Memoization – a top-down technique, and then moving on to Dynamic Programming – a bottom-up technique.
“Greedy is evil, Dynamic Programming is good.” – Prof. Jeff Erickson, University of Illinois, Urbana-Champaign.
10. Recursive solution to Fibonacci numbers
Definition (Fibonacci numbers)
The Fibonacci numbers are given by the following sequence:
⟨0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, . . .⟩
and described by the following recurrence:
Fib(n) = n if n = 0 or 1
Fib(n) = Fib(n − 1) + Fib(n − 2) if n ≥ 2
Straightforward recursive algorithm
Fibonacci(n) n ≥ 0
1 if n = 0 or n = 1
2 then return n
3 else return fibonacci(n − 1) + fibonacci(n − 2)
17. Recursion tree
Complexity
This recursive algorithm for Fibonacci numbers has exponential running time!
To be precise, T(n) = O(ϕ^n), where ϕ = (1 + √5)/2 is the golden ratio.
Licensed under CSE 221: Algorithms 6 / 53
19. Redundant computations
Note how fib(n − 2) and fib(n − 3) are each being computed twice.
In fact, computing fib(n − 2) involves computing a whole subtree.
Observations
Spectacular redundancy in computation – how many times are we computing fib(n − 2)? fib(n − 3)?
What if we compute and save the result of fib(i) for i ∈ {2, 3, . . . , n} the first time, and then re-use it each time afterward?
Ah, we’ve just (re)discovered Memo(r)ization!
27. Memoization
Definition (Memoization)
The process of saving solutions to subproblems so that they can be re-used later without redundant computations.
Basic idea
Typically, the solutions to subproblems (i.e., the intermediate solutions) are saved in a global array, and are later looked up and re-used as needed.
1 At each step of computation, first see if the solution to the subproblem has already been found and saved.
2 If so, simply return the saved solution.
3 If not, compute the solution, and save it before returning it.
32. Memoized recursive algorithm for Fibonacci numbers
M-Fibonacci(n) n ≥ 0, global F = [0 . . n]
1 if n = 0 or n = 1
2 then return n Our base conditions.
3 if F[n] is empty No saved solution found for n.
4 then F[n] ← m-fibonacci(n − 1) + m-fibonacci(n − 2)
5 return F[n]
Questions
What is this global array F? It’s used to store the values of the intermediate results, and must be initialized by the caller to all empty.
What is an appropriate sentinel to indicate that F[i], 0 ≤ i ≤ n, has not been solved yet (i.e., is empty)? Use −1, which is guaranteed to be an invalid value.
37. Memoized . . . Fibonacci numbers (continued)
Fibonacci(n) n ≥ 0
Allocate an array F[0 . . n] to save results (length[F] = n + 1).
1 for i ← 0 to n
2 do F[i] ← −1 No solution computed for i yet (sentinel)
3 return m-fibonacci(F, n)
M-Fibonacci(F, n) n ≥ 0, F = [0 . . n]
1 if n ≤ 1
2 then return n
3 if F[n] = −1 No saved solution found for n.
4 then F[n] ← m-fibonacci(F, n − 1) + m-fibonacci(F, n − 2)
5 return F[n]
Running time
Each element F[2] . . . F[n] is filled in just once in Θ(1) time, so T(n) = Θ(n).
Licensed under CSE 221: Algorithms 10 / 53
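A Python rendering of the two-procedure scheme above makes the wrapper/worker split concrete (the function names follow the pseudocode; representing the table as a Python list is our choice):

```python
# Memoized Fibonacci: a wrapper that allocates the table F[0..n]
# filled with the sentinel -1, and a recursive worker that fills it.
# Each F[n] is computed at most once, so the running time is Θ(n).

def fibonacci(n):
    """Wrapper: allocate F[0..n], all entries marked empty with -1."""
    F = [-1] * (n + 1)
    return m_fibonacci(F, n)

def m_fibonacci(F, n):
    if n <= 1:
        return n                  # base conditions
    if F[n] == -1:                # no saved solution for n yet
        F[n] = m_fibonacci(F, n - 1) + m_fibonacci(F, n - 2)
    return F[n]

fibonacci(50)  # 12586269025, far beyond the naive version's reach
```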
40. Memoization highlights
Idea is to re-use saved solutions, trading off space for time.
Any recursive algorithm can be memoized, but it only helps if there is redundancy in computing solutions to subproblems (in other words, if there are overlapping subproblems).
For any recursive algorithm where redundant solutions are computed, Memoization is an appropriate technique.
Often called Top-down Dynamic Programming.
Questions to ask (and remember)
What are the drawbacks, if any, of memoization?
Would all recursive algorithms benefit from memoization? For example, would the recursive algorithm to compute the factorial of a number benefit from memoization?
Licensed under CSE 221: Algorithms 11 / 53
49. Dynamic programming
Note how the recursive algorithm computes the Fibonacci number n top down by computing (and saving) solutions for smaller values.
Idea: why not build up the solution bottom-up, starting from the base case(s) all the way to n?
This bottom-up construction gives us the first Dynamic Programming algorithm.
Dynamic programming algorithm for Fibonacci numbers
Fibonacci(n) n ≥ 0
1 F[0] ← 0
2 F[1] ← 1
3 for i ← 2 to n
4 do F[i] ← F[i − 1] + F[i − 2]
5 return F[n]
T(n) = Θ(n)
Licensed under CSE 221: Algorithms 13 / 53
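In Python the bottom-up table-filling looks like this (the n ≤ 1 guard is our addition, so the table indices stay valid even for n = 0):

```python
# Bottom-up tabulation, as in the pseudocode above: fill F[0..n]
# in increasing order, so each F[i] depends only on entries
# that have already been computed.

def fibonacci(n):
    if n <= 1:                 # guard so F[1] exists even when n = 0
        return n
    F = [0] * (n + 1)
    F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]
```

Since each step reads only the two previous entries, the table can be replaced by two rolling variables, cutting space to Θ(1) while keeping T(n) = Θ(n).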
54. Dynamic programming (continued)
The pattern
1 Formulate the problem recursively. Write a formula for the whole problem as a simple combination of the answers to smaller subproblems.
2 Build solutions to the recurrence from the bottom up. Write an algorithm that starts with the base case, and works its way up to the final solution by considering the subproblems in the correct order.
Observations
1 Must ensure that the recurrence is correct, of course!
2 Need a “place” to store the solutions to subproblems, and need to look these solutions up when needed. Typically, but not always, a multi-dimensional table is used as storage.
Licensed under CSE 221: Algorithms 14 / 53
62. Introduction Memoization Dynamic programming Weighted interval sched
Weighted interval scheduling problem
Definition (Weighted interval scheduling problem)
Given a set of schedules I = {Ii }, with associated weights
W = {wi }, find A ⊆ I such that the members of A are
non-conflicting and the total weight
P
i∈A wi is maximized.
Example (an instance of weighted interval problem)
|A| =???,
P
i∈A wi =???.
Licensed under CSE 221: Algorithms 16 / 53
63. Introduction Memoization Dynamic programming Weighted interval sched
Weighted interval scheduling problem
Definition (Weighted interval scheduling problem)
Given a set of schedules I = {Ii }, with associated weights
W = {wi }, find A ⊆ I such that the members of A are
non-conflicting and the total weight
P
i∈A wi is maximized.
Example (using an optimal strategy)
|A| = 1,
P
i∈A wi = 3.
Licensed under CSE 221: Algorithms 16 / 53
64. Introduction Memoization Dynamic programming Weighted interval sched
Weighted interval scheduling problem
Definition (Weighted interval scheduling problem)
Given a set of schedules I = {Ii }, with associated weights
W = {wi }, find A ⊆ I such that the members of A are
non-conflicting and the total weight ∑i∈A wi is maximized.
Example (using an optimal strategy)
|A| = 1, ∑i∈A wi = 3.
What now?
The first step is to formulate a recursive solution, but before that
we need to figure out what the subproblems are.
Developing a recursive solution
Let W be an instance of a weighted interval problem.
As in the greedy approach, we sort the intervals according to
finish times such that fi ≤ fj for i < j (“a natural order of the
subproblems”).
Let ϑ be an optimal solution (even if we have no idea what it
is yet).
All we can say about ϑ is the following: interval n (the last
interval) either belongs to ϑ, or it doesn’t.
If n ∈ ϑ Then clearly all intervals that conflict with n are
not members of ϑ. ϑ then contains n, plus an
optimal solution to all intervals that do not
conflict with n. We now need a quick way of
computing the list of conflicting intervals for
n.
If n ∉ ϑ Then ϑ contains an optimal solution for the
intervals {1, 2, . . . , n − 1}.
Developing a recursive solution (continued)
Example (an instance of a weighted interval problem)
For each interval i, compute p(i), the rightmost interval among
the non-conflicting preceding intervals of i. Define p(j) = 0 if no
request i < j is disjoint from j.
Developing a recursive solution (continued)
Example (an instance of a weighted interval problem)
For a given interval i, p(i) means that intervals
{p(i) + 1, p(i) + 2, . . . , i − 1} overlap with it. For example,
p(6) = 3, which means that intervals {4, 5} overlap interval 6.
Developing a recursive solution (continued)
Example (an instance of a weighted interval problem)
Alternatively, intervals {1, 2, . . . , p(i)} do not overlap interval i.
For example, p(6) = 3 means that intervals {1, 2, 3} do not
overlap interval 6.
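The values p(i) can be computed in O(n lg n) total time with a binary search over the sorted finish times. A sketch in Python; the interval data here is hypothetical (the slides' figure is not available) but chosen so that p(6) = 3, matching the example:

```python
import bisect

def compute_p(intervals):
    """intervals: list of (start, finish) pairs, already sorted by
    finish time (interval j is intervals[j-1]). Returns p[0..n]."""
    finishes = [f for _, f in intervals]
    p = [0] * (len(intervals) + 1)
    for i, (s, _) in enumerate(intervals, 1):
        # the count of intervals finishing at or before start_i is the
        # 1-indexed rightmost non-conflicting preceding interval (0 if none)
        p[i] = bisect.bisect_right(finishes, s)
    return p

# hypothetical instance, sorted by finish time
intervals = [(0, 3), (2, 5), (4, 6), (5, 8), (7, 9), (6, 10)]
```

With this data, p(6) = 3: intervals {4, 5} overlap interval 6, while {1, 2, 3} do not.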
Developing a recursive solution (continued)
If n ∈ ϑ, then ϑ must include, in addition to interval n, an
optimal solution to the subproblem consisting of intervals
{1, 2, . . . , p(n)}. If ϑ(n) is an optimal solution to the
subproblem for intervals {1, 2, . . . , n}, then:
ϑ(n) = wn + ϑ(p(n))
If n ∉ ϑ, then ϑ simply contains an optimal solution to the
subproblem consisting of the intervals {1, 2, . . . , n − 1}.
ϑ(n) = ϑ(n − 1)
Since an optimal solution must maximize the sum of the
weights in the intervals it contains, we accept the larger of the
two.
ϑ(n) = max(wn + ϑ(p(n)), ϑ(n − 1))
Developing a recursive solution (continued)
Recursive algorithm for an optimal value
If OPT(j) is an optimal solution to the subproblem for intervals
{1, 2, . . . , j}, for any j ∈ {1, 2, . . . , n}, then:
OPT(j) = max(wj + OPT(p(j)), OPT(j − 1))
Extracting the intervals in an optimal solution
The interval j is in an optimal solution OPT(j) if and only if the
first of the two options is at least as large as the second.
Interval j belongs to an optimal solution on the set {1, 2, . . . , j} if
and only if
wj + OPT(p(j)) ≥ OPT(j − 1)
A recursive algorithm
WIS(j)
1 if j = 0
2 then return 0
3 else return max(wj + WIS(p(j)),
WIS(j − 1))
The initial call is WIS(n) for intervals {1, 2, . . . , n} sorted in
non-decreasing order of the finishing times.
The tree grows very rapidly, leading to exponential running
time. The tree when p(j) = j − 2 for all j shows how quickly
it grows.
There are many overlapping subproblems, so the obvious
choice is to memoize the recursion.
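The exponential blowup for p(j) = j − 2 can be seen by counting calls of the plain recursion; the call count obeys C(j) = C(j − 1) + C(j − 2) + 1, a Fibonacci-like growth. A small counting sketch (weights set to 1, since only the call count matters here):

```python
def wis_calls(n):
    """Count the calls made by the naive WIS recursion when
    p(j) = j - 2 for all j (all weights equal to 1)."""
    calls = 0
    def WIS(j):
        nonlocal calls
        calls += 1
        if j == 0:
            return 0
        p_j = max(j - 2, 0)
        return max(1 + WIS(p_j), WIS(j - 1))
    WIS(n)
    return calls
```

The count roughly doubles every two steps, i.e. it grows like the golden ratio to the power n.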
Memoizing the recursion
M-WIS(j)
1 if j = 0
2 then return 0
3 elseif M[j] is empty
4 then M[j] ← max(wj + M-WIS(p(j)),
M-WIS(j − 1))
5 return M[j]
Each entry M[j] gets filled in only once, in Θ(1) time, and
there are n + 1 entries, so M-WIS(n) takes Θ(n) time.
Of course, sorting the intervals by the finish times takes
Θ(n lg n) time.
This memoized algorithm plus sorting the intervals takes
Θ(n lg n) + Θ(n) = Θ(n lg n) time.
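The memoized pseudocode translates directly into Python. A sketch on a small hypothetical instance (the slides' figure data is not shown, so the weights below are invented):

```python
def m_wis(w, p):
    """Memoized weighted interval scheduling. w and p are 1-indexed
    via a dummy entry at index 0; returns the table M, with M[n]
    holding the optimal total weight."""
    n = len(w) - 1
    M = [None] * (n + 1)
    M[0] = 0
    def go(j):
        if M[j] is None:  # each entry is filled in only once
            M[j] = max(w[j] + go(p[j]), go(j - 1))
        return M[j]
    go(n)
    return M

# hypothetical instance: 6 intervals sorted by finish time
w = [0, 2, 4, 4, 7, 2, 1]   # w[j] = weight of interval j
p = [0, 0, 0, 1, 2, 3, 3]   # p[j] = rightmost disjoint predecessor of j
```

For this data the optimal total weight is 11 (intervals 2 and 4).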
Computing a solution in addition to its values
The memoized algorithm only computes the optimal value,
but does not extract the intervals that make up the solution.
The key to extracting the solution is to note that item j is in
ϑ if and only if wj + M[p(j)] ≥ M[j − 1]. This provides two
ways of extracting the intervals in the optimal solution:
1 Trace back from M[n] and extract the solution by checking
which choice was made – j − 1 or p(j) – when M[j] was
computed.
2 Whenever a choice is made between two options, save in
pred[j], the predecessor pointer, the choice that was made
between j − 1 and p(j).
Computing a solution in addition to its values (continued)
The first way recursively extracts an optimal set of intervals
for a problem size of 1 ≤ j ≤ n.
Calling WIS-find-solution(n) extracts all the intervals in
the optimal solution.
WIS-find-solution(j)
1 if j = 0
2 then Output nothing
3 else
4 if wj + M[p(j)] ≥ M[j − 1]
5 then Output j
6 WIS-find-solution(p(j))
7 else WIS-find-solution(j − 1)
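The traceback can equally be written as a loop; a Python sketch of WIS-find-solution, reusing the same hypothetical instance and its filled table M (invented data, since the slides' figure is not shown):

```python
def find_solution(j, w, M, p):
    """Extract the intervals of an optimal solution by re-checking
    which option won when M[j] was computed."""
    out = []
    while j > 0:
        if w[j] + M[p[j]] >= M[j - 1]:
            out.append(j)   # interval j is in the optimal solution
            j = p[j]
        else:
            j = j - 1
    return out

# hypothetical instance and its table M, where
# M[j] = max(w[j] + M[p[j]], M[j-1])
w = [0, 2, 4, 4, 7, 2, 1]
p = [0, 0, 0, 1, 2, 3, 3]
M = [0, 2, 4, 6, 11, 11, 11]
```

Starting from j = 6 this outputs intervals 4 and 2, whose weights sum to M[6] = 11.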
Computing a solution in addition to its values (continued)
The second way requires that M-WIS use an auxiliary array
pred[0 . . n] to save the predecessor of each interval in the
solution.
Initialize pred[j] = 0 for all 0 ≤ j ≤ n.
M-WIS(j)
1 if j = 0
2 then return 0
3 elseif M[j] is empty
4 then if wj + M-WIS(p(j)) ≥ M-WIS(j − 1)
5 then M[j] ← wj + M-WIS(p(j))
6 pred[j] ← p(j)
7 else M[j] ← M-WIS(j − 1)
8 pred[j] ← j − 1
9 return M[j]
Computing a solution in addition to its values (continued)
Now that we have pred[j] filled in, we start from M[n] and work
backwards.
1 If pred[j] = p(j), then we did add the jth interval to the final
solution, and we continue with j ← p(j).
2 If pred[j] ≠ p(j), then we did not add the jth interval to the
final solution, and we continue with j ← j − 1.
WIS-find-solution(j)
1 if j = 0
2 then Output nothing
3 else
4 if pred[j] = p(j)
5 then Output j
6 WIS-find-solution(p(j))
7 else WIS-find-solution(j − 1)
Can you come up with an iterative version?
Developing a Dynamic Programming algorithm
The value of an optimal solution OPT(j) for any
j ∈ {1, 2, 3, . . . , n} depends on the values of OPT(p(j)) and
OPT(j − 1).
We can build the table M[j] bottom-up, starting from the
base case of j = 0, up to n by using the memoized recursive
formulation: M[j] = max(wj + M[p(j)], M[j − 1]).
Dynamic programming algorithm
WIS(n)
1 M[0] ← 0
2 for j ← 1 to n
3 do M[j] = max(wj + M[p(j)], M[j − 1])
4 return M[n]
T(n) = Θ(n)
Computing a solution in addition to its values
WIS(n)
1 M[0] ← 0
2 for j ← 1 to n
3 do if wj + M[p(j)] ≥ M[j − 1]
4 then M[j] = wj + M[p(j)]
5 pred[j] = p(j)
6 else M[j] = M[j − 1]
7 pred[j] = j − 1
8 return M[n]
WIS-find-solution(n)
1 j ← n
2 while j > 0
3 do if pred[j] = p(j)
4 then Output j
5 j ← pred[j]
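The full bottom-up algorithm with predecessor pointers can be sketched in Python; the instance data is hypothetical, as before:

```python
def wis(w, p):
    """Bottom-up weighted interval scheduling with predecessor
    pointers; w and p are 1-indexed via a dummy entry at index 0.
    Returns (optimal value, intervals in one optimal solution)."""
    n = len(w) - 1
    M = [0] * (n + 1)
    pred = [0] * (n + 1)
    for j in range(1, n + 1):
        if w[j] + M[p[j]] >= M[j - 1]:
            M[j] = w[j] + M[p[j]]
            pred[j] = p[j]      # interval j was taken
        else:
            M[j] = M[j - 1]
            pred[j] = j - 1     # interval j was skipped
    sol, j = [], n              # iterative traceback
    while j > 0:
        if pred[j] == p[j]:
            sol.append(j)
        j = pred[j]
    return M[n], sol

w = [0, 2, 4, 4, 7, 2, 1]   # hypothetical weights
p = [0, 0, 0, 1, 2, 3, 3]   # hypothetical p(j) values
```

On this instance the optimal value is 11, achieved by intervals {2, 4}.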
So, you think you understand Dynamic Programming now?
Answer the following questions
1 Instead of sorting the intervals by finish time, what if we
sorted the requests by start time?
2 What if we didn’t sort the requests at all? Would it still work?
3 If all the weights are the same, what does this problem
become? Can you solve it using DP?
Contents
1 Introduction
Memoization
Dynamic programming
Weighted interval scheduling problem
0/1 Knapsack problem
Coin changing problem
What problems can be solved by DP?
Conclusion
0/1 knapsack problem
Definition (0/1 knapsack problem)
Given a set S of n items, such that each item i has a positive
benefit vi and a positive weight wi , the goal is to find the
maximum-benefit subset that does not exceed a given weight W .
Formally, we wish to determine a subset T ⊆ S that maximizes
∑i∈T vi , subject to ∑i∈T wi ≤ W .
Maximum weight: W = 4 kg
Optimal solution: items B and C. Benefit: 370.
Developing a recursive solution
Let S be an instance of a 0/1 Knapsack problem, and ϑ be an
optimal solution (even if we have no idea what it is yet).
Note that the presence of an item i in ϑ does not preclude
any other item j ≠ i in ϑ.
If item n weighs more than the maximum allowed weight, it
will not be in ϑ.
Otherwise, all we can say about ϑ is the following: item n
(the last one) either belongs to ϑ, or it doesn’t.
If n ∈ ϑ Then the optimal solution contains n, plus an
optimal solution for the other n − 1 items, but
with a reduced maximum weight of W − wn.
If n ∉ ϑ Then ϑ simply contains an optimal solution for
the first n − 1 items, with the maximum allowed
weight W remaining unchanged.
We have two parameters for each subproblem – the items S,
and the maximum allowed weight W .
Developing a recursive solution (continued)
wn > W =⇒ n ∉ ϑ.
ϑ(n, W ) = ϑ(n − 1, W )
Otherwise, n is either ∈ ϑ or ∉ ϑ.
If n ∈ ϑ, then ϑ(n, W ) is an optimal solution to the
subproblem for items {1, 2, . . . , n}:
ϑ(n, W ) = vn + ϑ(n − 1, W − wn)
If n ∉ ϑ, then ϑ(n, W ) simply contains an optimal solution to
the subproblem consisting of the items {1, 2, . . . , n − 1}:
ϑ(n, W ) = ϑ(n − 1, W )
Since an optimal solution must maximize the sum of the
benefits of the items it contains, we accept the larger of the
two.
ϑ(n, W ) = max(vn + ϑ(n − 1, W − wn), ϑ(n − 1, W ))
Developing a recursive solution (continued)
Recursive algorithm for an optimal value
If OPT(j, w) is an optimal solution to the subproblem for items
{1, 2, . . . , j}, for any j ∈ {1, 2, . . . , n}, and with a maximum
allowed weight of w, then:
OPT(j, w) = OPT(j − 1, w) if wj > w,
OPT(j, w) = max(vj + OPT(j − 1, w − wj), OPT(j − 1, w)) otherwise.
Extracting the items in an optimal solution
The item j is in an optimal solution OPT(j, w) if and only if the
first of the two options is at least as large as the second:
vj + OPT(j − 1, w − wj) ≥ OPT(j − 1, w)
A recursive algorithm
Knapsack(j, w)
1 if j = 0 or w = 0
2 then return 0
3 elseif wj > w
4 then return Knapsack(j − 1, w)
5 else return max(vj + Knapsack(j − 1, w − wj),
Knapsack(j − 1, w))
The initial call is Knapsack(n, W ).
The tree grows very rapidly, leading to exponential running
time.
There are many overlapping subproblems, so the obvious
choice is to memoize the recursion.
Licensed under CSE 221: Algorithms 36 / 53
Memoizing the recursion
M-Knapsack(j, w)
1 if j = 0 or w = 0
2 then return 0
3 elseif M[j, w] is empty
4 then if wj > w
5 then M[j, w] ← M-Knapsack(j − 1, w)
6 else M[j, w] ← max(vj + M-Knapsack(j − 1, w − wj),
M-Knapsack(j − 1, w))
7 return M[j, w]
Each entry M[j, w] gets filled in only once, in Θ(1) time,
and there are (n + 1) × (W + 1) entries, so M-Knapsack(n, W )
takes Θ(nW ) time.
Is this a linear-time algorithm?
No: this is a pseudo-polynomial running time, since it depends
on the numeric value of W , a parameter independent of the
problem size n (and possibly exponential in the number of bits
needed to write W down).
Licensed under CSE 221: Algorithms 37 / 53
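The memoized recursion above can be transcribed into Python as a sketch (the function name, the 0-based lists, and the dictionary standing in for the table M are assumptions of this transcription):

```python
def m_knapsack(values, weights, W):
    """Memoized 0/1 knapsack: optimal value using items 1..n, capacity W."""
    M = {}  # memo table: (j, w) -> optimal value

    def opt(j, w):
        if j == 0 or w == 0:           # base case: no items or no capacity
            return 0
        if (j, w) not in M:            # M[j, w] is empty
            if weights[j - 1] > w:     # item j does not fit
                M[j, w] = opt(j - 1, w)
            else:                      # best of taking or skipping item j
                M[j, w] = max(values[j - 1] + opt(j - 1, w - weights[j - 1]),
                              opt(j - 1, w))
        return M[j, w]

    return opt(len(values), W)
```

On the instance used later in these slides (W = 9, weights {2, 3, 4, 5}, values {3, 4, 5, 7}), m_knapsack([3, 4, 5, 7], [2, 3, 4, 5], 9) returns 12.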
Developing a Dynamic Programming algorithm
Knapsack(n, W )
1 for i ← 0 to n // no remaining capacity
2 do M[i, 0] ← 0
3 for w ← 0 to W // no item to choose from
4 do M[0, w] ← 0
5 for j ← 1 to n
6 do for w ← 1 to W
7 do if wj > w // we cannot take object j
8 then M[j, w] ← M[j − 1, w]
9 else M[j, w] ← max(vj + M[j − 1, w − wj],
M[j − 1, w])
10 return M[n, W ]
Licensed under CSE 221: Algorithms 38 / 53
0/1 Knapsack recursive algorithm in action
Given the following (from M. H. Alsuwaiyel, ex. 7.6):
W = 9
wi = {2, 3, 4, 5}
vi = {3, 4, 5, 7}
Licensed under CSE 221: Algorithms 39 / 53
0/1 Knapsack DP algorithm in action
Given the following (from M. H. Alsuwaiyel, ex. 7.6):
W = 9
wi = {2, 3, 4, 5}
vi = {3, 4, 5, 7}
Licensed under CSE 221: Algorithms 40 / 53
Related problem: Subset Sums problem
Definition (Subset Sums problem)
Given a set S of n items, such that each item i has a positive
weight wi , the goal is to find the maximum-weight subset that
does not exceed a given weight W .
Formally, we wish to determine a subset T ⊆ S that maximizes
Σi∈T wi , subject to Σi∈T wi ≤ W .
How is this similar to the 0/1 Knapsack problem?
Can you solve this using the same algorithm?
Licensed under CSE 221: Algorithms 41 / 53
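Answering the questions above: yes, Subset Sums is the 0/1 Knapsack problem with each item's value equal to its weight (vi = wi), so the same algorithm applies. A sketch using the standard 1-D space optimization of the table M (the function name and parameters are assumptions):

```python
def subset_sum(weights, W):
    """Maximum total weight of a subset of `weights` not exceeding W.
    Same recurrence as 0/1 knapsack with v_i = w_i for every item."""
    M = [0] * (W + 1)                    # M[w]: best achievable sum <= w
    for wi in weights:
        for w in range(W, wi - 1, -1):   # descending w: each item used once
            M[w] = max(M[w], wi + M[w - wi])
    return M[W]
```

For example, subset_sum([2, 3, 4, 5], 9) returns 9 (e.g. the subset {2, 3, 4}).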
Contents
1 Introduction
Memoization
Dynamic programming
Weighted interval scheduling problem
0/1 Knapsack problem
Coin changing problem
What problems can be solved by DP?
Conclusion
Licensed under CSE 221: Algorithms 42 / 53
Coin changing problem
Definition
Given coin denominations in C = {ci }, make change for a given
amount A with the minimum number of coins.
Example
Coin denominations C = {12, 5, 1}; amount to change A = 15.
1 Choose zero 12-paisa coins, so the remaining amount is 15.
2 Choose three 5-paisa coins, so the remaining amount is 15 − 3 · 5 = 0.
Solution: 3 coins.
Questions
What is the natural search space? Does this problem have a
Dynamic Programming solution? If so, how do we develop it?
Licensed under CSE 221: Algorithms 43 / 53
Developing a recursive solution
Coin denominations C = {12, 5, 1}; amount to change A = 15.
The best combination of coins for 15 paisa must be one of the
following:
1 Best combination for 15 − 12 = 3 paisa, plus a 12 paisa coin.
2 Best combination for 15 − 5 = 10 paisa, plus a 5 paisa coin.
3 Best combination for 15 − 1 = 14 paisa, plus a 1 paisa coin.
Since we’re minimizing the number of coins, the best
combination would be the minimum of these three choices.
By recursively solving for the best combination, this can be
generalized to |C| denominations to make change for any
amount A.
What are the subproblems?
Licensed under CSE 221: Algorithms 44 / 53
Developing a recursive solution (continued)
If OPT(p) is the minimum number of coins needed to make change
for amount p with denominations C = {c1, c2, . . . , ck}, then:
The coin ci chosen at any step must be no larger than p (ci ≤ p),
the amount left at that point.
Once we choose ci ≤ p, OPT(p) = 1 + OPT(p − ci ), since we
have to find the best combination for the remaining amount
(picking a coin smaller than the amount at each step).
Since we don’t know which coin would be chosen, we have to
search all |C| denominations and find the minimum.
The number of coins for 0 amount is 0.
Recurrence
OPT(p) =
0 if p = 0,
min{1 + OPT(p − ci ) : ci ≤ p} if p > 0.
Licensed under CSE 221: Algorithms 45 / 53
A recursive algorithm
Change(n, C)
1 if n = 0
2 then return 0
3 else min ← ∞
4 for i ← 1 to |C|
5 do if ci ≤ n and 1 + Change(n − ci , C) < min
6 then min ← 1 + Change(n − ci , C)
7 return min
The initial call is Change(A, C).
The tree grows very rapidly, leading to exponential running
time.
There are many overlapping subproblems, so the obvious
choice is to memoize the recursion.
Licensed under CSE 221: Algorithms 46 / 53
Memoizing the recursion
M-Change(n, C)
1 if n = 0
2 then return 0
3 else if M[n] is empty
4 then min ← ∞
5 for i ← 1 to |C|
6 do if ci ≤ n and
1 + M-Change(n − ci , C) < min
7 then min ← 1 + M-Change(n − ci , C)
8 M[n] ← min
9 return M[n]
Each entry M[n] gets filled in only once, in Θ(|C|) time,
and there are n + 1 entries, so M-Change(n, C) takes
Θ(n|C|) time.
Another pseudo-polynomial problem!
Licensed under CSE 221: Algorithms 47 / 53
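A Python transcription of M-Change (a sketch: a dictionary stands in for the array M, and the inner helper's name is an assumption):

```python
def m_change(A, C):
    """Memoized coin changing: minimum number of coins from C summing to A.
    Assumes A can be changed exactly (e.g. C contains a 1-unit coin)."""
    M = {}  # memo: amount -> minimum number of coins

    def rec(n):
        if n == 0:
            return 0
        if n not in M:                   # M[n] is empty
            best = float("inf")
            for c in C:
                if c <= n:               # only coins no larger than n
                    best = min(best, 1 + rec(n - c))
            M[n] = best
        return M[n]

    return rec(A)
```

For example, m_change(15, [12, 5, 1]) returns 3 (three 5-paisa coins).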
Developing a Dynamic Programming algorithm
Change(n, C)
M = [0 . . n], S = [0 . . n]
1 M[0] ← 0 // no amount to change
2 for p ← 1 to n
3 do min ← ∞
4 for i ← 1 to |C|
5 do if ci ≤ p and 1 + M[p − ci ] < min
6 then min ← 1 + M[p − ci ]
7 coin ← i
8 M[p] ← min
9 S[p] ← coin
10 return M and S
M[p] for all 0 ≤ p ≤ n – the minimum number of coins needed
to make change for p paise.
S[p] for all 0 ≤ p ≤ n – the first coin chosen in computing an
optimal solution for making change for p paise.
Licensed under CSE 221: Algorithms 48 / 53
Computing a solution in addition to its values
The S array in the algorithm “remembers” the first coin we
use when computing an optimal value for a given amount.
We go backwards using S[n] until n = 0 and find the coin that
was added at each step.
Coins(S, C, n)
1 while n > 0
2 do Output S[n]
3 n ← n − cS[n]
Licensed under CSE 221: Algorithms 49 / 53
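The tabular Change and the walk-back in Coins combine into one Python sketch (0-based lists; here S[p] stores the chosen coin's value rather than its index, which simplifies the walk-back):

```python
def change(A, C):
    """Bottom-up coin changing: (min number of coins for A, the coins used)."""
    INF = float("inf")
    M = [0] + [INF] * A        # M[p]: minimum number of coins for amount p
    S = [0] * (A + 1)          # S[p]: first coin in an optimal solution for p
    for p in range(1, A + 1):
        for c in C:
            if c <= p and 1 + M[p - c] < M[p]:
                M[p] = 1 + M[p - c]
                S[p] = c
    coins = []                 # Coins(S, C, n): walk back until n = 0
    p = A
    while p > 0:
        coins.append(S[p])
        p -= S[p]
    return M[A], coins
```

Here change(15, [12, 5, 1]) returns (3, [5, 5, 5]); the greedy choice of starting with a 12-paisa coin would need four coins (12 + 1 + 1 + 1).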
Contents
1 Introduction
Memoization
Dynamic programming
Weighted interval scheduling problem
0/1 Knapsack problem
Coin changing problem
What problems can be solved by DP?
Conclusion
Licensed under CSE 221: Algorithms 50 / 53
Problem types solved by Dynamic Programming
The most important part of DP is to set up the subproblem
structure.
DP is not applicable to all optimization problems.
If a problem has the following properties, then it’s likely to
have a dynamic programming solution.
Polynomially many subproblems The total number of
subproblems should be a polynomial, or else DP
may not provide an efficient solution.
Subproblem optimality If the optimal solution to the entire
problem contains optimal solutions to the
subproblems, then the problem has the subproblem
optimality property, also called the principle of
optimality.
Licensed under CSE 221: Algorithms 51 / 53
Dynamic Programming highlights
Dynamic Programming, just like Memoization, avoids
computing solutions to overlapping subproblems by saving
intermediate results, and thus both require space for the
“table”.
Dynamic Programming is a bottom-up technique: it finds the
solution by starting from the base case(s) and working its way
upwards.
Developing a Dynamic Programming solution often requires
some thought into the subproblems, especially how to find the
natural order in which to solve the subproblems.
Unlike Memoization, which solves only the needed
subproblems, DP solves all the subproblems, because it does it
bottom-up.
On the other hand, Dynamic Programming may be much more
efficient because it is iterative, whereas Memoization must pay
the (often significant) overhead of recursion.
Licensed under CSE 221: Algorithms 52 / 53
Conclusion
Memoization is the top-down technique, and dynamic
programming is a bottom-up technique.
The key to Dynamic programming is in “intelligent” recursion
(the hard part), not in filling up the table (the easy part).
Dynamic Programming has the potential to transform
exponential-time brute-force solutions into polynomial-time
algorithms.
Greed does not pay, Dynamic Programming does!
Licensed under CSE 221: Algorithms 53 / 53