Dynamic Programming vs Divide-and-Conquer
Last Updated: 22 Feb, 2023
In this article I'll try to explain the differences and similarities between the dynamic programming and divide-and-conquer approaches based on two examples: binary search and minimum edit distance (Levenshtein distance).
The Problem
When I started to learn algorithms it was hard for me to understand the main idea of dynamic programming (DP) and how it differs from the divide-and-conquer (DC) approach. When it comes to comparing those two paradigms, the Fibonacci function usually comes to the rescue as a great example. But when we solve the same problem with both the DP and DC approaches to explain each of them, it feels to me like we may lose a valuable detail that could help us catch the difference faster. And this detail tells us that each technique serves best for different types of problems.
I'm still in the process of understanding the difference between DP and DC, and I can't say that I've fully grasped the concepts so far. But I hope this article will shed some extra light and help you take another step in learning such valuable algorithm paradigms as dynamic programming and divide-and-conquer.
Dynamic Programming and Divide-and-Conquer Similarities
As I see it for now, dynamic programming is an extension of the divide-and-conquer paradigm.
I would not treat them as something completely different, because they both work by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.
So why do we still have different paradigm names, and why did I call dynamic programming an extension? It is because the dynamic programming approach may be applied to a problem only if the problem has certain restrictions or prerequisites. Beyond that, dynamic programming extends the divide-and-conquer approach with a memoization or tabulation technique.
Let’s go step by step…
Dynamic Programming Prerequisites/Restrictions
As we've just discovered, there are two key attributes that a divide and conquer problem must have in order for dynamic programming to be applicable:
- Optimal substructure: an optimal solution to the problem can be constructed from optimal solutions of its sub-problems.
- Overlapping sub-problems: the problem is broken down into sub-problems that are reused several times, instead of always generating brand new ones.
Once these two conditions are met we can say that this divide and conquer problem may be solved using the dynamic programming approach.
Dynamic Programming Extension for Divide and Conquer
The dynamic programming approach extends the divide and conquer approach with two techniques (memoization and tabulation) that both serve the purpose of storing and re-using sub-problem solutions, which may drastically improve performance. For example, the naive recursive implementation of the Fibonacci function has a time complexity of O(2^n), while a DP solution does the same in only O(n) time.
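For reference, here is a minimal sketch of that naive recursive version (the function name naiveFib is illustrative, not part of the examples below):
// Naive recursion: the same sub-problems (e.g. fib(n-2)) are recomputed
// many times, which is what makes the running time exponential.
function naiveFib(n) {
  if (n < 2) {
    return n;
  }
  return naiveFib(n - 2) + naiveFib(n - 1);
}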
Memoization (top-down cache filling) refers to the technique of caching and reusing previously computed results. The memoized fib function would thus look like this:
// Shared cache of already computed Fibonacci numbers.
const mem = [];

function memFib(n) {
  if (mem[n] === undefined) {
    // Compute the value only once and store it in the cache.
    mem[n] = (n < 2) ? n : memFib(n - 2) + memFib(n - 1);
  }
  return mem[n];
}
Tabulation (bottom-up cache filling) is similar but focuses on filling the entries of the cache. Computing the values in the cache is easiest done iteratively. The tabulation version of fib would look like this:
function tabFib(n) {
  // Fill the cache bottom-up, starting from the base cases.
  const mem = [0, 1];
  for (let i = 2; i <= n; i += 1) {
    mem[i] = mem[i - 2] + mem[i - 1];
  }
  return mem[n];
}
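A quick sanity check (illustrative calls, assuming both functions above are defined in the same scope):
// Both approaches produce the same value, only the order of cache filling differs.
console.log(memFib(10)); // 55
console.log(tabFib(10)); // 55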
The main idea to grasp here is that because our divide and conquer problem has overlapping sub-problems, caching the sub-problem solutions becomes possible, and thus memoization/tabulation step onto the scene.
So What Is the Difference Between DP and DC After All?
Since we're now familiar with the DP prerequisites and its methodologies, we're ready to put everything mentioned above into one picture.
Dynamic programming and divide and conquer paradigms dependency
Let's try to solve some problems using the DP and DC approaches to make this illustration clearer.
Divide and Conquer Example: Binary Search
Binary search algorithm, also known as half-interval search, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array; if they are unequal, the half in which the target cannot lie is eliminated and the search continues on the remaining half until the target value is found. If the search ends with the remaining half being empty, the target is not in the array.
Example
Here is a visualization of the binary search algorithm where 4 is the target value.
Binary search algorithm logic
Let's draw the same logic in the form of a decision tree.
Binary search algorithm decision tree
Here you may clearly see the divide and conquer principle of solving the problem. We're iteratively breaking the original array into sub-arrays and trying to find the required element in there.
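To make the divide step explicit, here is a recursive sketch of the same idea (recursiveBinarySearch is an illustrative name; the article's actual implementation appears below):
// Each call works on one half of the array; the two halves are
// completely independent, so no sub-problem is ever shared.
function recursiveBinarySearch(sortedArray, seekElement, startIndex = 0, endIndex = sortedArray.length - 1) {
  if (startIndex > endIndex) {
    return -1; // The remaining half is empty: the element is not there.
  }
  const middleIndex = startIndex + Math.floor((endIndex - startIndex) / 2);
  if (sortedArray[middleIndex] === seekElement) {
    return middleIndex;
  }
  return sortedArray[middleIndex] < seekElement
    ? recursiveBinarySearch(sortedArray, seekElement, middleIndex + 1, endIndex)
    : recursiveBinarySearch(sortedArray, seekElement, startIndex, middleIndex - 1);
}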
Can we apply dynamic programming to it? No. It is because there are no overlapping sub-problems: every time we split the array into completely independent parts. And according to the dynamic programming prerequisites/restrictions, the sub-problems must overlap somehow.
Normally, every time you draw a decision tree and it is actually a tree (and not a decision graph), it means that you don't have overlapping sub-problems and this is not a dynamic programming problem.
The Code
Here is the complete source code of the binary search function:
function binarySearch(sortedArray, seekElement) {
  let startIndex = 0;
  let endIndex = sortedArray.length - 1;

  while (startIndex <= endIndex) {
    const middleIndex = startIndex + Math.floor((endIndex - startIndex) / 2);

    // If we've found the element just return its position.
    if (sortedArray[middleIndex] === seekElement) {
      return middleIndex;
    }

    // Decide which half to choose: left or right one.
    if (sortedArray[middleIndex] < seekElement) {
      // Go to the right half of the array.
      startIndex = middleIndex + 1;
    } else {
      // Go to the left half of the array.
      endIndex = middleIndex - 1;
    }
  }

  return -1;
}
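For example (illustrative calls):
// The input array must already be sorted.
console.log(binarySearch([1, 2, 4, 7, 10, 15], 4)); // 2 (index of the value 4)
console.log(binarySearch([1, 2, 4, 7, 10, 15], 8)); // -1 (not found)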
Dynamic Programming Example: Minimum Edit Distance
Normally when it comes to dynamic programming examples, the Fibonacci number algorithm is taken by default. But let's take a slightly more complex algorithm to have some variety that should help us grasp the concept.
Minimum Edit Distance (or Levenshtein Distance) is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other.
Example
For example, the Levenshtein distance between "kitten" and "sitting" is 3, since the following three edits change one into the other, and there is no way to do it with fewer than three edits:
- kitten > sitten (substitute "s" for "k")
- sitten > sittin (substitute "i" for "e")
- sittin > sitting (insert "g" at the end)
Applications
This has a wide range of applications, for instance, spell checkers, correction systems for optical character recognition, fuzzy string searching, and software to assist natural language translation based on translation memory.
Mathematical Definition
Mathematically, the Levenshtein distance between two strings a and b (of length |a| and |b| respectively) is given by the function lev(|a|, |b|), where

lev(i, j) = max(i, j)  if min(i, j) = 0, and otherwise
lev(i, j) = min(
  lev(i - 1, j) + 1,
  lev(i, j - 1) + 1,
  lev(i - 1, j - 1) + (0 if a[i] = b[j], otherwise 1)
)
Note that the first element in the minimum corresponds to deletion (from a to b), the second to insertion and the third to match or mismatch, depending on whether the respective symbols are the same.
Explanation
Ok, let's try to figure out what that formula is saying. Let's take a simple example: finding the minimum edit distance between the strings ME and MY. Intuitively you already know that the minimum edit distance here is 1 operation, and this operation is "replace E with Y". But let's try to formalize it in the form of an algorithm in order to be able to handle more complex examples like transforming Saturday into Sunday.
To apply the formula to the ME>MY transformation we need to know the minimum edit distances of the ME>M, M>MY and M>M transformations beforehand. Then we pick the minimal one and add one operation to transform the last letters E>Y.
So we can already see the recursive nature of the solution here: the minimum edit distance of the ME>MY transformation is calculated based on three previously possible transformations. Thus we may say that this is a divide and conquer algorithm.
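Plugging in the numbers as a hand-worked check against the formula above (the three sub-distances are lev(M, MY) = 1, lev(ME, M) = 1 and lev(M, M) = 0):

lev(ME, MY) = min( lev(M, MY) + 1, lev(ME, M) + 1, lev(M, M) + 1 ) = min(2, 2, 1) = 1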
To explain this further let’s draw the following matrix.
Simple example of finding minimum edit distance between ME and MY strings
Cell (0, 1) contains red number 1. It means that we need 1 operation to transform M to empty string: delete M. This is why this number is red.
Cell (0, 2) contains red number 2. It means that we need 2 operations to transform ME to empty string: delete E, delete M.
Cell (1, 0) contains green number 1. It means that we need 1 operation to transform empty string to M: insert M. This is why this number is green.
Cell (2, 0) contains green number 2. It means that we need 2 operations to transform empty string to MY: insert Y, insert M.
Cell (1, 1) contains number 0. It means that it costs nothing to transform M to M.
Cell (1, 2) contains red number 1. It means that we need 1 operation to transform ME to M: delete E.
And so on…
This looks easy for such a small matrix as ours (it is only 3x3). But how could we calculate all those numbers for bigger matrices (let's say a 9x7 one, for the Saturday>Sunday transformation)?
The good news is that, according to the formula, you only need the three adjacent cells (i-1, j), (i-1, j-1), and (i, j-1) to calculate the number for the current cell (i, j). All we need to do is add 1 to the values of cells (i-1, j) and (i, j-1), add 1 to the value of cell (i-1, j-1) only if the letters in the i-th row and j-th column differ (and 0 if they match), and then take the minimum of those three numbers.
So once again you may clearly see the recursive nature of the problem.
Recursive nature of minimum edit distance problem
Ok, we've just found out that we're dealing with a divide and conquer problem here. But can we apply the dynamic programming approach to it? Does this problem satisfy our overlapping sub-problems and optimal substructure restrictions? Yes. Let's see it from the decision graph.
Decision graph for minimum edit distance with overlapping sub-problems
First of all, this is not a decision tree. It is a decision graph. You may see a number of overlapping sub-problems in the picture that are marked in red. Also, there is no way to reduce the number of operations and make it less than the minimum of those three adjacent cells from the formula.
Also, you may notice that each cell number in the matrix is calculated based on previous ones. Thus the tabulation technique (filling the cache in a bottom-up direction) is applied here. You'll see it in the code example below.
Applying these principles further, we may solve more complicated cases like the Saturday>Sunday transformation.
Minimum edit distance to convert Saturday to Sunday
The Code
Here is the complete source code of the minimum edit distance function:
function levenshteinDistance(a, b) {
  // Create a matrix of size (b.length + 1) x (a.length + 1).
  const distanceMatrix = Array(b.length + 1)
    .fill(null)
    .map(
      () => Array(a.length + 1).fill(null),
    );

  // Distance from each prefix of a to an empty string (deletions only).
  for (let i = 0; i <= a.length; i += 1) {
    distanceMatrix[0][i] = i;
  }

  // Distance from an empty string to each prefix of b (insertions only).
  for (let j = 0; j <= b.length; j += 1) {
    distanceMatrix[j][0] = j;
  }

  // Fill the rest of the matrix bottom-up (tabulation).
  for (let j = 1; j <= b.length; j += 1) {
    for (let i = 1; i <= a.length; i += 1) {
      const indicator = a[i - 1] === b[j - 1] ? 0 : 1;
      distanceMatrix[j][i] = Math.min(
        distanceMatrix[j][i - 1] + 1, // deletion
        distanceMatrix[j - 1][i] + 1, // insertion
        distanceMatrix[j - 1][i - 1] + indicator, // substitution
      );
    }
  }

  return distanceMatrix[b.length][a.length];
}
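A quick check against the examples discussed above (illustrative calls):
console.log(levenshteinDistance('ME', 'MY')); // 1
console.log(levenshteinDistance('kitten', 'sitting')); // 3
console.log(levenshteinDistance('Saturday', 'Sunday')); // 3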
Conclusion
In this article we have compared two algorithmic approaches: dynamic programming and divide-and-conquer. We've found out that dynamic programming is based on the divide and conquer principle and may be applied only if the problem has overlapping sub-problems and optimal substructure (as in the Levenshtein distance case). Dynamic programming then uses the memoization or tabulation technique to store solutions of overlapping sub-problems for later usage.
I hope this article hasn’t brought you more confusion but rather shed some light on these two important algorithmic concepts! :)
You may find more examples of divide and conquer and dynamic programming problems with explanations, comments and test cases in the JavaScript Algorithms and Data Structures repository.
Happy coding!
Dynamic programming and divide-and-conquer are two commonly used algorithm design techniques that can be used to solve a variety of problems.
- Dynamic Programming is a technique used for solving problems by breaking them down into smaller overlapping subproblems and storing the results of these subproblems to avoid redundant computation. It is used when the problem exhibits optimal substructure, meaning that the optimal solution can be constructed from optimal solutions of subproblems. It is well-suited for problems that have an inherent sequence, such as making a sequence of decisions. Some common examples of problems solved using dynamic programming include finding the longest common subsequence, the shortest path in a graph, and the Fibonacci sequence.
- Divide-and-Conquer is a technique used for solving problems by breaking them down into smaller subproblems that can be solved independently and then combining these solutions to obtain the solution to the original problem. It is used when the problem can be divided into smaller subproblems that are similar to the original problem, and the solution to the subproblems can be combined to form a solution to the original problem. Some common examples of problems solved using divide-and-conquer include the quick sort algorithm, binary search, and the merge sort algorithm.
In summary, dynamic programming is used when the problem has optimal substructure and overlapping subproblems (and can be solved bottom-up with tabulation or top-down with memoization), while divide-and-conquer is used when the problem can be divided into smaller, independent subproblems that are similar to the original problem and whose solutions are then combined.