Dynamic Programming 101 | Types, Examples, and Use-Cases

Dynamic programming (often abbreviated as DP) is a method for solving complex problems by breaking them down into simpler, smaller parts. That sounds similar to so many other things, right? Well, let's start with an example.
Let's say you're planning a road trip across the country. You've got a list of cities you want to visit, but you can't decide on the best route. You want to visit all the cities without wasting too much time driving back and forth.
Here's how you could plan your trip in a better way:
Break the problem into smaller subproblems: The subproblems in this case are the routes from one city to another. You need to determine the time or distance between each pair of cities.
Solve each subproblem and store the solution: You can use a mapping app to find the shortest route (in time or distance) between each pair of cities. You then store this information for later use.
Use the solutions of the subproblems to solve the original problem: Now, using the information you've gathered and stored, you can construct the shortest possible route that visits all the cities. You start from your home city and then go to the nearest city. From there, you go to the nearest city that you haven't visited yet, and so on.
This is just a simple example of how dynamic programming works, but it gives you an idea of the process: breaking a complex problem down into simpler parts, solving each part, storing the solutions, and then combining the solutions to solve the original problem.
What is Dynamic Programming?
Dynamic Programming (DP) is a method for solving complex problems by breaking them down into simpler, overlapping subproblems, solving each subproblem only once, and storing their solutions to avoid repetitive computations.
It's like tackling a giant jigsaw puzzle by piecing together smaller sections first.
But here's where DP gets really clever. It stores the solution to each of these smaller parts to avoid doing the same work over and over again (just as you would store the shortest routes between cities in the road trip example). So, with dynamic programming, you're not just working harder, you're working smarter!
When to use Dynamic Programming?
So, how do you know when to use dynamic programming? There are two key hallmarks of a problem that's ripe for a DP approach:
- Overlapping Subproblems: This is when a problem can be divided into smaller subproblems whose solutions are needed multiple times. Let's go back to the road trip example. Imagine that the leg from City A to City B appears in several different routes. Instead of calculating the distance between the two cities every single time we map out a route, we can store that distance and reuse it whenever needed. That's an overlapping subproblem.
- Optimal Substructure: This is when an optimal solution can be constructed from the optimal solutions of its subproblems. Say we've figured out the shortest route from City A to City B, and the shortest route from City B to City C. The shortest route from City A to City C via City B is simply the combination of those two shortest routes. By knowing the optimal solutions (shortest routes) to the subproblems, we can determine the optimal solution to the original problem. That's optimal substructure (a small sketch after this list makes it concrete).
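To make optimal substructure concrete, here's a tiny Python sketch. The city names and distances are made up purely for illustration; the point is that the best A-to-C route through B is assembled from the two optimal sub-routes we already know:

# Hypothetical shortest legs, in km (made-up numbers for illustration)
shortest = {("A", "B"): 120, ("B", "C"): 200}
# The best A -> C route via B reuses the optimal solutions to its subproblems
shortest[("A", "C")] = shortest[("A", "B")] + shortest[("B", "C")]
print(shortest[("A", "C")])  # Outputs: 320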
Practical Application: The Fibonacci Sequence
To really get a grip on dynamic programming, let's explore a classic example: The Fibonacci sequence.
It is a series of numbers in which each number is the sum of the two preceding ones, usually starting with 0 and 1.
Fibonacci Series: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34…and so on.
Mathematically, we could write each term using the formula:
F(n) = F(n-1) + F(n-2),
with the base values F(0) = 0 and F(1) = 1. We follow this relationship to calculate the remaining numbers. For example, F(6) is the sum of F(4) and F(5), which equals 8.
Picture the recursion tree for F(10). Going by the formula, F(10) is the sum of F(8) and F(9). Similarly, F(9) is the sum of the subproblems F(7) and F(8). As you can see, F(8) is an overlapping subproblem here.
If we naively recompute F(8) in the right subtree as well, we waste resources and hurt overall performance.
The better solution is to store the results of already-computed subproblems in an array. First, we solve F(6) and F(7), which gives us F(8); we store that solution in the array, and so on. When we later need F(9), the solutions to F(7) and F(8) are already stored, so we simply reuse them. F(10) can then be built from the stored solutions to F(8) and F(9).
Similarly, at each iteration we store the solutions so we don’t have to solve them again and again. This is the main attribute of dynamic programming.
If you try to compute this sequence with a straightforward recursive function, you'll end up doing a lot of unnecessary work.
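To see just how much work that wastes, here's a small sketch (not part of the original example) that counts how many times a plain recursive implementation solves each subproblem while computing F(10):

from collections import Counter

calls = Counter()

def naive_fib(n):
    calls[n] += 1  # record every time this subproblem is solved
    if n < 2:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

naive_fib(10)
print(calls[8])   # Outputs: 2  (F(8) is solved twice)
print(calls[5])   # Outputs: 8  (F(5) is solved eight times)
print(calls[1])   # Outputs: 55 (F(1) is solved fifty-five times)

Dynamic programming removes exactly this repetition: each subproblem is solved once and its answer is reused.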
Here's a simple Python implementation that uses DP to compute the nth Fibonacci number:
def fibonacci(n):
    # Create an array of size (n + 1) to store the computed values
    dp = [0, 1] + [0] * (n - 1)
    for i in range(2, n + 1):
        # Compute the ith Fibonacci number
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fibonacci(10))  # Outputs: 55
Step-by-Step Approach to DP
Let's explore how to implement dynamic programming step by step (a short sketch applying these steps to a fresh example follows the list below):
- Grasp the Problem: Understand exactly what is being asked, what the inputs are, and what the answer should look like.
- Find the Overlapping Subproblems: Identify the smaller questions that get asked over and over again.
- Compute and Store Solutions: Solve each subproblem once and save its result so it never has to be recomputed.
- Construct the Solution to the Main Problem: Combine the stored results to answer the original question.
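As a quick illustration of these four steps, here's a sketch applied to the classic climbing-stairs puzzle (how many distinct ways can you climb n steps, taking 1 or 2 steps at a time?). That puzzle isn't covered in this article, but it follows the same recipe as Fibonacci:

def count_ways(n):
    # Steps 1 and 2: the subproblem ways(i) is "how many ways to climb i steps",
    # and ways(i) = ways(i - 1) + ways(i - 2), so the subproblems overlap
    ways = [0] * (n + 1)
    ways[0] = 1  # one way to climb zero steps: do nothing
    if n >= 1:
        ways[1] = 1
    # Step 3: compute and store each subproblem's solution exactly once
    for i in range(2, n + 1):
        ways[i] = ways[i - 1] + ways[i - 2]
    # Step 4: the stored solutions combine into the answer for n
    return ways[n]

print(count_ways(5))  # Outputs: 8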
Types of Dynamic Programming
Dynamic programming is divided into two main approaches: top-down (memoization) and bottom-up (tabulation). Both of these methods help in solving complex problems more efficiently by storing and reusing solutions of overlapping subproblems, but they differ in the way they go about it.
Let's dive into these two approaches:
Top-Down DP (Memoization)
In the top-down approach, also known as memoization, we start with the original problem and break it down into subproblems. Think of it like starting at the top of a tree and working your way down to the leaves.
Here, we start solving the larger problem first, but whenever we encounter a smaller subproblem, we solve it and store its solution. If we encounter the same subproblem again, instead of re-solving it, we retrieve and reuse the stored solution, saving computation time. The solutions to the subproblems are typically stored in an array or a hash table for quick access.
Let's revisit the Fibonacci sequence example:
def fibonacci(n, memo={}):
    # Base cases: F(0) = 0 and F(1) = 1
    if n < 2:
        return n
    # Reuse the stored result if this subproblem was already solved
    if n in memo:
        return memo[n]
    # Otherwise solve it, store it, and return it
    memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    return memo[n]

print(fibonacci(10))  # Outputs: 55
Here, memo is a dictionary that stores the previously computed numbers. Before we compute a new Fibonacci number, we first check if it's already in memo. If it is, we just return the stored value. If it's not, we compute it, store it in memo, and then return it.
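As a side note, Python's standard library can handle this bookkeeping for you: the functools.lru_cache decorator caches a function's results automatically, so the recursive version below behaves much like the hand-written memo dictionary (this variant isn't part of the original example):

from functools import lru_cache

@lru_cache(maxsize=None)  # cache every result, i.e. plain memoization
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # Outputs: 55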
Bottom-Up DP (Tabulation)
The bottom-up approach, also known as tabulation, takes the opposite direction. We start with the simplest subproblems and use their solutions to solve more complex subproblems, working our way up to the original problem.
Here, we fill up a table (hence the name "tabulation") in a manner that uses the previously filled values in the table. This way, by the time we come to the problem at hand, we already have the solutions to the subproblems we need.
Let's use the Fibonacci sequence again to illustrate the bottom-up approach:
def fibonacci(n):
    fib_table = [0, 1] + [0] * (n - 1)
    for i in range(2, n + 1):
        fib_table[i] = fib_table[i - 1] + fib_table[i - 2]
    return fib_table[n]

print(fibonacci(10))  # Outputs: 55
In this case, fib_table is an array that stores the Fibonacci numbers in order. We start by filling in the first two numbers (0 and 1), and then we iteratively compute the rest from these initial numbers.
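A small refinement worth knowing (not shown above) is that the bottom-up loop only ever looks at the previous two entries of the table, so you can keep just those two values instead of the whole array:

def fibonacci(n):
    prev, curr = 0, 1  # F(0) and F(1)
    for _ in range(n):
        # Slide the two-value window forward one position
        prev, curr = curr, prev + curr
    return prev

print(fibonacci(10))  # Outputs: 55

The running time is the same, but the extra memory drops from O(n) to O(1).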
Which approach to choose?
Both top-down and bottom-up dynamic programming can be useful, and your choice depends on the problem at hand and the specific requirements of your program.
The top-down approach might be easier to understand because it follows the natural logic of the problem, but it can involve a lot of recursion and may have a larger memory footprint due to the call stack.
On the other hand, the bottom-up approach can be more efficient because it avoids recursion and uses a loop instead, but it might require a better understanding of the problem to build the solution iteratively.
Final Thoughts
Dynamic programming is a little like magic: It turns a daunting problem into a series of manageable tasks, making the impossible possible. But unlike a magic trick, the method behind dynamic programming is logical and grounded in sound reasoning.
Sure, getting the hang of it might take some time. You'll need to practice spotting overlapping subproblems and constructing optimal solutions. But once you've mastered these skills, you'll be able to tackle a wide range of problems with newfound efficiency.
Whether you're a beginner or a professional developer, understanding DP is a valuable addition to your programming toolkit. Remember, the journey of a thousand miles begins with a single step – or in our case, a single subproblem.
Cheers and Happy Coding!