What is Binary Search Algorithm and How it Speeds Up Your Searches 10x!
By Rohit Sharma
Updated on Jun 26, 2025 | 28 min read | 248.56K+ views
Did You Know? Binary search can find an item in a sorted list of 1,000,000 elements with just about 20 comparisons, while a linear search might require up to 1,000,000 comparisons for the same task.
When dealing with huge lists of data, how quickly can you find what you need? Imagine having a sorted list of names or numbers and wanting to locate a specific one. If you scanned each entry one by one from the start, it could take a long time for large lists.
But there’s a faster method: binary search algorithm. This algorithm dramatically cuts down the search time by halving the search space at each step, giving it a much smaller time complexity of O(log n).
In this comprehensive guide, you’ll explore what the binary search algorithm is and its time and space complexities. You'll also learn related concepts like binary trees, advantages, limitations, and applications.
Binary search algorithm is based on the divide-and-conquer principle. It works by repeatedly dividing a sorted dataset in half to narrow down the possible location of a target value.
Unlike scanning sequentially through every element (as the linear search algorithm does), binary search jumps to the middle of the remaining search range and uses that midpoint to decide which half of the data to keep searching. This way, with each comparison, it discards half of the items from consideration, zooming in on the target very quickly.
In 2025, professionals who can use advanced algorithms to streamline business operations will be in high demand. If you're looking to develop skills in in-demand programming languages, here are some top-rated courses to help you get there:
Let’s break down the basic idea through an example.
Suppose you have a sorted list of numbers from 1 to 100 and are looking for 73. A linear scan might check 1, 2, 3, … and so on until 73, potentially doing 73 comparisons.
Binary search takes a smarter route: it first checks the middle of the range, decides whether 73 lies above or below that midpoint, and then repeats the same check on the remaining half.
In just a handful of steps (in this case, about 7 comparisons) you find 73, whereas a linear search might have taken 73 steps. This simple example highlights how binary search drastically reduces the number of checks by halving the search range each time.
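If you want to see those halving steps concretely, here is a minimal Python sketch (the function name and the (low + high) // 2 midpoint convention are illustrative choices, not part of the original example) that prints each comparison while searching for 73 in the list 1 to 100:

def trace_binary_search(values, target):
    """Print each midpoint check while binary searching a sorted list."""
    low, high = 0, len(values) - 1
    comparisons = 0
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1
        print(f"Check #{comparisons}: values[{mid}] = {values[mid]}")
        if values[mid] == target:
            print(f"Found {target} after {comparisons} comparisons")
            return mid
        elif values[mid] < target:
            low = mid + 1   # discard the lower half
        else:
            high = mid - 1  # discard the upper half
    print(f"{target} not found after {comparisons} comparisons")
    return -1

trace_binary_search(list(range(1, 101)), 73)

Depending on the exact midpoint convention, the count shifts by a step or two, but it stays close to log₂(100) ≈ 7.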
Please Note: The array (or list) must be sorted for binary search to work – that’s the key requirement. If the data isn’t sorted, you can’t safely discard half the elements because the target could be anywhere in an unsorted list (We’ll talk later about what to do in unsorted cases and other limitations).
Also Read: Effortless Array Sorting in Python: A Step-by-Step Guide
The history of the binary search algorithm dates back to ancient times when humans were developing manual methods to search for specific elements in a sorted list. While the formal algorithmic description we know today emerged in the field of computer science, the fundamental concept has roots in various historical practices.
Here’s a timeline of how binary search evolved, a bonus section for anyone curious about how the algorithm developed into the form we use today.
1. Ancient Methods
The basic idea of binary search can be traced back to ancient methods of searching for elements in a sorted list. In ancient manuscripts or books, if someone was looking for a particular passage or information, they might start by opening the book in the middle.
Based on whether the target passage was before or after the midpoint, they would then eliminate half of the remaining pages and repeat the process until they found the desired information.
2. John Mauchly’s Early Use (1946)
The concept of binary search was formalized in the field of electronic computing during the mid-20th century. John Mauchly used a binary search algorithm in 1946. The ENIAC, one of the earliest electronic general-purpose computers, was programmed to perform a binary search on sorted punched cards.
3. Algorithmic Description by Derrick Henry Lehmer (1948)
The algorithmic description of binary search as we recognize it today is credited to Derrick Henry Lehmer, an American mathematician and computer scientist.
Lehmer published a paper in 1948 titled “Teaching an Electronic Computer to Play a Game,” where he described the binary search algorithm as part of a guessing game played on the SWAC (Standards Western Automatic Computer) computer.
4. Inclusion in Sorting and Searching Libraries
As computers evolved, binary search became a fundamental part of sorting and searching libraries. Its efficiency in quickly locating elements in a sorted dataset made it a staple in computer science and programming.
Sorting and searching algorithms, including binary search, played a crucial role in the development of early programming languages and paved the way for more sophisticated algorithms.
5. Algorithmic Analysis and Refinement
Over the years, researchers and computer scientists have analyzed the time and space complexity of the binary search algorithm, leading to a better understanding of its performance characteristics. Algorithmic refinements and adaptations have been proposed to address specific use cases and improve efficiency.
6. Integration into Standard Libraries and Programming Languages
As computing became more widespread, binary search found its way into standard libraries and programming languages. It became a foundational tool for developers working with sorted data structures, arrays, and other collections.
7. Continued Relevance
Despite its ancient roots, the binary search algorithm remains relevant in modern computer science and software development. Its logarithmic time complexity makes it particularly valuable for efficiently searching large datasets, and it continues to be taught in introductory computer science courses.
The Binary Search Algorithm works by repeatedly dividing the search space in half to locate a target value in a sorted array. It starts with the entire array as the search range, calculates the midpoint, and compares the middle element with the target. If the middle element is not the target, the search range is halved based on whether the target is smaller or larger than the middle element. This step-by-step process continues until the target is found or the search range is exhausted.
To formalize the process, here are the typical steps of binary search on a sorted array:
Set two pointers or indices: low at the first position of the array and high at the last position.
These define the current interval of the array where the target might be.
Calculate the index mid, which is roughly the average of low and high (for example, mid = (low + high) // 2 in integer math). This mid index splits the current interval into two halves.
There are three possible outcomes: the middle element equals the target (the search is done), the middle element is less than the target (the target can only lie in the right half), or the middle element is greater than the target (the target can only lie in the left half).
With the narrowed interval (either the left half or right half from the previous step), go back to step 2. Compute a new midpoint and compare again.
The loop continues until either you find the target or the low pointer crosses the high pointer (meaning the interval is empty and the target isn’t in the array). If the target isn’t found by the time the interval is empty, you conclude the item is not present in the array.
In code or pseudocode form, binary search would look something like this:
function binarySearch(array, target):
    low = 0
    high = n - 1                // where n is the length of array
    while low <= high:
        mid = (low + high) // 2
        if array[mid] == target:
            return mid          // found the target at index mid
        else if array[mid] < target:
            low = mid + 1       // target is in the upper half
        else:
            high = mid - 1      // target is in the lower half
    end while
    return -1                   // target not found
This logic applies both to iterative implementations (using a loop) and recursive ones (where the function calls itself on a half of the array).
The essence remains the same: compare the middle element with the target, discard the half that can’t contain it, and repeat. Here’s a sample run of the pseudocode above:
Input:
array = [1, 3, 5, 7, 9, 11]
target = 7
print(binarySearch(array, target))
Output:
3 (index of target 7)
Also Read: Essential Guide to Data Structures & Algorithm in Python
Talking about binary search in the abstract is useful, but seeing it in action can cement understanding. Let's examine two implementations of binary search: one iterative and one recursive.
1. Iterative Binary Search in Python
In this example, we implement a binary search function in Python using a while loop (iterative approach).
def binary_search_rakesh(rakesh_list, target_value):
    start_index = 0
    end_index = len(rakesh_list) - 1
    while start_index <= end_index:
        mid_index = (start_index + end_index) // 2
        mid_value = rakesh_list[mid_index]
        if mid_value == target_value:
            return mid_index  # target found at mid_index
        elif mid_value < target_value:
            # Target is larger, ignore left half
            start_index = mid_index + 1
        else:
            # Target is smaller, ignore right half
            end_index = mid_index - 1
    return -1  # target_value not found in rakesh_list

# Example usage:
numbers = [3, 8, 15, 16, 23, 42, 108]  # sorted list
result = binary_search_rakesh(numbers, 16)
print(result)  # Output: 3 (since 16 is at index 3 in the list)
Code Explanation:
The function binary_search_rakesh takes a sorted list and a target. It initializes start_index and end_index to the bounds of the list. Inside the loop, it calculates mid_index and retrieves the mid_value.
Then it compares mid_value with target_value: if they are equal, the function returns mid_index; if mid_value is smaller than the target, the target can only be in the right half, so start_index moves to mid_index + 1; if mid_value is larger, the target can only be in the left half, so end_index moves to mid_index - 1.
This iterative method uses a constant amount of extra space and runs in O(log n) time, as analyzed later in this guide.
Also Read: Binary Search Algorithm in Python Explained in Detail
2. Recursive Binary Search in C++
Now, let’s look at a recursive implementation, this time in C++ for variety. We’ll implement a binary search that calls itself on subarrays.
#include <iostream>
using namespace std;

int binarySearchRecursive(int arr[], int amit_low, int pooja_high, int target) {
    if (amit_low > pooja_high) {
        return -1; // target not found
    }
    int mid = amit_low + (pooja_high - amit_low) / 2;
    if (arr[mid] == target) {
        return mid; // found the target
    } else if (arr[mid] < target) {
        // search in the right half
        return binarySearchRecursive(arr, mid + 1, pooja_high, target);
    } else {
        // search in the left half
        return binarySearchRecursive(arr, amit_low, mid - 1, target);
    }
}

int main() {
    int swati_numbers[] = {2, 5, 8, 12, 16, 23, 38};
    int n = sizeof(swati_numbers) / sizeof(swati_numbers[0]);
    int targetValue = 16;
    int resultIndex = binarySearchRecursive(swati_numbers, 0, n - 1, targetValue);
    if (resultIndex != -1)
        cout << "Found at index " << resultIndex << endl;
    else
        cout << "Not found" << endl;
    return 0;
}
Code Explanation:
Here, binarySearchRecursive is a function that takes an array arr, a lower bound amit_low, an upper bound pooja_high, and the target value.
It checks the base case: if amit_low exceeds pooja_high, the range is empty, and the target isn’t found. Otherwise, it calculates mid as the average of amit_low and pooja_high.
Then it compares arr[mid] with target: if they match, mid is returned; if arr[mid] is less than target, the function recurses on the right half (mid + 1 to pooja_high); otherwise it recurses on the left half (amit_low to mid - 1).
The main function demonstrates how to use this recursive search with an example array swati_numbers. If you run this code, it should output that 16 is found at some index (in the array given, 16 is at index 4, assuming 0-based indexing, so “Found at index 4”).
Both implementations, iterative and recursive, achieve the same result and follow the same divide-and-conquer logic. The difference lies in recursion vs looping and the slight space overhead in recursion.
Also Read: Linear Search in Python Program: All You Need to Know
Now that you’re familiar with the binary search algorithm, let’s look at how it differs from linear search.
When searching for an element in a collection, two of the most common algorithms you may encounter are Linear Search and Binary Search. Each of these algorithms has distinct characteristics, strengths, and weaknesses, which make them suitable for different situations.
In terms of performance, the difference is striking. Linear search can potentially require a large number of comparisons, especially with large datasets. On the other hand, binary search is highly efficient, especially for large sorted datasets, making it the preferred method when applicable.
To better visualize the differences between these two algorithms, here’s a side-by-side comparison:
| Criteria | Linear Search | Binary Search |
| --- | --- | --- |
| Data Requirement | Works on unsorted or sorted data. | Requires sorted data for correctness. |
| Method | Checks each element one by one in order. | Continuously splits the range in half, checking midpoints. |
| Worst-Case Time | O(n) – may scan the entire list. | O(log n) – halves the search space each step. |
| Best-Case Time | O(1) – if the target is the first element. | O(1) – if the target happens to be at the first midpoint. |
| Comparison Count (example, n = 1,000,000) | ~1,000,000 checks (target last or not found). | ~20 checks (worst case). |
| Ease of Implementation | Very simple to implement. | Simple logic but requires careful handling of indices. |
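To make this comparison concrete, here is a small Python sketch (the function names and the million-element test list are illustrative, not taken from the article) that counts the comparisons each approach performs on the same sorted data:

def linear_search_count(values, target):
    """Return (index, comparisons) for a left-to-right scan."""
    for comparisons, value in enumerate(values, start=1):
        if value == target:
            return comparisons - 1, comparisons
    return -1, len(values)

def binary_search_count(values, target):
    """Return (index, comparisons) for a standard binary search."""
    low, high, comparisons = 0, len(values) - 1, 0
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1
        if values[mid] == target:
            return mid, comparisons
        if values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, comparisons

data = list(range(1_000_000))              # 0, 1, ..., 999999 (already sorted)
print(linear_search_count(data, 999_999))  # (999999, 1000000): a full scan
print(binary_search_count(data, 999_999))  # (999999, 20): about 20 checks

On this million-element list the linear scan needs a million checks in its worst case, while binary search needs about 20, which matches the table above.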
Are you a full-stack developer wanting to integrate AI into your workflow? upGrad’s AI-Driven Full-Stack Development bootcamp can help you. You’ll learn how to build AI-powered software using OpenAI, GitHub Copilot, Bolt AI & more.
Also Read: Searching in Data Structure: Different Search Algorithms and Their Applications
Next, let’s look at the time complexity of the binary search algorithm.
One of the main reasons the binary search is taught in every Intro to Algorithms course is its impressively efficient time complexity. Time complexity measures how an algorithm's running time increases as the input size (n) grows.
For binary search, the running time grows very slowly compared to linear search. In fact, every time the input size doubles, the binary search only makes one additional comparison (because log₂(2n) = log₂(n) + 1).
Let’s break down the time complexity of binary search in three scenarios: the best case (O(1)), the average case (O(log n)), and the worst case (O(log n)).
To give an intuition of how slowly log n grows: for n = 1,000 binary search needs about 10 steps, for n = 1,000,000 about 20 steps, and for n = 1,000,000,000 about 30 steps.
Compare that to linear growth: a billion elements would require a billion steps with a linear scan, but only ~30 steps with binary search, a testament to its efficiency.
Also Read: Difference Between Linear and Non-Linear Data Structures
Now, let’s discuss all three time complexity scenarios in detail.
In the best-case scenario for binary search’s time complexity, the target element sits right in the middle of the array on the first check.
For example, if your sorted array has 101 elements and your target happens to be the 51st element (the middle one), the binary search will check the middle, find it at once, and return the result.
Only 1 comparison is made. This scenario is independent of how large the array is; it’s just lucky placement. Therefore, the best-case time complexity is O(1) (constant time).
It’s worth noting that best-case doesn’t necessarily mean the element exists in the middle of a full array. If you run binary search on a range and the first mid happens to be the target, that’s a best-case event.
For instance, even if your array had 1,000,000 elements, if by chance the target was at the exact midpoint value, you’d find it in one step.
On average, binary search will still take logarithmic time. To understand why, imagine all possible positions where the target could be (including the possibility that it’s not present).
If the target’s position is uniformly random among these possibilities, you can calculate the expected number of comparisons. The math involves summing up comparisons over all cases and dividing by the number of cases.
Here’s a simpler intuitive explanation: each comparison halves the search space, so after k comparisons roughly n/(2^k) candidate positions remain.
In the average case, you’ll stop when the search space is down to 1 element (either found or not).
Set n/(2^k) = 1, which gives 2^k = n, so k = log₂(n). This means roughly log₂(n) comparisons in the average case as well.
In plain terms, you can expect binary search to be very fast for large n, usually completing only a handful of steps, even if n is in the millions or billions.
For binary search, the average case doesn’t differ from the worst case by more than a constant factor.
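One way to sanity-check this claim is to average the comparison count over every possible target. Here is an illustrative Python sketch (the counting helper is something I wrote for this purpose, and the choice of n = 4096 is arbitrary):

import math

def count_comparisons(values, target):
    """Return how many comparisons a standard binary search makes."""
    low, high, comparisons = 0, len(values) - 1, 0
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1
        if values[mid] == target:
            return comparisons
        if values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return comparisons

n = 4096                      # a power of two keeps the numbers tidy
values = list(range(n))
average = sum(count_comparisons(values, t) for t in values) / n
print(f"average comparisons: {average:.2f}, log2(n): {math.log2(n):.2f}")

The average lands just under log₂(n), consistent with the statement that the average case and worst case differ by only a small amount.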
The worst-case for binary search happens when each time you split, the target is in the very last portion you check (or not in the list at all).
For instance, if the target is the first element in the array, binary search will compare to mid (go left), compare to mid of left half (go left again), and so on until it finally narrows to the first element.
Each comparison eliminated half the array, but you went through the maximum number of halving steps. The number of comparisons in the worst case will be the number of times you can halve n until one element remains.
Setting up the equation as earlier: after k comparisons, you have n/(2^k) elements left. In the worst case, you keep going until just 1 element remains in the search interval, and you still haven’t found the target in previous steps.
That gives n/(2^k) = 1 ⇒ 2^k = n ⇒ k = log₂(n).
So about log₂(n) comparisons are done, plus one more to either find the element or conclude it’s not there. You can say the comparison count is on the order of log₂(n) (it might be ⌊log₂(n)⌋ + 1 in exact terms, but constants don’t matter for Big-O). Thus, the worst-case time complexity is O(log n).
To put it another way, every step of binary search gives you a yes/no answer that cuts the remaining possibilities in half.
If you visualize this as a decision tree, the longest path you might traverse in that decision tree is proportional to log₂(n). That’s the worst-case path (each decision discards half until none is left). Even in this worst scenario, binary search is dramatically faster than scanning everything.
One more perspective: For n = 1,048,576 (about 2^20), worst-case binary search would do at most 20 comparisons. For n = 1,099,511,627,776 (which is 2^40, over a trillion elements), worst-case would be at most 40 comparisons. That’s the power of logarithmic time.
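If you want to reproduce figures like these, a short loop over math.log2 is enough (a rough sketch; the input sizes are just examples):

import math

# Worst-case comparisons grow like log2(n): doubling n adds only about one more check.
for n in (1_000, 1_000_000, 2**20, 2**40):
    print(f"n = {n:>16,}: about {math.ceil(math.log2(n))} comparisons")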
Here’s a graphical representation of worst case scenarios of linear search and binary search:
As the chart above shows, binary search requires dramatically fewer comparisons than linear search, even as input size grows.
For example, searching 1,000 elements can take up to 1,000 comparisons with linear search but only about 10 with binary search, and at 1,000,000 elements the gap widens to 1,000,000 versus roughly 20.
This highlights just how much more efficient binary search is, especially for large datasets, and why time complexity matters.
Also Read: Searching Algorithms for Large Dataset: Best Techniques
Next, let’s look at the space complexity of binary search algorithm.
After discussing binary search time complexity, it’s also relevant to touch on space complexity – the amount of extra memory an algorithm uses.
For binary search, space complexity depends on the implementation: the iterative version uses only a few extra variables (low, high, mid), so it needs O(1) auxiliary space, while the recursive version uses the call stack, which can grow to O(log n) frames deep.
Aside from the recursion stack in the recursive approach, binary search does not allocate additional data structures proportional to n. We are using the existing array and just moving our pointers around, so binary search is very space-efficient.
In summary: iterative binary search runs in O(1) extra space, and recursive binary search runs in O(log n) extra space because of the call stack.
If space is a constraint, an iterative solution is preferred. If clarity is more important and recursion is acceptable, the slightly higher space overhead is usually fine. Either way, the space requirement grows at most logarithmically, which is quite modest.
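To see that the recursive version’s only extra cost is stack frames, here is an illustrative Python sketch (the depth parameter is something I added purely for measurement; it is not part of the standard algorithm):

def binary_search_depth(values, target, low=0, high=None, depth=1):
    """Recursive binary search that also reports the recursion depth reached."""
    if high is None:
        high = len(values) - 1
    if low > high:
        return -1, depth                 # not found; depth = stack frames used
    mid = (low + high) // 2
    if values[mid] == target:
        return mid, depth
    if values[mid] < target:
        return binary_search_depth(values, target, mid + 1, high, depth + 1)
    return binary_search_depth(values, target, low, mid - 1, depth + 1)

data = list(range(1_000_000))                 # one million sorted integers
index, depth = binary_search_depth(data, 0)   # search for the leftmost element
print(index, depth)                           # depth stays near log2(1e6), about 20

Even on a million elements the stack never grows beyond roughly 20 frames, while the iterative version shown earlier needs no extra frames at all.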
Also Read: Time and Space Complexity in Data Structure: A Detailed Guide
Next, let’s look at the benefits and limitations of binary search algorithm.
The binary search algorithm is a powerful tool, renowned for its efficiency and speed when it comes to searching through sorted data.
By repeatedly halving the search space, it minimizes the number of comparisons required, making it exponentially faster than linear search, especially for large datasets. Its time complexity of O(log n) allows binary search to handle even massive datasets with ease, reducing the number of steps needed to locate an element.
However, while binary search is highly effective, it is not without its limitations. For it to work properly, the data must be sorted, and the algorithm might encounter challenges when dealing with dynamic or unsorted datasets, linked structures, or duplicate values.
Let’s explore the key benefits and limitations in a structured manner:
| Benefits | Limitations |
| --- | --- |
| Binary search is incredibly fast, with a time complexity of O(log n). | Its performance is only advantageous on sorted datasets. |
| It scales gracefully with larger datasets; doubling the size only adds one additional comparison. | In dynamic datasets with frequent insertions/deletions, maintaining sorted order becomes costly. |
| The algorithm is easy to implement with simple logic. | Requires careful implementation; off-by-one errors and bugs are common. |
| Its performance is deterministic, ensuring consistent execution time. | Works best on random-access data structures (like arrays), not linked lists. |
| It is asymptotically optimal for comparison-based searches. | Doesn’t work well on unsorted data or if the data isn’t sorted in a way that matches the search criteria. |
| Can be applied in various contexts, such as finding insertion points or thresholds. | Handling duplicates or edge cases (e.g., when the target is not found) requires additional steps. |
| It requires minimal extra memory space, making it ideal for large data. | Not suitable for data that frequently changes or requires constant re-sorting. |
| For large datasets, binary search is much faster than linear search. | For very small datasets, linear search might be just as fast due to low constant overhead. |
Also Read: Top 14 Most Common Data Mining Algorithms You Should Know
Next, let’s look at some of the real-world applications of binary search algorithm.
Binary search isn’t just a theoretical concept – it appears in many real-world computing scenarios, often in places you might not directly realize.
Here are some notable applications and analogies: looking up a word in a dictionary or a name in a phone book, database index lookups, debugging tools such as git bisect that binary-search commit history for the change that introduced a bug, and library routines that locate insertion points in sorted collections.
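One concrete, everyday example: Python’s standard library ships binary search in the bisect module, commonly used to find insertion points in sorted lists, one of the uses mentioned in the benefits table above (the cutoff list and score below are made-up illustration data):

import bisect

cutoffs = [50, 60, 70, 80, 90]   # sorted thresholds (hypothetical data)
score = 73

# bisect_right runs a binary search and returns the insertion point that
# keeps the list sorted; here it tells us how many cutoffs the score clears.
position = bisect.bisect_right(cutoffs, score)
print(position)                  # 3, because 73 clears the 50, 60 and 70 cutoffs

# insort inserts while preserving sorted order, again via binary search.
bisect.insort(cutoffs, score)
print(cutoffs)                   # [50, 60, 70, 73, 80, 90]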
Also Read: Apriori Algorithm in Data Mining: Key Concepts, Applications, and Business Benefits in 2025
Next, let’s look at how upGrad can help you learn the binary search algorithm and implement it in your projects.
Whenever you face a problem that involves searching or finding an element in sorted data, binary search should be one of the first approaches you consider. With a solid understanding of binary search complexity and proper usage, you’re well-equipped to write efficient search functionality and recognize situations where this classic algorithm can be applied.
You can further enhance your knowledge with upGrad’s comprehensive, hands-on courses in data structures and algorithms. With expert-led learning, practical coding challenges, and personalized feedback, you’ll gain a deep understanding of how to implement and optimize binary search for real-world applications.
In addition to the courses covered above, here are some additional programs to help you in your learning journey:
If you're unsure where to begin or which area to focus on, upGrad’s expert career counselors can guide you based on your goals. You can also visit a nearby upGrad offline center to explore course options, get hands-on experience, and speak directly with mentors!