Kadane's Algorithm: The Complete Guide To Starting New

The maximum subarray problem is a perfect example of how a seemingly simple problem can require much deeper thinking than you'd first expect. Here's what we're dealing with: you have a sequence of numbers in an array, some positive, some negative. How do you find the contiguous segment that gives you the biggest possible sum?
Now, one of the most elegant solutions to this problem is Kadane's algorithm. It takes advantage of the contiguity of subarrays to zero in on a solution with beautiful efficiency.
Kadane's algorithm is a beautiful example of dynamic programming that follows the classic pattern:
Solve a small part of the problem
Store the result
Reuse the result
What is the Maximum Subarray Problem?
To appreciate Joseph Kadane’s brilliance and understand the intuition behind his algorithm, we need to understand the problem it tries to solve.
The maximum subarray problem asks us to find the subarray within a one-dimensional array of numbers with the largest sum.
A subarray is a contiguous (unbroken) sequence of elements within an array.
Before exploring Kadane’s optimization, let’s walk through the simpler brute-force method. Hey, I know it’s a slow mess, but understanding slow messes can often illuminate how and why more optimized versions work.
To evaluate every possible subarray:
Initialize the maximum subarray sum to the first item in the list, or to negative infinity (float('-inf')) if you’re feeling fancy:
max_sum = nums[0]
In case you’re wondering why max_sum is initialized to the first element (or negative infinity) rather than zero: if the array contains only negative values, initializing max_sum to zero would incorrectly return zero instead of the least negative value, because max(0, -2) = 0. The maximum value in a collection of negative numbers is still less than 0.
Loop through the array (outer loop with index i)
Initialize the sum in each outer iteration to 0
For each position, consider all subarrays starting at that position (inner loop with index j)
Track the maximum sum encountered.
Return the maximum sum after completing all iterations.
def max_subarray_sum(nums):
    max_sum = nums[0]                      # best sum seen so far
    for i in range(len(nums)):             # each possible starting index
        current_sum = 0
        for j in range(i, len(nums)):      # each possible ending index
            current_sum += nums[j]         # sum of the subarray nums[i..j]
            max_sum = max(max_sum, current_sum)
    return max_sum
This solution has a time complexity of O(n²): The outer loop runs n times, and for each iteration, the inner loop runs up to n times. I warned you, it’s a slow mess.
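To see why the initialization matters, here is a quick usage sketch of the brute-force function above (the all-negative array is a made-up example; the second array reappears in the walkthrough later):

# Quick sanity check of the brute-force version above.
print(max_subarray_sum([-8, -3, -6, -2, -5]))             # -2: with only negative values, the least negative element wins
print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6: the subarray [4, -1, 2, 1]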
Kadane's Optimization
How does Kadane’s algorithm do better than O(n²)? The insight is brilliant yet simple, and what I find most compelling about this optimization is how it forces us to think deeper about the problem's structure.
If all the values in the array were positive, the best subarray would span the entire array. But negative values complicate things by potentially reducing our sum, so we need a smarter strategy.
Kadane's algorithm achieves linear time complexity through a key insight: we don't need to recalculate sums for every possible subarray. Instead, we can make a simple decision at each position, either extend the current subarray or start fresh.
As we scan through the array, we maintain two variables:
current_sum: the maximum sum of a subarray ending at the current position
max_sum: the maximum sum found so far in any subarray
At each element, we face a choice:
Keep building on our existing subarray by adding the current element
Abandon our existing subarray and start a new one, beginning with the current element
This decision boils down to a single comparison: is the current element larger than the current element plus the running sum?
The key idea:
At every index i, we decide:
Should we extend the previous subarray ending at i-1?
Or start a new subarray beginning at i?
We do this by comparing:
current_sum = max(nums[i], current_sum + nums[i])
Or, equivalently, written as two steps:
current_sum += nums[i]
current_sum = max(nums[i], current_sum)
Then we update the global maximum:
max_sum = max(max_sum, current_sum)
A negative running sum will never contribute positively to a subarray moving forward.
For example, with the following arrays:
[600, -1000, 5000]
[600, -500, 5000]
In the first array, adding -1000 to 600 gives -400, which would set the 5000 at index 2 back by 400 (-400 + 5000 = 4600). Getting 4600 is worse than just starting a new subarray at 5000.
But in the second array, adding -500 to 600 still gives us 100, which would give us 5100 if added to 5000; the running sum has a positive effect and is worth keeping.
The principle: If it cannot contribute positively to the sum, it's better to start fresh.
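To make the decision concrete, here is a minimal sketch that prints the extend-or-start-fresh choice Kadane's update makes at each index of those two arrays. The helper trace_decisions is just for this illustration, not part of the final algorithm.

# Illustrative sketch only: trace_decisions is a hypothetical helper that
# prints the extend-or-start-fresh decision at each index.
def trace_decisions(nums):
    current_sum = nums[0]
    for i in range(1, len(nums)):
        extended = current_sum + nums[i]        # keep building on the running sum
        current_sum = max(nums[i], extended)    # or start fresh at nums[i]
        action = "extend" if extended >= nums[i] else "start fresh"
        print(f"index {i}: value={nums[i]}, extending gives {extended} -> {action} (current_sum={current_sum})")

trace_decisions([600, -1000, 5000])   # starts fresh at 5000, since 4600 < 5000
trace_decisions([600, -500, 5000])    # keeps extending, since 5100 > 5000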
Walk with me
Input: nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
Index 0: value = -2
current_sum = max_sum = -2 (initialization)
Index 1: value = 1
current_sum = max(1, -2 + 1) = 1
max_sum = max(-2, 1) = 1
Index 2: value = -3
current_sum = max(-3, 1 + (-3)) = -2
max_sum = max(1, -2) = 1
Index 3: value = 4
current_sum = max(4, -2 + 4) = 4
max_sum = max(1, 4) = 4
Index 4: value = -1
current_sum = max(-1, 4 + (-1)) = 3
max_sum = max(4, 3) = 4
Index 5: value = 2
current_sum = max(2, 3 + 2) = 5
max_sum = max(4, 5) = 5
Index 6: value = 1
current_sum = max(1, 5 + 1) = 6
max_sum = max(5, 6) = 6
Index 7: value = -5
current_sum = max(-5, 6 + (-5)) = 1
max_sum = max(6, 1) = 6
Index 8: value = 4
current_sum = max(4, 1 + 4) = 5
max_sum = max(6, 5) = 6
def max_subarray_sum(nums):
    current_sum = max_sum = nums[0]
    for i in range(1, len(nums)):
        current_sum = max(nums[i], current_sum + nums[i])  # extend or start fresh at nums[i]
        max_sum = max(max_sum, current_sum)                # track the best sum seen so far
    return max_sum
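As a quick check, running the final implementation on the walkthrough input and the two earlier example arrays gives the expected results:

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6, from the subarray [4, -1, 2, 1]
print(max_subarray_sum([600, -1000, 5000]))               # 5000, after starting fresh at the last element
print(max_subarray_sum([600, -500, 5000]))                # 5100, extending through the -500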
Complexity
Here’s where Kadane’s algorithm shines. The brute force approach we looked at earlier has an O(n²) time complexity because we’re checking every possible subarray. With an array of size n, we’re doing roughly n² operations.
Kadane’s algorithm, on the other hand, runs in O(n) time. We only make one pass through the array, making a simple but smart decision at each element.
The space complexity is O(1) since we only need two extra variables to track our current sum and maximum sum, no matter how large the input size grows.
What I love most about Kadane's algorithm is how it transforms a seemingly complex problem into something beautifully simple.
The core insight, that a negative running sum can never help us moving forward, is one of those "aha!" moments that makes you appreciate the elegance of good algorithm design. It's a perfect example of how understanding the structure of a problem can lead to dramatically more efficient solutions.
Kadane’s algorithm reminds me that moving forward with baggage isn’t always brave. It’s often just inefficient or unwise. Sometimes, the smartest path isn’t pushing harder, it’s starting over.