
Greedy Algorithm Patterns: When Greedy Works (and When It Doesn't)


Greedy algorithms are deceptively simple: at every step, make the locally optimal choice and hope it leads to a globally optimal solution. The catch? That "hope" only works when the problem has the right mathematical properties. This guide teaches you exactly when greedy works, when it fails (and you need dynamic programming instead), and the five classic greedy patterns that cover 90% of interview problems.

1. What Makes an Algorithm Greedy?

A greedy algorithm builds a solution incrementally, choosing the option that looks best right now without reconsidering earlier choices. Unlike dynamic programming, it never backtracks. The locally optimal choice at each step must produce a global optimum — if it does, the algorithm is correct; if it doesn't, greedy gives a wrong answer.

Think of making change with US coins (25¢, 10¢, 5¢, 1¢): always pick the largest coin that fits. For 41¢ you get 25 + 10 + 5 + 1 = 4 coins — optimal. But with coins {1, 3, 4} and target 6, greedy picks 4 + 1 + 1 = 3 coins while the optimal answer is 3 + 3 = 2 coins. Same strategy, different coin set, wrong answer.
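Both outcomes are easy to check with a few lines of Python. This is a quick sketch (the function name `greedy_change` is ours, not from any library):

```python
def greedy_change(coins, amount):
    """Greedy change-making: repeatedly take the largest coin that fits."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin   # take as many of this coin as possible
        amount %= coin
    return count if amount == 0 else None  # None: exact change impossible

print(greedy_change([25, 10, 5, 1], 41))  # 4 — optimal for US coins
print(greedy_change([4, 3, 1], 6))        # 3 — but 3 + 3 = 2 coins is optimal
```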

2. When Greedy Works

A problem is solvable by greedy when it has two properties:

  • Greedy Choice Property: A globally optimal solution can always be constructed by making locally optimal choices. You never need to "undo" a decision.
  • Optimal Substructure: After making a greedy choice, the remaining subproblem is a smaller instance of the same problem type. The optimal solution to the subproblem, combined with the greedy choice, yields an optimal solution to the original problem.

When both properties hold, greedy is faster and simpler than DP because it avoids exploring overlapping subproblems. The time complexity is usually O(n log n) due to sorting, followed by a single O(n) scan.

3. When Greedy Fails

Greedy fails when a local decision constrains future choices in ways that prevent reaching the global optimum. Common red flags:

  • Counting all possible ways: "How many distinct paths…" requires DP.
  • 0/1 choices with knapsack structure: Taking/skipping items with weight constraints needs DP.
  • Non-canonical coin systems: Coin Change with arbitrary denominations.
  • String/sequence construction: Edit Distance, Longest Common Subsequence.

For a deeper comparison, see our Greedy vs Dynamic Programming breakdown with side-by-side examples.
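For the non-canonical coin case above, bottom-up DP recovers what greedy loses — a minimal sketch:

```python
def coin_change_dp(coins, amount):
    """Minimum coins to make `amount`, or -1 if impossible (DP, not greedy)."""
    INF = float('inf')
    dp = [0] + [INF] * amount  # dp[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a and dp[a - coin] + 1 < dp[a]:
                dp[a] = dp[a - coin] + 1
    return dp[amount] if dp[amount] != INF else -1

print(coin_change_dp([1, 3, 4], 6))  # 2 (3 + 3) — greedy returns 3 here
```

Unlike greedy, the DP considers every coin at every amount, so a bad early choice can never lock it out of the optimum.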

4. Five Classic Greedy Patterns

Pattern A: Activity / Interval Selection

Sort intervals by end time and greedily pick the earliest-ending interval that doesn't overlap the last chosen one. This maximizes the number of non-overlapping intervals (or equivalently, minimizes removals). Used in Non-overlapping Intervals (LC 435), Meeting Rooms, and Minimum Number of Arrows to Burst Balloons.

# Interval Selection Template
def max_non_overlapping(intervals):
    intervals.sort(key=lambda x: x[1])  # sort by end time
    count = 0
    end = float('-inf')

    for start_i, end_i in intervals:
        if start_i >= end:  # no overlap
            count += 1
            end = end_i

    return count

Pattern B: Reach-Based Problems

Track the farthest reach or remaining resource. At each position, greedily extend your reach; if the current index ever exceeds the farthest reach, that position is unreachable and the answer is false. Used in Jump Game (LC 55), Jump Game II (LC 45), and Gas Station (LC 134).

# Farthest-Reach Template
def can_jump(nums):
    farthest = 0
    for i in range(len(nums)):
        if i > farthest:
            return False
        farthest = max(farthest, i + nums[i])
    return True
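The same farthest-reach idea extends to Jump Game II (LC 45), where we count the minimum number of jumps instead of returning a boolean — a sketch:

```python
def min_jumps(nums):
    """Minimum jumps to reach the last index (assumes it is reachable)."""
    jumps = 0
    current_end = 0  # farthest index reachable with `jumps` jumps
    farthest = 0     # farthest index reachable with one more jump
    for i in range(len(nums) - 1):
        farthest = max(farthest, i + nums[i])
        if i == current_end:   # exhausted this jump's range: jump again
            jumps += 1
            current_end = farthest
    return jumps

print(min_jumps([2, 3, 1, 1, 4]))  # 2 (index 0 → 1 → 4)
```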

Pattern C: Frequency-Based Scheduling (Huffman-Style)

When tasks or characters have frequencies and require spacing, schedule the most frequent item first. This minimizes idle time or total length. Used in Task Scheduler (LC 621), Reorganize String (LC 767), and Huffman Encoding.

# Frequency-First Scheduling
from collections import Counter

def least_interval(tasks, n):
    freq = list(Counter(tasks).values())
    max_freq = max(freq)
    max_count = freq.count(max_freq)

    # (max_freq - 1) full blocks of size (n + 1),
    # plus the last partial block with max_count tasks
    result = (max_freq - 1) * (n + 1) + max_count
    return max(result, len(tasks))

Pattern D: Exchange Argument Proofs

The exchange argument is a proof technique: assume an optimal solution that doesn't follow the greedy order, then show you can swap two adjacent elements to get an equal or better solution. This proves the greedy ordering is optimal. Classic example: minimizing total weighted completion time — sort jobs by weight/time ratio.

# Exchange Argument: Minimize weighted completion time
# Job i has weight w[i] and processing time p[i]
# Sort by w[i]/p[i] in decreasing order

def min_weighted_completion(jobs):
    # jobs = [(weight, processing_time), ...]
    jobs.sort(key=lambda j: j[0] / j[1], reverse=True)

    total_cost = 0
    current_time = 0
    for weight, proc_time in jobs:
        current_time += proc_time
        total_cost += weight * current_time

    return total_cost
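On small inputs you can sanity-check the exchange argument by brute force: compare the greedy ratio order against every permutation (the job values below are illustrative, chosen here for the demo):

```python
from itertools import permutations

def cost(order):
    """Total weighted completion time for jobs scheduled in the given order."""
    t = total = 0
    for weight, proc_time in order:
        t += proc_time
        total += weight * t
    return total

jobs = [(3, 1), (1, 2), (2, 2)]  # (weight, processing_time)
greedy = sorted(jobs, key=lambda j: j[0] / j[1], reverse=True)
best = min(permutations(jobs), key=cost)
print(cost(greedy), cost(best))  # 14 14 — greedy order matches the optimum
```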

Pattern E: Sorting + Greedy Scan

Many greedy problems reduce to: sort by a key, then scan left-to-right making optimal decisions. The sorting establishes an order that ensures the greedy choice is safe. Used in Assign Cookies, Boats to Save People, and Minimum Number of Platforms.

# Sorting + Two-Pointer Greedy
def num_rescue_boats(people, limit):
    people.sort()
    left, right = 0, len(people) - 1
    boats = 0

    while left <= right:
        if people[left] + people[right] <= limit:
            left += 1  # pair lightest with heaviest
        right -= 1
        boats += 1

    return boats

5. Step-by-Step LeetCode Solutions

Jump Game (LC 55)

Given an array where each element is the max jump length at that position, determine if you can reach the last index. Greedy insight: track the farthest index reachable. If any index i is beyond your farthest reach, you're stuck.

def canJump(nums: list[int]) -> bool:
    farthest = 0

    for i in range(len(nums)):
        if i > farthest:
            return False  # can't reach index i
        farthest = max(farthest, i + nums[i])
        if farthest >= len(nums) - 1:
            return True  # early exit

    return True

# Example: [2, 3, 1, 1, 4]
# i=0: farthest = max(0, 0+2) = 2
# i=1: farthest = max(2, 1+3) = 4 >= 4 → True

Time: O(n), Space: O(1). The greedy choice property holds because if you can reach index i, you can reach every index between 0 and i, so extending the farthest reach is always optimal.

Non-overlapping Intervals (LC 435)

Given a collection of intervals, find the minimum number of intervals to remove so the rest don't overlap. Greedy insight: sort by end time. Always keep the interval that ends earliest — this leaves the most room for future intervals.

def eraseOverlapIntervals(intervals: list[list[int]]) -> int:
    intervals.sort(key=lambda x: x[1])  # sort by end time
    removals = 0
    prev_end = float('-inf')

    for start, end in intervals:
        if start >= prev_end:
            prev_end = end  # keep this interval
        else:
            removals += 1   # remove (skip) this interval

    return removals

# Example: [[1,2],[2,3],[3,4],[1,3]]
# Sorted by end: [[1,2],[2,3],[1,3],[3,4]]
# Keep [1,2], keep [2,3], remove [1,3], keep [3,4]
# Answer: 1 removal

Time: O(n log n), Space: O(1). Sorting by end time (not start time) is critical — it ensures we never discard an interval that would have been a better choice.

Task Scheduler (LC 621)

Given tasks with a cooldown period n, find the minimum number of CPU intervals required. Greedy insight: the most frequent task determines the lower bound. Place the most frequent task first, creating (max_freq - 1) blocks of size (n + 1), then fill in the remaining tasks.

from collections import Counter

def leastInterval(tasks: list[str], n: int) -> int:
    freq = list(Counter(tasks).values())
    max_freq = max(freq)
    # how many tasks share the maximum frequency
    max_count = freq.count(max_freq)

    # Visualize: (max_freq - 1) full rows of width (n + 1)
    # plus a last partial row with max_count tasks
    #
    # Example: tasks = A A A B B B, n = 2
    # A B _ | A B _ | A B  → (3-1)*(2+1) + 2 = 8
    result = (max_freq - 1) * (n + 1) + max_count

    # If there are more tasks than idle slots, no idle needed
    return max(result, len(tasks))

# Example: tasks = ["A","A","A","B","B","B"], n = 2
# max_freq = 3, max_count = 2
# result = (3-1)*(2+1) + 2 = 8
# len(tasks) = 6, answer = max(8, 6) = 8

Time: O(n), Space: O(1). The key insight is that idle time is determined entirely by the most frequent task and the cooldown period — all other tasks just fill in the gaps.

6. Greedy vs DP Decision Framework

Use this mental checklist when deciding between greedy and DP:

Question                                    | Greedy                            | DP
Can I make irrevocable choices?             | Yes — each choice is final        | No — need to compare subproblems
Does local best = global best?              | Yes — greedy choice property      | No — need to explore all options
Overlapping subproblems?                    | Doesn't matter (no recomputation) | Yes — memoize to avoid re-solving
Asks for count of ways / all solutions?     | Rarely possible                   | Natural fit
Can you prove it with an exchange argument? | Yes — strong signal for greedy    | N/A

When in doubt, start with greedy. If you find a counterexample where the greedy choice leads to a suboptimal solution, switch to DP. For an interactive comparison, visit our Greedy vs DP comparison page.

7. Common Mistakes

  • Sorting by the wrong key: Intervals must be sorted by end time for selection problems, not start time. Sorting by start time gives a different (usually wrong) answer.
  • Assuming greedy without proof: Always verify with a counterexample or exchange argument. "It seems greedy" is not a proof.
  • Confusing fractional and 0/1 variants: Fractional Knapsack is greedy (sort by value/weight). 0/1 Knapsack requires DP — you can't take half an item.
  • Forgetting edge cases: Empty input, single element, all identical values, already sorted input. Greedy algorithms often have subtle off-by-one errors.
  • Not considering ties: When two choices look equally good, your tie-breaking rule must still preserve optimality. Document your tie-breaking strategy.
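To make the fractional vs 0/1 distinction concrete, here is the greedy Fractional Knapsack in sketch form: sort by value/weight ratio and take a fraction of the first item that doesn't fit whole.

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight). Max total value when fractions are allowed."""
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)        # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

# Classic example: ratios 6, 5, 4 → take items 1 and 2 whole, 2/3 of item 3
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # ≈ 240
```

The same input under 0/1 rules cannot split the third item, which is exactly why that variant needs DP.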

8. Practice Problems (10 LeetCode Greedy Essentials)

Start with the first five (core patterns), then move to the advanced five. Each problem maps to one of the patterns above.

#    | Problem                   | Difficulty | Pattern
55   | Jump Game                 | Medium     | Reach-Based
45   | Jump Game II              | Medium     | Reach-Based
435  | Non-overlapping Intervals | Medium     | Interval Selection
621  | Task Scheduler            | Medium     | Frequency-Based
881  | Boats to Save People      | Medium     | Sorting + Greedy
134  | Gas Station               | Medium     | Reach-Based
452  | Minimum Number of Arrows  | Medium     | Interval Selection
767  | Reorganize String         | Medium     | Frequency-Based
1029 | Two City Scheduling       | Medium     | Exchange Argument
135  | Candy                     | Hard       | Two-Pass Greedy

Explore all greedy problems with visual animations on our Greedy Pattern page.
