
Hashing — The Complete Guide

Master hashing from first principles. Learn to recognize when a hash map is the right tool, build strong intuition for frequency counting and complement lookups, and confidently solve interview questions.

45 min read · 10 sections
01

Intuition from Scratch

Why does this pattern exist?

The fundamental problem hashing solves is search. Given a collection of data, how do you check if something exists — or find its associated value — without scanning everything? Arrays give you O(n) search. Sorted arrays give you O(log n) with binary search. But a hash map gives you O(1) on average — constant time, whether you have 10 items or 10 million.

A hash function takes your key (a number, string, or any data) and converts it into an index in an array. That index is where the value lives. When you want to look it up later, you hash the key again, jump straight to that index, and grab the value. No scanning, no comparing — just one jump.
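To make the "key to index" jump concrete, here is a toy hash function, purely illustrative (real hash maps use far more robust hash functions and also handle collisions, which this sketch ignores):

```typescript
// Toy hash: maps a string key to a bucket index in [0, buckets).
// Classic polynomial rolling hash, reduced modulo the table size at each step.
function toyHash(key: string, buckets: number): number {
  let h = 0;
  for (let i = 0; i < key.length; i++) {
    h = (h * 31 + key.charCodeAt(i)) % buckets;
  }
  return h;
}

// The same key always hashes to the same index, so a later lookup
// jumps straight to the slot where the value was stored.
const slot = toyHash("recursion", 16);
```
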

In coding interviews, hashing is the single most versatile pattern. It's the tool you reach for when you need to trade space for time — store something now so you can look it up instantly later. Two Sum, Group Anagrams, finding duplicates, frequency counting — all of these are hashing problems at their core.

Analogies to Build Intuition

📖

The Book Index

A book's index maps topics to page numbers. You don't read every page to find 'recursion' — you look it up in the index, get page 42, and flip straight there. A hash map is the same: it maps keys to values so you can jump directly to what you need.

🗄️

The Filing Cabinet

Imagine a filing cabinet where each drawer is labeled A-Z. To file a document for 'Smith', you go straight to the 'S' drawer. To retrieve it, you go to 'S' again. You never search every drawer. The label is the hash — it tells you exactly where to look.

🎒

The Coat Check

At a coat check, you hand over your coat and get a numbered ticket. When you come back, you give the ticket and get your coat instantly — the attendant doesn't search through all coats. The ticket number is the hash, and the coat rack is the hash table.

What kind of problems does it solve?

Hashing is your go-to when:

  • You need O(1) lookup — "have I seen this before?"
  • You need to count frequencies of elements
  • You need to find a complement or pair — "does target - current exist?"
  • You need to group items by some key (anagrams, patterns)
  • You need to preserve original indices while searching
  • You want to reduce O(n²) brute force to O(n) by trading space for time
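The first bullet, "have I seen this before?", is a one-liner with a hash set. A minimal sketch (LC 217, Contains Duplicate):

```typescript
// Single pass: remember every element seen so far in a Set,
// and answer "seen before?" with an O(1) membership test.
function containsDuplicate(nums: number[]): boolean {
  const seen = new Set<number>();
  for (const num of nums) {
    if (seen.has(num)) return true; // O(1) lookup
    seen.add(num);
  }
  return false;
}
```
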

The core insight

Hashing trades space for time. By storing data in a hash map, you convert O(n) searches into O(1) lookups. The cost is O(n) extra space — but in almost every interview problem, that trade is worth it. When you see a nested loop where the inner loop is "searching for something," hashing can almost always eliminate it.

02

Pattern Recognition

Recognizing when hashing is the right approach is the most important skill. Here are the signals to look for and the traps to avoid.

Keywords & Signals in the Problem Statement

Use Hashing when you see...

  • "Two Sum" — find a pair that adds to target
  • "Find duplicates" or "contains duplicate"
  • "Count frequency" of elements or characters
  • "Group anagrams" or group by some key
  • "First non-repeating character"
  • "Subarray sum equals k" (prefix sum + hash map)
  • "Longest consecutive sequence"
  • "Isomorphic strings" or "word pattern"
  • Need to preserve original indices while searching
  • O(n) time hint with lookup-heavy logic

Don't use Hashing when...

  • You need sorted order — hashing doesn't maintain order (use Sorting or BST)
  • You need the k-th smallest/largest — use Heap or Quickselect
  • The problem is about contiguous subarrays — Sliding Window may be simpler
  • You need range queries (min/max in a range) — use Segment Tree or Sparse Table
  • Space is extremely constrained — hashing uses O(n) extra space
  • You need to find closest neighbors — use Sorting + Binary Search

Hashing vs. Similar Patterns

Pattern        | When to Use                                                       | Key Difference
Hashing        | Need O(1) lookups, frequency counting, or complement search       | O(n) time, O(n) space; no ordering, just fast lookups
Sorting        | Need ordered data for greedy, intervals, or enabling two pointers | O(n log n) time, O(1) extra space; provides order but destroys indices
Two Pointers   | Sorted data + pair search, or in-place operations                 | Requires sorted input; O(1) space but O(n log n) if you need to sort first
Sliding Window | Contiguous subarray/substring with constraint                     | Both pointers move right; often uses a hash map internally for window state
Prefix Sum     | Range sum queries, subarray sum equals k                          | Often combined with hashing — prefix sum values stored in a hash map

The hashing question to always ask

When you see a nested loop where the inner loop searches for something, ask: "Can I store this in a hash map and look it up in O(1)?" If yes, you just eliminated the inner loop and dropped from O(n²) to O(n). This single question solves a huge number of interview problems.

03

Brute Force → Optimized Thinking

Let's walk through the thinking evolution using the most classic hashing problem: Two Sum — given an array of integers and a target, return the indices of two numbers that add up to the target.

1

Brute Force — Check Every Pair

O(n²) time · O(1) space

For each element, scan every other element to see if they add up to the target. This works but the inner loop is doing a linear search — the exact thing hashing eliminates.

Brute Force (TypeScript)
function twoSumBrute(nums: number[], target: number): number[] {
  for (let i = 0; i < nums.length; i++) {
    for (let j = i + 1; j < nums.length; j++) {
      if (nums[i] + nums[j] === target) {
        return [i, j];
      }
    }
  }
  return [];
}
// Problem: The inner loop is searching for (target - nums[i]).
// That search is O(n) per element → O(n²) total.
2

Optimal — Hash Map Complement Lookup

O(n) time · O(n) space

The key insight: the inner loop is searching for a specific value (target - nums[i]). If we store every value we've seen in a hash map, we can check if the complement exists in O(1) instead of O(n).

Hash Map — Optimal (TypeScript)
function twoSum(nums: number[], target: number): number[] {
  const seen = new Map<number, number>(); // value → index

  for (let i = 0; i < nums.length; i++) {
    const complement = target - nums[i];

    if (seen.has(complement)) {
      return [seen.get(complement)!, i];
    }

    seen.set(nums[i], i);
  }

  return [];
}
// Each element: O(1) to check map + O(1) to insert = O(1).
// Total: O(n) time, O(n) space.
// The hash map replaced the inner loop entirely.

Why Does the Optimization Work?

1

The inner loop is a search operation

In the brute force, the inner loop asks: 'Is there an element equal to target - nums[i] somewhere after index i?' That's a linear search — O(n) per query.

2

Hash maps turn search into lookup

By storing each element in a hash map as we go, we convert the search into a single O(1) lookup: seen.has(complement). The map remembers everything we've seen so far.

3

The general principle

Whenever you have a nested loop where the inner loop searches for a value, ask: 'Can I precompute or incrementally build a hash map to make this lookup O(1)?' If yes, you drop from O(n²) to O(n). This is the single most common optimization in coding interviews.

The thinking process matters more than the code

In interviews, walk through this progression: "The brute force checks every pair — O(n²). But the inner loop is just searching for a complement. If I store values in a hash map, I can look up the complement in O(1). Total: O(n) time, O(n) space." This shows the interviewer you understand WHY hashing helps, not just that it does.

04

Core Templates

Hashing problems follow a few recurring templates. Memorize these structures — most problems are variations of one of them.

Template 1: Complement Lookup (Two Sum Pattern)

Store elements as you iterate. For each new element, check if its complement (the value you need) already exists in the map. Used for pair-finding, target-sum, and difference problems.

Complement Lookup Template (TypeScript)
function complementLookup(arr: number[], target: number): number[] {
  const map = new Map<number, number>(); // value → index (or count)

  for (let i = 0; i < arr.length; i++) {
    const complement = target - arr[i];

    if (map.has(complement)) {
      // Found a match — return indices, pair, etc.
      return [map.get(complement)!, i];
    }

    // Store current element for future lookups
    map.set(arr[i], i);
  }

  return []; // not found
}
// When to use: Two Sum, pair with given difference,
// check if any two elements satisfy a condition

Template 2: Frequency Counter

Count occurrences of each element. Then use the frequency map to answer questions about duplicates, most/least frequent, or matching distributions.

Frequency Counter Template (TypeScript)
function frequencyCounter(arr: number[]): Map<number, number> {
  const freq = new Map<number, number>();

  for (const val of arr) {
    freq.set(val, (freq.get(val) ?? 0) + 1);
  }

  // Use the frequency map:
  // - Find duplicates: any value with freq > 1
  // - Find unique: any value with freq === 1
  // - Find most frequent: max value in the map
  // - Compare distributions: compare two frequency maps

  return freq;
}
// When to use: contains duplicate, first unique character,
// valid anagram, top K frequent, majority element,
// ransom note, sort by frequency

Template 3: Grouping by Key

Transform each element into a canonical key, then group elements that share the same key. The key design is the creative part — it encodes what makes elements "equivalent."

Grouping by Key Template (TypeScript)
function groupByKey(items: string[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();

  for (const item of items) {
    // Generate a canonical key for this item
    const key = generateKey(item);

    if (!groups.has(key)) {
      groups.set(key, []);
    }
    groups.get(key)!.push(item);
  }

  return groups;
}

// Key design examples:
// Group Anagrams: key = sorted characters → "eat" → "aet"
// Group by pattern: key = character mapping → "abb" → "0.1.1"
// Group by frequency: key = frequency signature → "aab" → "a2b1"

// When to use: group anagrams, isomorphic strings,
// word pattern, find all duplicates

Template 4: Prefix Sum + Hash Map

Combine prefix sums with a hash map to find subarrays with a target sum. Store prefix sums as you go; check if (currentSum - target) exists in the map. This is one of the most powerful interview patterns.

Prefix Sum + Hash Map Template (TypeScript)
function subarraySumEqualsK(arr: number[], k: number): number {
  const prefixCount = new Map<number, number>();
  prefixCount.set(0, 1); // empty prefix has sum 0

  let currentSum = 0;
  let count = 0;

  for (const val of arr) {
    currentSum += val;

    // If (currentSum - k) was a previous prefix sum,
    // then the subarray between them sums to k
    if (prefixCount.has(currentSum - k)) {
      count += prefixCount.get(currentSum - k)!;
    }

    prefixCount.set(currentSum, (prefixCount.get(currentSum) ?? 0) + 1);
  }

  return count;
}
// When to use: subarray sum equals k, longest subarray with sum k,
// contiguous array (equal 0s and 1s), subarrays divisible by k

Which template to pick?

Ask yourself: (1) Am I looking for a pair/complement? → Template 1. (2) Do I need to count occurrences? → Template 2. (3) Do I need to group equivalent items? → Template 3. (4) Do I need subarray sums? → Template 4 (prefix sum + hash map).

05

Variations & Sub-Patterns

Hashing isn't a single trick — it's a family of techniques. Here are the most common variations and how the approach changes for each.

Variation                 | Hash Map Strategy                                 | Classic Example
Complement Lookup         | Store values; check if complement exists          | Two Sum, Two Sum II, Pair with Target Difference
Frequency Counting        | Count occurrences; query counts                   | Valid Anagram, Majority Element, Top K Frequent
Grouping / Bucketing      | Map canonical key → list of items                 | Group Anagrams, Isomorphic Strings, Word Pattern
Prefix Sum + Hash Map     | Store prefix sums; find subarrays with target sum | Subarray Sum Equals K, Contiguous Array
Hash Set for Existence    | Store seen elements; check membership in O(1)     | Contains Duplicate, Longest Consecutive Sequence, Happy Number
Index Mapping             | Map value → index to preserve positions           | Two Sum (return indices), Intersection of Two Arrays II
Character Mapping         | Map characters between two strings bijectively    | Isomorphic Strings, Word Pattern
Sliding Window + Hash Map | Hash map tracks window state (frequencies)        | Minimum Window Substring, Longest Substring Without Repeating


Deep Dive: Group Anagrams — Key Design

The creative challenge in grouping problems is designing the right key. For anagrams, two strings are equivalent if they have the same characters in any order. The canonical key is the sorted string — all anagrams sort to the same key.

Group Anagrams — Sorted Key (TypeScript)
function groupAnagrams(strs: string[]): string[][] {
  const groups = new Map<string, string[]>();

  for (const s of strs) {
    // Canonical key: sort the characters
    const key = s.split("").sort().join("");

    if (!groups.has(key)) {
      groups.set(key, []);
    }
    groups.get(key)!.push(s);
  }

  return Array.from(groups.values());
}
// "eat" → "aet", "tea" → "aet", "ate" → "aet" → all grouped together
// Time: O(n × k log k) where k = max string length (sorting each string)
// Space: O(n × k) for the map

// Alternative key (avoids sorting): frequency count string
// "eat" → "1a1e1t" or use a 26-length array as key
// This gives O(n × k) time instead of O(n × k log k)

Deep Dive: Longest Consecutive Sequence — Hash Set

Given an unsorted array, find the length of the longest consecutive element sequence. The trick: use a hash set and only start counting from sequence beginnings (elements where num - 1 doesn't exist).

Longest Consecutive Sequence — Hash Set (TypeScript)
function longestConsecutive(nums: number[]): number {
  const numSet = new Set(nums);
  let longest = 0;

  for (const num of numSet) {
    // Only start counting from the BEGINNING of a sequence
    if (!numSet.has(num - 1)) {
      let current = num;
      let streak = 1;

      while (numSet.has(current + 1)) {
        current++;
        streak++;
      }

      longest = Math.max(longest, streak);
    }
  }

  return longest;
}
// Time: O(n) — each element is visited at most twice
//   (once in the outer loop, once in a while loop)
// Space: O(n) for the set
// Key insight: the "if (!numSet.has(num - 1))" check ensures
// we only start counting from sequence beginnings, avoiding
// redundant work.

Deep Dive: Subarray Sum Equals K — Prefix Sum + Hash Map

This is one of the most important patterns to master. The idea: if prefixSum[j] - prefixSum[i] = k, then the subarray from i+1 to j sums to k. Store prefix sums in a hash map to find matching pairs in O(1).

Subarray Sum Equals K — Prefix Sum + Hash Map (TypeScript)
function subarraySum(nums: number[], k: number): number {
  // Map: prefix sum → how many times this sum has occurred
  const prefixCount = new Map<number, number>();
  prefixCount.set(0, 1); // empty prefix (sum = 0) occurs once

  let currentSum = 0;
  let count = 0;

  for (const num of nums) {
    currentSum += num;

    // How many previous prefix sums equal (currentSum - k)?
    // Each one represents a subarray that sums to k
    const target = currentSum - k;
    if (prefixCount.has(target)) {
      count += prefixCount.get(target)!;
    }

    // Record current prefix sum
    prefixCount.set(currentSum, (prefixCount.get(currentSum) ?? 0) + 1);
  }

  return count;
}
// Time: O(n) — single pass with O(1) map operations
// Space: O(n) — storing prefix sums
// Why prefixCount.set(0, 1)? Because if currentSum itself equals k,
// the subarray from index 0 to current index is valid.
// We need (currentSum - k = 0) to be found in the map.
06

Visual Walkthrough

Let's trace through three classic hashing problems step by step to see the pattern in action.

Two Sum — Step by Step

Two Sum — Trace
Input: nums = [2, 7, 11, 15], target = 9
Map: value → index

Step 1: i=0, nums[0]=2
  complement = 9 - 2 = 7
  map.has(7)? NO
  map.set(2, 0)         map = {2: 0}

Step 2: i=1, nums[1]=7
  complement = 9 - 7 = 2
  map.has(2)? YES → index 0
  Return [0, 1] ✓

Answer: [0, 1]

Key observations:
- We check BEFORE we insert — this prevents using the same element twice
- The map stores value → index, so we can return indices directly
- We only need one pass — the complement is always a previously seen element

Group Anagrams — Step by Step

Group Anagrams — Trace
Input: ["eat", "tea", "tan", "ate", "nat", "bat"]

Step 1: "eat" → sort → "aet"   groups = {"aet": ["eat"]}
Step 2: "tea" → sort → "aet"   groups = {"aet": ["eat", "tea"]}
Step 3: "tan" → sort → "ant"   groups = {"aet": ["eat", "tea"], "ant": ["tan"]}
Step 4: "ate" → sort → "aet"   groups = {"aet": ["eat", "tea", "ate"], "ant": ["tan"]}
Step 5: "nat" → sort → "ant"   groups = {"aet": ["eat", "tea", "ate"], "ant": ["tan", "nat"]}
Step 6: "bat" → sort → "abt"   groups = {..., "abt": ["bat"]}

Answer: [["eat", "tea", "ate"], ["tan", "nat"], ["bat"]]

Key: The sorted string is the canonical key.
All anagrams produce the same sorted key — they land in the same group.

Subarray Sum Equals K — Step by Step

Subarray Sum Equals K — Trace
Input: nums = [1, 2, 3, -2, 5], k = 3
prefixCount = {0: 1}  (empty prefix)
currentSum = 0, count = 0

Step 1: num=1, currentSum=1
  target = 1 - 3 = -2    prefixCount.has(-2)? NO
  prefixCount = {0:1, 1:1}

Step 2: num=2, currentSum=3
  target = 3 - 3 = 0     prefixCount.has(0)? YES → count += 1
  Subarray: prefix 3 - prefix 0 = subarray [1,2] at indices 0-1, sums to 3
  count = 1
  prefixCount = {0:1, 1:1, 3:1}

Step 3: num=3, currentSum=6
  target = 6 - 3 = 3     prefixCount.has(3)? YES → count += 1
  Subarray: prefix 6 - prefix 3 = subarray [3] at index 2
  count = 2
  prefixCount = {0:1, 1:1, 3:1, 6:1}

Step 4: num=-2, currentSum=4
  target = 4 - 3 = 1     prefixCount.has(1)? YES → count += 1
  Subarray: prefix 4 - prefix 1 = subarray [2,3,-2], sums to 3
  count = 3
  prefixCount = {0:1, 1:1, 3:1, 6:1, 4:1}

Step 5: num=5, currentSum=9
  target = 9 - 3 = 6     prefixCount.has(6)? YES → count += 1
  Subarray: prefix 9 - prefix 6 = subarray [-2,5], sums to 3
  count = 4
  prefixCount = {0:1, 1:1, 3:1, 6:1, 4:1, 9:1}

Answer: 4

Key insight: prefixCount.set(0, 1) handles the case where
currentSum itself equals k (the subarray starts from index 0).
07

Practice Problems

These 7 problems are carefully ordered from easy to hard. Each one teaches a different aspect of hashing as a problem-solving tool. Solve them in order.

1

LC 1. Two Sum

Easy

Why this pattern:

Complement lookup. Store each value's index in a hash map. For each element, check if (target - current) exists in the map. The most fundamental hashing problem — if you can only solve one, solve this one.

Key idea: Iterate once. For each nums[i], check if map.has(target - nums[i]). If yes, return both indices. If no, store map.set(nums[i], i). Check before inserting to avoid using the same element twice.

2

LC 242. Valid Anagram

Easy

Why this pattern:

Frequency counting. Two strings are anagrams if and only if they have the same character frequencies. Build a frequency map for one string, then decrement for the other — all counts should reach zero.

Key idea: Count characters in string s (increment). Then iterate string t (decrement). If any count goes negative or strings have different lengths, return false. Alternatively, compare two frequency maps for equality.
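The key idea above can be sketched directly (one possible implementation; a Map keeps it general beyond lowercase a-z):

```typescript
// Increment counts for s, decrement for t. Equal lengths plus
// no count going negative means every count ends at exactly zero.
function isAnagram(s: string, t: string): boolean {
  if (s.length !== t.length) return false;

  const freq = new Map<string, number>();
  for (const ch of s) freq.set(ch, (freq.get(ch) ?? 0) + 1);

  for (const ch of t) {
    const next = (freq.get(ch) ?? 0) - 1;
    if (next < 0) return false; // t uses a character s can't cover
    freq.set(ch, next);
  }
  return true;
}
```
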

3

LC 49. Group Anagrams

Medium

Why this pattern:

Grouping by canonical key. Sort each string to create a key — all anagrams produce the same sorted key. Group strings by their key in a hash map.

Key idea: For each string, compute key = str.split('').sort().join(''). Use this key to group strings in a Map<string, string[]>. Return all groups. Alternative: use a frequency-count string as the key for O(n × k) instead of O(n × k log k).
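The frequency-count alternative mentioned above can be sketched as a key function (assuming lowercase a-z input, as in LC 49's constraints):

```typescript
// Frequency-signature key: O(k) per string instead of O(k log k) for sorting.
// All anagrams produce the same 26-slot count vector, joined into a string.
function anagramKey(s: string): string {
  const counts = new Array(26).fill(0);
  for (const ch of s) {
    counts[ch.charCodeAt(0) - 97]++; // 97 = 'a'
  }
  return counts.join(","); // commas keep multi-digit counts unambiguous
}
```

The comma separator matters: without it, counts like [1, 11] and [11, 1] could collide as "111".
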

4

LC 128. Longest Consecutive Sequence

Medium

Why this pattern:

Hash set for O(1) existence checks. Put all numbers in a set. For each number, only start counting if (num - 1) is NOT in the set — this means it's the start of a sequence. Then count forward.

Key idea: The 'only start from sequence beginnings' trick ensures each element is visited at most twice (once in the outer loop, once in a while loop), giving O(n) total despite the nested loop appearance.

5

LC 560. Subarray Sum Equals K

Medium

Why this pattern:

Prefix sum + hash map. Store prefix sums and their counts. For each position, check if (currentSum - k) exists as a previous prefix sum — if so, the subarray between them sums to k.

Key idea: Initialize map with {0: 1} (empty prefix). For each element, update currentSum, check if map.has(currentSum - k), then store currentSum. The map counts how many ways each prefix sum has occurred.

6

LC 76. Minimum Window Substring

Hard

Why this pattern:

Sliding window with hash map state. Use a frequency map to track required characters. Expand the window until all characters are covered, then shrink to find the minimum. The hash map is the window state.

Key idea: Build a 'need' map from string t. Use a 'have' map for the current window. Track 'formed' (how many unique chars are fully satisfied). Expand right until formed === required, then shrink left while still valid.
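A sketch of that plan in TypeScript (a standard sliding-window implementation, with `need`, `have`, and `formed` named as in the description above):

```typescript
function minWindow(s: string, t: string): string {
  if (t.length === 0 || s.length < t.length) return "";

  const need = new Map<string, number>();
  for (const ch of t) need.set(ch, (need.get(ch) ?? 0) + 1);

  const have = new Map<string, number>();
  const required = need.size; // distinct chars that must be satisfied
  let formed = 0;             // distinct chars currently satisfied
  let best = "";
  let left = 0;

  for (let right = 0; right < s.length; right++) {
    const ch = s[right];
    have.set(ch, (have.get(ch) ?? 0) + 1);
    if (need.has(ch) && have.get(ch) === need.get(ch)) formed++;

    // Shrink from the left while the window still covers all of t
    while (formed === required) {
      if (best === "" || right - left + 1 < best.length) {
        best = s.slice(left, right + 1);
      }
      const lc = s[left];
      have.set(lc, have.get(lc)! - 1);
      if (need.has(lc) && have.get(lc)! < need.get(lc)!) formed--;
      left++;
    }
  }
  return best;
}
```
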

7

LC 146. LRU Cache

Hard

Why this pattern:

Hash map + doubly linked list. The hash map gives O(1) key lookup. The linked list maintains access order — most recently used at the head, least recently used at the tail. This is a design problem that tests your understanding of hash map internals.

Key idea: Map stores key → node pointer. On get: move node to head. On put: if key exists, update and move to head. If new, add to head. If over capacity, remove tail node and delete its key from the map.
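As a compact sketch, JavaScript's Map can stand in for the doubly linked list because it iterates keys in insertion order: deleting and re-inserting a key moves it to the "most recent" end, and the first key in iteration order is the eviction victim. An interview answer usually builds the explicit map + linked list described above; this version trades that for brevity.

```typescript
class LRUCache {
  private map = new Map<number, number>();
  constructor(private capacity: number) {}

  get(key: number): number {
    if (!this.map.has(key)) return -1;
    const value = this.map.get(key)!;
    this.map.delete(key);     // refresh recency:
    this.map.set(key, value); // re-insert at the back
    return value;
  }

  put(key: number, value: number): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // First key in insertion order = least recently used
      const lru = this.map.keys().next().value!;
      this.map.delete(lru);
    }
  }
}
```
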

Practice strategy

For each problem: (1) Identify which hashing template applies before coding. (2) Decide what the key and value represent in your map. (3) After solving, write one sentence: "The hash map stores [key] → [value] because [reason]." This builds the intuition for designing hash maps in new problems.

08

Common Mistakes

These are the traps that catch people on hashing problems. Learn them here so you don't learn them in an interview.

🔄

Using the same element twice in Two Sum

You insert nums[i] into the map first, then check for the complement. If target = 2 × nums[i], you find the element you just inserted and return the same index twice.

🔑

Wrong key design in grouping problems

You use a key that doesn't capture the right equivalence. For example, using the string itself instead of its sorted form for anagrams, or using a non-canonical representation.

Ask: 'What makes two items equivalent?' Then design a key that is identical for all equivalent items and different for non-equivalent ones. Test with 2-3 examples before coding.

0️⃣

Forgetting to initialize prefix sum map with {0: 1}

In prefix sum + hash map problems, you forget to seed the map with prefixCount.set(0, 1). This causes you to miss subarrays that start from index 0.

Always initialize with {0: 1}. This represents the empty prefix (sum = 0 before any elements). Without it, subarrays starting from the beginning of the array won't be counted.

📊

Not handling missing keys in the frequency map

You call map.get(key) without checking if the key exists, getting undefined. Or you forget to initialize counts to 0 before incrementing.

Use the pattern: map.set(key, (map.get(key) ?? 0) + 1) for safe incrementing. For lookups, use map.get(key) ?? 0 or check map.has(key) first.

🧹

Not cleaning up zero-count entries

When decrementing frequencies, a count reaches 0 but you leave the key in the map. This causes map.size to report more entries than actually exist, breaking logic that depends on distinct count.

When a frequency drops to 0, delete the key: if (map.get(key) === 0) map.delete(key). This keeps map.size accurate for problems that check distinct element counts.
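The cleanup pattern can be wrapped in a small helper (a sketch; the name `decrement` is my own):

```typescript
// Decrement a frequency and delete the key when it hits zero,
// so map.size always equals the number of distinct elements present.
function decrement(map: Map<string, number>, key: string): void {
  const next = (map.get(key) ?? 0) - 1;
  if (next <= 0) {
    map.delete(key);
  } else {
    map.set(key, next);
  }
}
```
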

Using objects as hash map keys in JavaScript

You try to use an array or object as a Map key, expecting structural equality. But JavaScript Maps use reference equality for objects — two different arrays with the same values are different keys.

Convert complex keys to strings: JSON.stringify(arr) or arr.join(','). For objects, create a canonical string representation. Only primitives (string, number) work as expected for Map keys.
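A quick demonstration of the pitfall and the fix:

```typescript
// Pitfall: Map compares object keys by reference, not by contents.
const byRef = new Map<number[], string>();
byRef.set([1, 2], "pair");
byRef.get([1, 2]); // undefined — this [1, 2] is a different array object

// Fix: serialize to a string so keys compare by value.
const byKey = new Map<string, string>();
byKey.set(JSON.stringify([1, 2]), "pair");
byKey.get(JSON.stringify([1, 2])); // "pair"
```
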

09

Interview Strategy

Knowing when and how to use hashing is half the battle. Here's how to present it in an interview setting to maximize your score.

The 5-Step Interview Flow

1

Clarify & Spot the Search

'I need to find pairs that sum to target? Can there be duplicates? Do I return indices or values?' — Clarifying whether you need indices is critical: if yes, hashing is almost certainly the right approach (sorting destroys indices).

2

State the Brute Force

'The brute force uses nested loops — for each element, scan the rest for a match. That's O(n²). But the inner loop is just a search, and I can replace it with a hash map lookup in O(1).' — Name the optimization explicitly.

3

Design the Hash Map

'My map stores [value → index]. For each element, I check if the complement exists, then insert the current element.' — Be explicit about what the key and value represent. This is the most important design decision.

4

Code It Clean

Write the solution using the appropriate template. Name the map descriptively (seen, freq, prefixCount, groups). Add a comment explaining what the map stores. Keep the logic linear and simple.

5

Test & Analyze

Trace through a small example showing map insertions and lookups. State complexity: 'O(n) time because each element is processed once with O(1) map operations. O(n) space for the map.' Mention the space-time tradeoff.

Common Follow-Up Questions

Follow-Up                                   | How to Handle
"Can you do it without extra space?"        | If you don't need indices: sort the array and use Two Pointers (O(n log n) time, O(1) space). If you need indices, O(n) space is unavoidable — explain the tradeoff.
"What about hash collisions?"               | In interviews, assume O(1) average-case for hash map operations. But mention: worst case is O(n) per operation if all keys hash to the same bucket. Good hash functions make this extremely unlikely.
"What if there are multiple valid answers?" | Depends on the problem. For Two Sum (guaranteed one solution), return the first found. For "find all pairs", collect results in a list. Clarify with the interviewer.
"Can you handle duplicates?"                | Use value → count (frequency map) instead of value → index. Or use value → list of indices if you need all positions. The key design changes based on what you need to track.
"What if the input is sorted?"              | If sorted, Two Pointers is more space-efficient (O(1) vs O(n)). Mention this: "Since the input is sorted, I can use Two Pointers instead of a hash map to save space."
"Extend Two Sum to Three Sum"               | Fix one element, then run Two Sum on the rest. Sort first to skip duplicates efficiently. Time: O(n²). This is a common follow-up — practice the transition.
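The Three Sum extension can be sketched like this (sort, fix an anchor element, then run a two-pointer pair search on the remainder; one common approach, assuming the target sum is 0 as in LC 15):

```typescript
function threeSum(nums: number[]): number[][] {
  const sorted = [...nums].sort((a, b) => a - b);
  const res: number[][] = [];

  for (let i = 0; i < sorted.length - 2; i++) {
    if (i > 0 && sorted[i] === sorted[i - 1]) continue; // skip duplicate anchors

    let lo = i + 1;
    let hi = sorted.length - 1;
    while (lo < hi) {
      const sum = sorted[i] + sorted[lo] + sorted[hi];
      if (sum < 0) lo++;
      else if (sum > 0) hi--;
      else {
        res.push([sorted[i], sorted[lo], sorted[hi]]);
        while (lo < hi && sorted[lo] === sorted[lo + 1]) lo++; // skip duplicate pairs
        while (lo < hi && sorted[hi] === sorted[hi - 1]) hi--;
        lo++;
        hi--;
      }
    }
  }
  return res;
}
```
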

The meta-skill

Interviewers love hashing problems because they test your ability to identify the hidden search operation and eliminate it. The progression from "I see a nested loop" to "the inner loop is a search" to "I can replace it with a hash map" is the exact thinking process they want to see. Practice verbalizing this transition.

10

Summary & Cheat Sheet

Pin this section. Review it before mock interviews and contests.

Quick Revision Cheat Sheet

Core idea: Trade space for time — store data in a hash map to convert O(n) searches into O(1) lookups

Complement lookup: Store values as you go; check if (target - current) exists. Classic Two Sum pattern

Frequency counting: map.set(key, (map.get(key) ?? 0) + 1) — count occurrences, then query counts

Grouping by key: Design a canonical key that's identical for equivalent items. Sort string for anagrams, pattern string for isomorphic

Prefix sum + map: Store prefix sums; check if (currentSum - k) exists. Initialize with {0: 1}. Handles negative numbers

Hash set: When you only need existence checks (no values). Contains Duplicate, Longest Consecutive Sequence, Happy Number

Check before insert: In complement problems, check the map BEFORE inserting current element to avoid using the same element twice

Key design: The creative part of grouping problems. Ask: 'What makes items equivalent?' Then encode that as a string key

Clean up zeros: Delete keys when count reaches 0 to keep map.size accurate for distinct-count logic

JS Map keys: Only primitives (string, number) work as expected. For arrays/objects, convert to string first

Complexity: O(n) time, O(n) space for most hashing solutions. Hash operations are O(1) average case

Mental trigger: "Find pair/complement" or "count frequency" or "group by property" or "need O(1) lookup" → Hashing

Decision Flowchart

When to Use Hashing — Decision Tree
Does the problem involve finding a pair/complement?
├── YES → Complement Lookup (Template 1)
│         Store values; check if (target - current) exists
└── NO → Do you need to count frequencies or find duplicates?
    ├── YES → Frequency Counter (Template 2)
    │         Count occurrences; query for most/least frequent
    └── NO → Do you need to group equivalent items?
        ├── YES → Grouping by Key (Template 3)
        │         Design canonical key; map key → list of items
        └── NO → Do you need subarray sums?
            ├── YES → Prefix Sum + Hash Map (Template 4)
            │         Store prefix sums; check (currentSum - k)
            └── NO → Do you just need existence checks?
                ├── YES → Hash Set — O(1) membership testing
                └── NO → Hashing probably isn't the primary pattern;
                         consider Sorting, Sliding Window, or DP

Hash Map vs. Hash Set — Quick Reference

Feature          | Hash Map (Map)                                                 | Hash Set (Set)
Stores           | Key-value pairs                                                | Keys only (values are implicit)
Use when         | You need to associate data with each key (index, count, list)  | You only need to check if something exists
Common ops       | get, set, has, delete                                          | add, has, delete
Classic problems | Two Sum, frequency count, prefix sum                           | Contains Duplicate, Longest Consecutive, cycle detection
Space            | O(n) for n entries                                             | O(n) for n entries
Time per op      | O(1) average                                                   | O(1) average

One last thing

Hashing is the most frequently tested pattern in coding interviews. It appears in easy problems (Two Sum, Contains Duplicate) and hard ones (LRU Cache, Minimum Window Substring). The core skill isn't memorizing solutions — it's recognizing the hidden search operation in a brute force and replacing it with a hash map lookup. Master that reflex, and you'll unlock a massive category of problems.