LC 1207: Unique Number of Occurrences
Difficulty: Easy
Category: Array, Hash Table
Companies: Amazon, Google, Microsoft
Problem Statement
Given an array of integers arr, return true if the number of occurrences of each value in the array is unique, or false otherwise.
Examples
Example 1:
Input: arr = [1,2,2,1,1,3]
Output: true
Explanation: The value 1 has 3 occurrences, 2 has 2 occurrences, and 3 has 1 occurrence. No two values have the same number of occurrences.
Example 2:
Input: arr = [1,2]
Output: false
Explanation: The value 1 has 1 occurrence, and 2 has 1 occurrence. Two values have the same number of occurrences.
Example 3:
Input: arr = [-3,0,1,-3,1,1,1,-3,10,0]
Output: true
Explanation: The value -3 has 3 occurrences, 0 has 2 occurrences, 1 has 4 occurrences, and 10 has 1 occurrence. No two values have the same number of occurrences.
Constraints
1 <= arr.length <= 1000
-1000 <= arr[i] <= 1000
Clarification Questions
Before diving into the solution, here are 5 important clarifications and assumptions to discuss during an interview:
- Occurrence definition: What does “occurrence” mean? (Assumption: the number of times each value appears in the array)
- Unique occurrences: What makes occurrences “unique”? (Assumption: each occurrence count appears only once - no two values share the same count)
- Return value: What should we return? (Assumption: a boolean - true if all occurrence counts are unique, false otherwise)
- Empty array: What if the array is empty? (Assumption: return true - no occurrences means the counts are trivially unique; the constraints guarantee at least one element anyway)
- Single element: What if the array has one element? (Assumption: return true - a single occurrence count is unique)
Interview Deduction Process (10 minutes)
Step 1: Brute-Force Approach (2 minutes)
Initial Thought: “I need to check if occurrence counts are unique. Let me count occurrences manually.”
Naive Solution: Count occurrences of each value manually, then check if all counts are unique by comparing all pairs.
Complexity: O(n²) time, O(n) space
Issues:
- O(n²) time - inefficient for larger inputs
- Manual pairwise comparison is error-prone
- Doesn’t leverage a hash set
- Leaves clear room for optimization
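For reference, here is a minimal sketch of this brute-force idea (not part of the final solutions; the standalone function name uniqueOccurrencesBruteForce is an illustrative assumption):

// Brute-force sketch: count each distinct value with a nested scan, then
// compare every pair of counts. O(n²) time, O(n) space. Illustrative only.
#include <vector>
using namespace std;

bool uniqueOccurrencesBruteForce(const vector<int>& arr) {
    vector<int> values, counts;
    for (int num : arr) {
        // Skip values that were already counted.
        bool seen = false;
        for (int v : values) if (v == num) { seen = true; break; }
        if (seen) continue;
        int c = 0;
        for (int other : arr) if (other == num) ++c;  // O(n) count per distinct value
        values.push_back(num);
        counts.push_back(c);
    }
    // Any matching pair of counts means the occurrence counts are not unique.
    for (size_t i = 0; i < counts.size(); ++i)
        for (size_t j = i + 1; j < counts.size(); ++j)
            if (counts[i] == counts[j]) return false;
    return true;
}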
Step 2: Semi-Optimized Approach (3 minutes)
Insight: “I can use a hash map to count occurrences, then check whether those counts are unique.”
Improved Solution: Use a hash map to count occurrences, then check that all counts are unique by comparing the counts directly or with another data structure; one such check is sketched after the notes below.
Complexity: O(n) time, O(n) space
Improvements:
- A hash map enables efficient counting
- O(n) time is a large improvement over O(n²)
- The uniqueness check over the counts is the remaining piece
- That check is what the final step streamlines
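One way to realize this intermediate step without a second hash set is to collect the counts and sort them. This is a sketch under that assumption; the function name uniqueOccurrencesSorted is illustrative:

// Semi-optimized sketch: hash-map counting, then a sort-based uniqueness check.
// Counting is O(n); sorting the k distinct counts adds O(k log k).
#include <algorithm>
#include <unordered_map>
#include <vector>
using namespace std;

bool uniqueOccurrencesSorted(const vector<int>& arr) {
    unordered_map<int, int> freqs;
    for (int num : arr) freqs[num]++;              // value -> occurrence count

    vector<int> counts;
    counts.reserve(freqs.size());
    for (const auto& kv : freqs) counts.push_back(kv.second);

    sort(counts.begin(), counts.end());            // equal counts become adjacent
    for (size_t i = 1; i < counts.size(); ++i)
        if (counts[i] == counts[i - 1]) return false;
    return true;
}

The hash-set version in the next step avoids the sort entirely.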
Step 3: Optimized Solution (5 minutes)
Final Optimization: “Use a hash set to check the uniqueness of the counts.”
Best Solution: Use a hash map to count occurrences, then insert each count into a hash set. If the set’s size equals the map’s size, all counts are unique.
Complexity: O(n) time, O(n) space
Key Realizations:
- A hash map for counting frequencies is the standard tool
- A hash set makes the uniqueness check concise
- O(n) time is optimal - every element must be examined at least once
- O(n) auxiliary space covers both structures (the array-based variant below trades this for fixed-size arrays)
Solution Approaches
Approach 1: Hash Map + Hash Set (Recommended)
Algorithm:
- Count frequency of each element using hash map
- Store all frequencies in a hash set
- Check if hash set size equals hash map size (no duplicate frequencies)
Time Complexity: O(n)
Space Complexity: O(n)
class Solution {
public:
    bool uniqueOccurrences(vector<int>& arr) {
        unordered_map<int, int> freqs;   // value -> occurrence count
        unordered_set<int> occurs;       // distinct occurrence counts
        for (int num : arr) freqs[num]++;
        for (auto& [num, freq] : freqs)
            occurs.insert(freq);
        // Every count is distinct exactly when the set is as large as the map.
        return occurs.size() == freqs.size();
    }
};
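A small driver can exercise this class against the three examples from the problem statement. The sketch below is illustrative only; it assumes the Solution class above plus the usual <iostream>, <vector>, <unordered_map>, and <unordered_set> includes and using namespace std; sit at the top of the same file:

// Test driver (illustrative); assumes the Approach 1 Solution is defined above.
int main() {
    Solution sol;
    vector<int> a = {1, 2, 2, 1, 1, 3};                  // Example 1, expected true
    vector<int> b = {1, 2};                              // Example 2, expected false
    vector<int> c = {-3, 0, 1, -3, 1, 1, 1, -3, 10, 0};  // Example 3, expected true
    cout << boolalpha
         << sol.uniqueOccurrences(a) << "\n"
         << sol.uniqueOccurrences(b) << "\n"
         << sol.uniqueOccurrences(c) << "\n";
    return 0;
}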
Approach 2: Early Termination Optimization
Algorithm:
- Count frequency of each element using hash map
- While inserting frequencies into hash set, check for duplicates
- Return false immediately if duplicate frequency found
Time Complexity: O(n)
Space Complexity: O(n)
class Solution {
public:
    bool uniqueOccurrences(vector<int>& arr) {
        unordered_map<int, int> freqs;
        unordered_set<int> occurs;
        for (int num : arr) freqs[num]++;
        for (auto& [num, freq] : freqs)
            // insert().second is false when this count is already in the set.
            if (!occurs.insert(freq).second) return false;
        return true;   // no duplicate count was found
    }
};
Approach 3: Array-Based Counting
Algorithm:
- Use array to count frequencies (since values are in range [-1000, 1000])
- Use another array to count frequency counts
- Check if any frequency count exceeds 1
Time Complexity: O(n)
Space Complexity: O(1) - fixed size arrays
class Solution {
public:
    bool uniqueOccurrences(vector<int>& arr) {
        int freq[2001] = {0};  // Offset by 1000 for negative numbers
        int count[1001] = {0}; // Max frequency is 1000
        // Count frequencies
        for (int num : arr) {
            freq[num + 1000]++;
        }
        // Count frequency counts; a repeat means the occurrence counts are not unique
        for (int i = 0; i < 2001; i++) {
            if (freq[i] > 0) {
                count[freq[i]]++;
                if (count[freq[i]] > 1) return false;
            }
        }
        return true;
    }
};
Algorithm Analysis
Approach Comparison
| Approach | Time | Space | Pros | Cons |
|---|---|---|---|---|
| Hash Map + Set | O(n) | O(n) | Simple, readable | Extra space for set |
| Early Termination | O(n) | O(n) | Early exit optimization | Slightly more complex |
| Array Counting | O(n) | O(1) | Constant space | Limited to small ranges |
Key Insights
- Frequency Counting: Use hash map to count occurrences of each element
- Duplicate Detection: Use hash set to detect duplicate frequencies
- Size Comparison: If set size equals map size, all frequencies are unique
- Early Termination: Can optimize by checking for duplicates during insertion
Implementation Details
Hash Set Insert Behavior
// insert() returns pair<iterator, bool>
// second is true if insertion successful (no duplicate)
if(!occurs.insert(freq).second) return false;
Array Offset Technique
// Offset by 1000 to handle negative numbers
freq[num + 1000]++;
Edge Cases
- Single Element: [1] → true (the single count of 1 is unique)
- All Same Elements: [1,1,1] → true (the single count of 3 is unique)
- All Different Elements: [1,2,3] → false (every value occurs once, so the count 1 repeats)
- Duplicate Frequencies: [1,2,2,3] → false (values 1 and 3 both have frequency 1)
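These cases can be checked mechanically. The sketch below assumes the Approach 1 Solution class and its headers are already in scope; the helper name checkEdgeCases is illustrative:

#include <cassert>  // for assert; other headers are assumed included with Approach 1

// Edge-case checks (illustrative); assumes the Approach 1 Solution is defined above.
void checkEdgeCases() {
    Solution sol;
    vector<int> single        = {1};
    vector<int> allSame       = {1, 1, 1};
    vector<int> allDifferent  = {1, 2, 3};
    vector<int> duplicateFreq = {1, 2, 2, 3};
    assert(sol.uniqueOccurrences(single) == true);          // one count: unique
    assert(sol.uniqueOccurrences(allSame) == true);         // single count of 3: unique
    assert(sol.uniqueOccurrences(allDifferent) == false);   // all three values have count 1
    assert(sol.uniqueOccurrences(duplicateFreq) == false);  // values 1 and 3 both have count 1
}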
Follow-up Questions
- What if the array could contain very large numbers?
- How would you handle floating-point numbers?
- What if you needed to find which frequencies are duplicated?
- How would you optimize for very large arrays?
Optimization Techniques
- Early Termination: Stop as soon as duplicate frequency is found
- Space Optimization: Use arrays instead of hash maps for small ranges
- Memory Efficiency: Avoid storing unnecessary data
- Cache Performance: Array-based approach has better cache locality
Code Quality Notes
- Readability: First approach is most readable and maintainable
- Performance: Array approach is fastest for small ranges
- Scalability: Hash map approach works for any range
- Robustness: All approaches handle edge cases correctly
This problem demonstrates the importance of choosing the right data structure based on constraints and requirements, showing how different approaches can optimize for different aspects.