
2SUM - Medium


Problem: Given an array of distinct elements, find out whether there exist two distinct elements whose sum equals a given constant K. Say, A = [1,2,3,4] and K = 7; the output should be 3,4.
The naive solution for this problem is pretty trivial: brute-force every pair-sum and check whether the sum is K. You could be a little smart about this. First sort the array in O(n log n), then iterate pairwise and break out of the inner loop as soon as the pair-sum exceeds K; since the array is sorted, the sum can only increase from there.

For the rest of the article, I'll be assuming K = 0, this doesn't affect the solutions in any way and it's trivial to introduce a new variable into the code without major changes.

Here's the code; it should be easy to follow. I'll be using Java today.
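A minimal sketch of the brute-force version (class and method names are illustrative):

```java
public class TwoSumBrute {
    // Check every pair; O(n^2) time, O(1) extra space.
    static int[] twoSum(int[] a, int k) {
        for (int i = 0; i < a.length; i++) {
            for (int j = i + 1; j < a.length; j++) {
                if (a[i] + a[j] == k) {
                    return new int[]{a[i], a[j]};
                }
            }
        }
        return null; // no pair sums to k
    }

    public static void main(String[] args) {
        int[] r = twoSum(new int[]{1, 2, 3, 4}, 7);
        System.out.println(r[0] + "," + r[1]); // prints 3,4
    }
}
```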


Now that the naive solution is out of the way, let's try something cleverer. Notice that we can afford to sort the array, since the brute-force solution was already O(n^2) (the higher-order term). A big advantage of a sorted array is that we can search it in O(log n), so instead of the inner loop doing a linear search for (K - A[i]), we perform a binary search. The running time now drops to O(n log n).
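A sketch of the sort-plus-binary-search variant, using the standard library's `Arrays.binarySearch`; searching only the suffix after index i keeps the two positions distinct (class and method names are illustrative):

```java
import java.util.Arrays;

public class TwoSumBinarySearch {
    // Sort, then binary-search for (k - a[i]) to the right of i.
    // O(n log n) time overall.
    static int[] twoSum(int[] a, int k) {
        int[] s = a.clone();  // keep the caller's array intact
        Arrays.sort(s);
        for (int i = 0; i < s.length; i++) {
            int j = Arrays.binarySearch(s, i + 1, s.length, k - s[i]);
            if (j >= 0) {
                return new int[]{s[i], s[j]};
            }
        }
        return null;
    }

    public static void main(String[] args) {
        int[] r = twoSum(new int[]{1, 2, 3, 4}, 7);
        System.out.println(r[0] + "," + r[1]); // prints 3,4
    }
}
```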

Now the obvious question is: can this be done in linear time? The answer is yes, but at the cost of some space. We introduce a hash set h. While iterating, we check whether the set contains the value (K - A[i]). If it does, we return A[i] and K - A[i]; if it doesn't, we add A[i] to the set and move on to the next element. What we've essentially done is replace the O(log n) search of the previous algorithm with an O(1) membership check. The time complexity is now O(n), and the space complexity is also O(n).

The code is as follows.
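A sketch of the hash-set version, relying on `java.util.HashSet` for the O(1) membership check (class and method names are illustrative):

```java
import java.util.HashSet;
import java.util.Set;

public class TwoSumHash {
    // One pass with a hash set: O(n) time, O(n) space.
    static int[] twoSum(int[] a, int k) {
        Set<Integer> seen = new HashSet<>();
        for (int x : a) {
            if (seen.contains(k - x)) {
                return new int[]{k - x, x};
            }
            seen.add(x);
        }
        return null;
    }

    public static void main(String[] args) {
        int[] r = twoSum(new int[]{1, 2, 3, 4}, 7);
        System.out.println(r[0] + "," + r[1]); // prints 3,4
    }
}
```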
If, as a special case, we are given an array that is already sorted, we can still solve this in linear time without extra space. The algorithm is pretty simple; I'll explain with an example. The code is self-explanatory.

We maintain two indices, left and right. left points to the beginning of the array (i.e., the minimum element) and right points to the end of the array (i.e., the maximum element). Now we calculate S = A[left] + A[right]. There are 3 cases.


  1. S == K, which is the best case scenario. We found the pair (A[left], A[right]) whose sum = K
  2. S > K, i.e., the sum is greater than K, which means the right element contributes more than it should. We want to reduce S: incrementing left would only increase S, which doesn't help, while decrementing right decreases it, so we decrement right. Note that we never decrement left; the part of the array before left has already been ruled out as containing no solution.
  3. S < K, as you can imagine, since S is lower than the required sum, we need to "add" more value to it, hence we increment left.
The process repeats while left < right; once the indices cross, you can conclude there is no solution.

Let's take a short example, say the array is [-10,-5,-2,0,1,2,3], K = 0


  1. left = 0, right = 6, A[left] = -10, A[right] = 3, S = -7, Since S < 0, we increment left
  2. left = 1, right = 6, A[left] = -5, A[right] = 3, S= -2, Since S < 0, we increment left
  3. left = 2, right = 6, A[left] = -2, A[right] = 3, S = 1, Since S > 0, we decrement right
  4. left = 2, right = 5, A[left] = -2, A[right] = 2, S = 0, we found the solution.
Hopefully, that helps. Here's the code that implements the above. Again, this only works on a sorted array. The complexity is linear, since each iteration eliminates one element from consideration.
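A sketch of the two-pointer walk described above (class and method names are illustrative; the input must already be sorted):

```java
public class TwoSumSorted {
    // Two pointers over a sorted array: O(n) time, O(1) space.
    static int[] twoSum(int[] a, int k) {
        int left = 0, right = a.length - 1;
        while (left < right) {
            int s = a[left] + a[right];
            if (s == k) {
                return new int[]{a[left], a[right]};
            } else if (s > k) {
                right--;  // sum too large: discard the largest element
            } else {
                left++;   // sum too small: discard the smallest element
            }
        }
        return null;
    }

    public static void main(String[] args) {
        int[] r = twoSum(new int[]{-10, -5, -2, 0, 1, 2, 3}, 0);
        System.out.println(r[0] + "," + r[1]); // prints -2,2
    }
}
```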



You can find the entire source to the program here
