
Posts

Showing posts from June, 2013

3SUM - Hard

Problem - Given an array of integers A, find out if there exists a triple (i, j, k) such that A[i] + A[j] + A[k] == 0. The 3SUM problem is very similar to the 2SUM problem in many aspects, and the solutions I'll be discussing are also very similar. I highly recommend you read the previous post first, since I'll explain only the differences from the algorithms covered there. Let's begin. We start with the naive algorithm: an O(n^3) solution with 3 nested loops, each checking whether the sum of the triple is 0. Since O(n^3) is the higher-order term, we can sort the array in O(n log n) and add a guard at the nested loops to prune off parts of the array, but the complexity still remains O(n^3). The code is pretty simple and similar to the naive algorithm for 2SUM. Moving on, we'll do the same thing we did in 2SUM: replace the inner-most linear search with a binary search. The complexity now drops to O(n^2 log n). Now, the hash table method, this is strictly not…
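As a quick illustration of the sort-plus-binary-search variant described above, here is a minimal Python sketch (the post's own code isn't shown in this excerpt; the function name three_sum_exists is mine):

```python
from bisect import bisect_left

def three_sum_exists(arr):
    """O(n^2 log n) sketch: fix a pair (i, j), then binary-search
    for the third element that would make the sum zero."""
    a = sorted(arr)
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            target = -(a[i] + a[j])
            # search only to the right of j so the indices stay distinct
            k = bisect_left(a, target, j + 1)
            if k < n and a[k] == target:
                return True
    return False
```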

2SUM - Medium

Problem : Given an array with distinct elements, find out if there exists a pair of distinct elements whose sum is equal to a given constant K. Say A = [1, 2, 3, 4] and K = 7; the output should be 3, 4. The naive solution for this problem is pretty trivial: brute-force every pair sum and check if it equals K. You can be a little smarter about this: sort the array first in O(n log n), then iterate over pairs and quit the inner loop as soon as the sum exceeds K; since the array is sorted, the sum can only increase from there. For the rest of the article, I'll be assuming K = 0; this doesn't affect the solutions in any way, and it's trivial to introduce a new variable into the code without major changes. Here's the code, should be easy to follow... I'll be using Java today. Now that the naive solution is out of the way, let's try and do some clever stuff. Notice we can afford to sort the array since the brute-force solution was already O(n^2) (which…
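The post's own code is in Java; for the excerpt here, a short Python sketch of the sorted, early-exit brute force described above (function name and the K parameter default are mine):

```python
def two_sum_exists(arr, k=7):
    """Sort once, then scan pairs; break out of the inner loop as soon
    as the running sum exceeds k, which is valid because the array is
    sorted and later sums can only be larger."""
    a = sorted(arr)
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            s = a[i] + a[j]
            if s == k:
                return (a[i], a[j])
            if s > k:
                break  # prune: sums only grow from here
    return None

print(two_sum_exists([1, 2, 3, 4], 7))  # (3, 4)
```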

Longest Increasing Subsequence - Part 2 - Hard

As promised, this is the follow-up to the earlier post, where I mentioned there is a more efficient algorithm for finding the longest increasing subsequence, better than the O(n^2) DP solution we came up with. It's not easy to understand, but once you get what the variables mean, it seems pretty straightforward. We'll be implementing another DP solution, similar to the previous one, except with an additional data structure that reduces the overall complexity to O(n log n). The Wikipedia article is of no help here; its explanation is contrived and not easy to visualize. I hope this post will clear a lot of doubts. So we need 3 arrays for solving an instance of LIS. M[i] - where M[i] is the minimum ending element of an increasing subsequence of length i. This is the key part of the new algorithm: unlike the earlier solution, where we kept the longest length discovered at index i, we store an array which holds the minimum ending value over all increasing subsequences of length i. I[i] - where M[i] = A[I[i]] si…
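A compact Python sketch of the O(n log n) idea the excerpt is building toward, using only the M array of minimum tails (the I array and the reconstruction of the actual subsequence are omitted here; the function name is mine):

```python
from bisect import bisect_left

def lis_length_nlogn(a):
    """m[i] holds the minimum possible ending element of an increasing
    subsequence of length i+1. m stays sorted, so each new element can
    be placed with a binary search."""
    m = []
    for x in a:
        pos = bisect_left(m, x)
        if pos == len(m):
            m.append(x)   # x extends the longest subsequence seen so far
        else:
            m[pos] = x    # x is a smaller tail for length pos+1
    return len(m)

print(lis_length_nlogn([42, 91, 46, 73, 75, 91, 95]))  # 6
```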

Minimum Hops - Medium

Problem: Given an array of positive non-zero integers, find the minimum number of hops required to traverse the array, where the value at an index denotes the maximum length you can hop from that index. For a brute-force solution, we can use a depth-first search with pruning. This usually explodes with arrays of size greater than 50, but nevertheless it's important to understand how it works. Following our dismal performance using DFS, we move on to the big guns, Dynamic Programming. We define minHop[i] as the minimum number of hops needed from index i to reach the end of the array. Therefore, minHop[i] = 1 if i + arr[i] >= len(arr) - 1 { the last index is reachable in just one hop, trivial }, and otherwise minHop[i] = 1 + min( minHop[i+1 ... i+arr[i]] ) { one hop plus the best choice within my range that gets to the end of the array }. We start with i = len(arr) - 1 and count down to i = 0, at which point minHop[0] will be the minimum number of hops required to reach the end of the array from index 0. So here we have an…
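A minimal Python sketch of the recurrence above, filled in from the back of the array (the function name is mine):

```python
def min_hops(arr):
    """min_hop[i] = minimum number of hops to reach the last index from i."""
    n = len(arr)
    INF = float('inf')
    min_hop = [INF] * n
    min_hop[n - 1] = 0                      # already at the end
    for i in range(n - 2, -1, -1):
        if i + arr[i] >= n - 1:
            min_hop[i] = 1                  # last index reachable in one hop
        else:
            # one hop, plus the best result among the indices we can reach
            min_hop[i] = 1 + min(min_hop[i + 1 : i + arr[i] + 1])
    return min_hop[0]

print(min_hops([2, 3, 1, 1, 4]))  # 2
```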

Celebrity Problem - Medium

Problem : In a party of N people, at most one person is known to everyone. Such a person may or may not be present at the party; if present, (s)he doesn't know anyone at the party. We can only ask questions like "Does A know B?" Find the celebrity in the minimum number of questions. This is a very interesting problem. The answer, i.e. the least number of questions required, changes based on what's given: are we guaranteed there's a celebrity at the party? Does every non-celeb know every other non-celeb? (not very realistic, but a best-case scenario). Once again, the rules: there is at most one celebrity at the party; everyone knows the celebrity; the celebrity knows no one; and we can only ask the question "Does A know B?". When we ask the question, we get a "yes" or a "no", each conveying information about both participants in the question. Let's see those outcomes... If the answer is "Yes", we know A is definitely not a celebrit…
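Since every answer eliminates one person, the standard way to exploit this is a single elimination pass followed by verification. A Python sketch, assuming a knows(a, b) predicate that answers "does a know b?" (the predicate and function name are mine, not from the post):

```python
def find_celebrity(n, knows):
    """Each question eliminates one person: if a knows b, a is not the
    celebrity; otherwise b is not (everyone must know the celebrity).
    A final pass verifies the surviving candidate."""
    candidate = 0
    for person in range(1, n):
        if knows(candidate, person):
            candidate = person          # candidate knows someone: not a celeb
        # else: candidate doesn't know person, so person can't be the celeb
    # verify the candidate against everyone else
    for person in range(n):
        if person == candidate:
            continue
        if knows(candidate, person) or not knows(person, candidate):
            return None                  # no celebrity at the party
    return candidate
```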

Longest Increasing Subsequence - Hard

Problem - Given an array of integers, find the longest increasing subsequence in the array. E.g. for [42, 91, 46, 73, 75, 91, 95], the LIS has length 6: [42, 46, 73, 75, 91, 95]. Note: a subsequence does not require the elements to be contiguous. As always, we begin with our naive solution, which is a DFS/brute-force search trying every possible combination until we get the longest sequence. This is simply exhaustive and will explode on bigger arrays. Time complexity: O(2^n) ~ exponential. Now, if you notice, we repeatedly recompute the same prefix subsequences. This tells us the problem is a good candidate for a dynamic programming algorithm. We define lis[0...L], where L is the length of the array, with lis[i] being the length of the longest increasing subsequence that ends at i. lis[0] = 1 { trivial, since the longest increasing subsequence that ends at the start of the array contains just the first element; this is also true for any arr[i] that is smaller than all preced…
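A short Python sketch of the O(n^2) DP just described, using the lis[i] definition from the excerpt (the function name is mine):

```python
def lis_length(a):
    """lis[i] = length of the longest increasing subsequence ending at i."""
    if not a:
        return 0
    lis = [1] * len(a)                  # every element alone is a LIS of length 1
    for i in range(1, len(a)):
        for j in range(i):
            if a[j] < a[i]:
                # extend the best subsequence ending at j with a[i]
                lis[i] = max(lis[i], lis[j] + 1)
    return max(lis)

print(lis_length([42, 91, 46, 73, 75, 91, 95]))  # 6
```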

Matrix Rotation - Medium

Problem - Write an algorithm capable of rotating a matrix in-place, without any auxiliary storage. I don't have an elaborate thought process to develop this algorithm; it's a few rules you ought to remember to implement rotations of a matrix. To implement these rules, you need some helper functions which are quite simple really: you need to be able to transpose a matrix, reverse a row, and reverse a column. That's it. The rest is just a combination of these 3 functions. Python is awesome, isn't it? Note: these functions create copies of the matrix; we can easily design algorithms that modify the original matrix for square matrices, but for non-square matrices we have to create new matrices. Okay, let's get to the rotations. We need to perform three kinds of rotations: 90, 180 and 270 degrees. 1) Rotation by 90/-270 degrees: transpose the matrix, then reverse each row. 2) Rotation by 180/-180 degrees: there are two methods. The first method is clearly obviou…
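A minimal Python sketch of the copy-returning helpers and the 90-degree rule described above (these are my illustrative versions, not the post's exact functions; the 180-degree variant shown is the reverse-rows-then-reverse-order method):

```python
def transpose(m):
    """Return a transposed copy of matrix m (a list of rows)."""
    return [list(row) for row in zip(*m)]

def reverse_rows(m):
    """Return a copy of m with each row reversed."""
    return [row[::-1] for row in m]

def rotate_90(m):
    """Rotate 90 degrees clockwise: transpose, then reverse each row."""
    return reverse_rows(transpose(m))

def rotate_180(m):
    """Rotate 180 degrees: reverse each row, then reverse the row order."""
    return [row[::-1] for row in m][::-1]

m = [[1, 2, 3],
     [4, 5, 6]]
print(rotate_90(m))   # [[4, 1], [5, 2], [6, 3]]
print(rotate_180(m))  # [[6, 5, 4], [3, 2, 1]]
```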

Subset Sum Problem - Medium

Problem : Given a set of integers A, find out if there is a subset of A which adds up to a particular integer K. Formally called the Subset Sum Problem, this is an NP-complete problem: to solve it, we have to exhaustively search the subsets and verify whether any of them adds up to K. Let's begin with a brute-force DFS search: we're going to construct subsets incrementally until the sum exceeds K. The elements are assumed to be sorted in increasing order. We perform a DFS trying every combination of the elements in the set; the recursive function will either include an element in the sum or exclude it, and the guard prunes off DFS paths that exceed the required sum. There is a more efficient method for this, using Dynamic Programming. Essentially, we start with a single element, then add the next element to the sums we already have to get the new sums possible; these new sums become the input for the next iteration. I'll explain with an example, say we have {a,b,c} You…
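A small Python sketch of that iterative idea: grow the set of sums reachable with the elements seen so far, one element at a time (the function name is mine, and only non-empty subsets are counted here):

```python
def subset_sum_exists(a, k):
    """Iteratively build the set of sums reachable by non-empty subsets
    of the elements processed so far."""
    reachable = set()
    for x in a:
        # every old sum can be extended by x, and x alone is a new subset
        reachable |= {s + x for s in reachable} | {x}
        if k in reachable:
            return True
    return False

print(subset_sum_exists([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5)
```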