
Minimum Hops - Medium

Problem: Given an array of positive, non-zero integers, find the minimum number of hops required to traverse the array, where the value at an index denotes the maximum hop length from that index.


As a brute-force solution, we can use a depth-first search with pruning. This usually explodes for arrays larger than about 50 elements, but it's nevertheless important to understand how it works.

def minhops_dfs(lst):
    sz = len(lst)
    min_hop = [0] * sz  # best path found so far; its length (sz) is an upper bound
    def dfs(index, s, hops, min_hop):
        if index >= sz:              # reached past the end of the array
            if len(min_hop) > s:     # found a shorter path, record it
                min_hop[:] = hops
        elif s > len(min_hop):       # prune: already longer than the best path
            pass
        else:
            hops.append(lst[index])
            for i in range(1, lst[index] + 1):
                dfs(index + i, s + 1, hops, min_hop)
            hops.pop()
    dfs(0, 0, [], min_hop)
    return len(min_hop)
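The blow-up comes from re-exploring the same suffixes over and over. As a side note (not from the original post), caching the best hop count per index tames the search; `minhops_memo` is a hypothetical name for this sketch:

```python
from functools import lru_cache

def minhops_memo(lst):
    """Top-down sketch: memoize the minimum hops needed from each index."""
    sz = len(lst)

    @lru_cache(maxsize=None)
    def best(i):
        if i + lst[i] >= sz:  # one hop clears the end of the array
            return 1
        # cheapest continuation among all indices reachable from i
        return 1 + min(best(j) for j in range(i + 1, i + lst[i] + 1))

    return best(0)

print(minhops_memo([2, 1, 1, 4]))  # → 3
```

Each index is solved once, so the exponential tree collapses to at most n cached states.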
Following our dismal performance with DFS, we move on to the big guns: Dynamic Programming. We define minHop[i] as the minimum number of hops needed to reach the end of the array from index i.

Therefore,

minHop[i] = 1 { if arr[i] + i >= len(arr) } needs just one hop, trivial.
minHop[i] = 1 + min( minHop[i+1], …, minHop[i+arr[i]] ) { the cheapest continuation among the indices reachable from i }

We start with i = len(arr) - 1 and count down to i = 0, at which point minHop[0] holds the minimum number of hops required to reach the end of the array from index 0.
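To see the recurrence in action, here is a hand-sized trace (the array [2, 1, 1, 4] is just an illustrative pick, not from the original post):

```python
# Fill minHop right to left for the sample array [2, 1, 1, 4].
arr = [2, 1, 1, 4]
sz = len(arr)
minHop = [0] * sz
for i in range(sz - 1, -1, -1):
    if arr[i] + i >= sz:
        minHop[i] = 1  # one hop clears the end of the array
    else:
        # cheapest continuation among indices i+1 .. i+arr[i]
        minHop[i] = 1 + min(minHop[i + 1:i + arr[i] + 1])
print(minHop)  # → [3, 3, 2, 1]
```

The answer is minHop[0] = 3, matching the path 0 → 2 → 3 → past the end.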
import random

def random_array(sz, low=10, high=100):
    return [random.randint(low, high) for _ in range(sz)]

def min_hops_dp(lst):
    sz = len(lst)
    min_hops = [0] * sz
    for l in range(sz - 1, -1, -1):
        if lst[l] + l >= sz:
            min_hops[l] = 1  # one hop clears the end
        else:
            end = lst[l] + l  # farthest index reachable from l
            min_hops[l] = 1 + min(min_hops[l + 1:end + 1])
    return min_hops[0]

R = random_array(50, 1, 10)
print(R)
print(minhops_dfs(R))  # minhops_dfs as defined above
print(min_hops_dp(R))
So here we have an O(n^2) solution (each of the n states takes a min over at most n entries), at the expense of O(n) space. Until next time.
