Posts

Showing posts with the label python

Inversion Count - Medium

Given an array A[1..n] of distinct integers, find out how many pairs (i, j) there are with A[i] > A[j] and i < j. Such pairs are called inversions. For example, in [100,20,10,30] the inversions are (100,20), (100,10), (100,30), and (20,10), so the count is of course 4. The question appears as an exercise in Chapter 4 of Introduction to Algorithms (3rd Edition). As per our tradition, we'll start with the worst algorithm and incrementally improve upon it. So on with it. The naive algorithm (check every pair) has complexity O(n^2). We can do this much faster using divide and conquer, more aptly with a variation of Merge Sort. To do this, unfortunately, we will have to mutate the array. The key to the algorithm is understanding that an ascending sorted array has an inversion count of zero, say [10,20,30]. If we place an element 100 before the rest of the array, the inversion count becomes 3: [100,10,20,30]. If we placed the 100 at the second position the in...
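The excerpt above cuts off before the code, so here is a sketch of the merge-sort idea it describes. Note one deliberate difference: rather than mutating the caller's array, this version recurses on copies, trading some memory for safety. The function name `count_inversions` is my own choice, not necessarily the post's.

```python
def count_inversions(arr):
    """Count pairs (i, j) with i < j and arr[i] > arr[j] in O(n log n).

    Works on a copy, so the caller's list is left untouched.
    """
    def sort_count(a):
        if len(a) <= 1:
            return a, 0
        mid = len(a) // 2
        left, inv_left = sort_count(a[:mid])
        right, inv_right = sort_count(a[mid:])
        merged, inv = [], inv_left + inv_right
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                # Every remaining element of `left` is greater than right[j]
                # and appears before it, so each one is an inversion.
                inv += len(left) - i
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged, inv

    return sort_count(list(arr))[1]

print(count_inversions([100, 20, 10, 30]))  # → 4
```

The counting happens entirely inside the merge step: whenever an element from the right half wins, it must be smaller than everything still waiting in the left half.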

Dijkstra's algorithm - Part 3 - Tutorial

Here's Part 1 and Part 2. And now for the third and final part of the tutorial. We'll be implementing the actual algorithm here. But before we get started, let's go over some theory.

Dijkstra's Shortest Path Algorithm (V = number of vertices, E = number of edges):

- Applies to both directed and undirected graphs.
- Greedy algorithm: always picks the local minimum to extend its solution.
- Uses dynamic programming: stores the shortest-path solutions from the source to every other vertex on the way to the destination.
- Edge weights cannot be negative.
- The original implementation by Dijkstra was O(V^2), since it did not use a minimum priority queue. Fredman & Tarjan introduced the modern version of the algorithm, using a Fibonacci Heap to pick the vertex closest to the source; the complexity thus became O(E + V log V). Using a Binary Heap instead of a Fibonacci Heap increases the complexity to O((E + V) log V).

Variations. There are two variants of the algorithm, t...
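The implementation itself is cut off in this excerpt, so the following is only a sketch of the binary-heap variant summarized above, using the standard-library `heapq` with "lazy deletion" of stale entries (the graph layout and edge weights here are made-up examples, not from the post):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from `source` to every reachable vertex.

    `graph` maps each vertex to a list of (neighbour, weight) pairs.
    Binary heap + lazy deletion gives O((E + V) log V).
    """
    dist = {source: 0}
    pq = [(0, source)]          # (distance-so-far, vertex) tuples
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue            # stale entry; a shorter path was found already
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {'a': [('b', 7), ('c', 9), ('f', 14)],
         'b': [('c', 10), ('d', 15)],
         'c': [('d', 11), ('f', 2)],
         'd': [('e', 6)],
         'f': [('e', 9)]}
print(dijkstra(graph, 'a'))  # → {'a': 0, 'b': 7, 'c': 9, 'f': 11, 'd': 20, 'e': 20}
```

Since `heapq` has no decrease-key operation, improved distances are simply pushed again and outdated heap entries are skipped when popped; the `(distance, vertex)` tuples order themselves by distance, exactly the tuple-comparison trick from Part 2.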

Dijkstra's algorithm - Part 2 - Tutorial

For Part 2 of the series, we'll be developing a Priority Queue implemented using a Heap. Continuing from Part 1, we'll be using Python for the implementation. Fortunately, Python has a PriorityQueue class in the `queue` module. It's more than enough for our purposes, though the `queue` module was designed to be thread-safe, so there is a small overhead that comes with it. The PriorityQueue is a minimum priority queue by default; it takes in tuples, which are compared against each other to determine their position in the queue. For those unfamiliar with tuple comparison in Python, it works something like this:

>>> (1,2) < (3,4)
True
>>> (1,2) < (1,3)
True
>>> (1,2) < (1,1)
False
>>> (1,2) < (1,2,3)
True
>>> (1,2,3,4) < (1,2,3)
False

The first unequal element of the two tuples determines the outcome of the comparison (a tuple that is a prefix of a longer one compares as smaller). So say you have a bunch of tasks Ti with priorities Pi; then we push (Pi, Ti) into the...
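To make the (Pi, Ti) idea concrete, here is a minimal sketch using `queue.PriorityQueue`; the task names are made-up placeholders:

```python
from queue import PriorityQueue

# Push (priority, task) tuples; lower number = higher priority,
# because PriorityQueue is a minimum priority queue by default.
pq = PriorityQueue()
pq.put((3, 'write tests'))
pq.put((1, 'fix bug'))
pq.put((2, 'review PR'))

while not pq.empty():
    priority, task = pq.get()   # items come out in ascending priority order
    print(priority, task)
# 1 fix bug
# 2 review PR
# 3 write tests
```

Tuple comparison does the work here: `(1, 'fix bug') < (2, 'review PR')` because the first elements already differ. One caveat worth knowing: if two tasks share a priority, the comparison falls through to the tasks themselves, so they must be comparable (or you add a tie-breaking counter as the middle element).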

First gap in a Sorted Array - Easy

Problem: Given an array of integers in strictly increasing order (no duplicates), find the first number not present in the array. E.g. for [11,12,13,25,37,49] the output should be 14. Seems simple enough? Let's go with our intuition first and use a naive algorithm. Time complexity: O(n). Notice that if we don't have any gaps, say [11,12,13,14,15], then the span of the indices equals the span of the values: high - low = arr[high] - arr[low] (4 = 15 - 11). So we can check whether a range contains a gap simply by checking if high - low == arr[high] - arr[low]. Let's be clever about this fact and solve the problem in O(log n). We use our trusty Binary Search, but instead of "finding" an element we partition the array into two halves. We first check if the first half is "complete"; if it is, we find the gap in the second half, else we check the first half. We keep doing this till we have just 2 elements left (which is trivial to solve). Here's a recursive algorith...
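The recursive version is truncated above, so here is an iterative sketch of the same halving idea (iterative rather than recursive is my choice, not necessarily the post's):

```python
def first_gap(arr):
    """First missing value in a strictly increasing integer array, or None.

    A range arr[lo..hi] is "complete" iff hi - lo == arr[hi] - arr[lo].
    """
    lo, hi = 0, len(arr) - 1
    if arr[hi] - arr[lo] == hi - lo:
        return None                 # no gap anywhere
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if arr[mid] - arr[lo] == mid - lo:
            lo = mid                # first half complete: gap is in the second half
        else:
            hi = mid                # gap is in the first half
    # Down to two adjacent elements with a gap between them.
    return arr[lo] + 1

print(first_gap([11, 12, 13, 25, 37, 49]))  # → 14
```

Each iteration halves the range while keeping the invariant "arr[lo..hi] contains a gap", so the loop runs O(log n) times.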

Dijkstra's algorithm - Part 1 - Tutorial

This will be a 3-part series of posts where I will be implementing Dijkstra's Shortest Path algorithm in Python. The three parts will be: 1) Representing the Graph, 2) Priority Queue, 3) The Algorithm. To represent a graph we'll be using an Adjacency List. Another alternative is an Adjacency Matrix, but for a sparse graph an Adjacency List is more efficient. Adjacency List: An Adjacency List is usually represented as a HashTable (or an Array) where the entry at `u` contains a Linked List. The Linked List contains `v` and optionally another parameter `w`. Here `u` and `v` are node (or vertex) labels and `w` is the weight of the edge. By traversing the linked list we obtain the immediate neighbours of `u`. Visually, it looks like this. For implementing this in Python, we'll be using the dict() for the main HashTable. For the Linked List we can use a list of 2-sized tuples (v,w). Sidenote: Instead of a list of tuples, you can use a dict(), ...
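A minimal sketch of that representation, assuming an undirected graph; the `add_edge` helper and the example edges are mine, not from the post:

```python
def add_edge(graph, u, v, w, directed=False):
    """Record edge u -> v with weight w; add the reverse edge if undirected.

    `graph` is a dict mapping each vertex label to a list of (v, w) tuples,
    standing in for the HashTable-of-LinkedLists described above.
    """
    graph.setdefault(u, []).append((v, w))
    if not directed:
        graph.setdefault(v, []).append((u, w))

graph = {}
add_edge(graph, 'a', 'b', 7)
add_edge(graph, 'a', 'c', 9)
add_edge(graph, 'b', 'c', 10)

# Walking the list at `u` yields the immediate neighbours of `u`.
for v, w in graph['a']:
    print(v, w)
# b 7
# c 9
```

For a sparse graph this stores only the edges that exist, versus the V^2 cells an adjacency matrix would need regardless of edge count.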