
Dijkstra's algorithm - Part 3 - Tutorial

Here are Part 1 and Part 2

And now for the third and final part of the tutorial. We'll be implementing the actual algorithm here. But before we get started, let's go over some theory.

Dijkstra's Shortest Path Algorithm (V = number of vertices, E = number of edges)

  • Applies to both directed and undirected graphs.
  • A greedy algorithm: it always picks the locally closest vertex to extend its solution.
  • Uses dynamic programming: it stores the shortest-path solutions from the source to every intermediate vertex, reusing them on the way to the destination.
  • Edge weights cannot be negative.
  • Dijkstra's original implementation was O(V²), since it did not use a minimum priority queue. Fredman & Tarjan introduced the modern version of the algorithm, using a Fibonacci Heap to pick the vertex closest to the source; the complexity thus became O(E + V log V).
  • Using a Binary Heap instead of a Fibonacci Heap increases the complexity to O((E + V) log V).

Variations. There are two variants of the algorithm, with a subtle difference between them. The first, the one you'll traditionally find in textbooks, requires your heap to implement the DECREASE-KEY operation to change the priority of an arbitrary vertex already in the queue. The second variant works by reinserting vertices into the priority queue; the queue might then hold the same vertex several times with different distances to the source, which is of course wasteful, costing extra space and extra pops. For small graphs, both work more or less the same with a Binary Heap or Binomial Heap; with a Fibonacci Heap, the first variant is more efficient. Here's a good discussion about the difference between these two variations.

Our pqueue implementation from Part 2 doesn't have a decrease-key method and is implemented as a Binary Heap, so we'll be using the second variant: any entry popped with a distance worse than the best one already recorded is a stale duplicate and is simply skipped, as you'll see in the implementation below.

The algorithm begins with a couple of auxiliary data structures, each of which can be implemented with a vanilla dictionary (or an array, if the vertices are represented as small non-negative integers).

The first is the distance, where distance[v] denotes the minimum distance from src to v.
The second is the previous (or parent), where previous[v] = u denotes that the predecessor of v on the shortest path from src to v is u. This is required only if you're interested in the actual path from the source to the destination; otherwise, the algorithm can work without it.

Initially, distance has only a single entry, distance[src] = 0, which is trivial. previous[src] is populated with a sentinel value (like None or -1), since the path starts from src itself!

Here's my humble implementation of the algorithm in Python.
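A minimal sketch of that implementation, assuming Python's built-in heapq in place of the pqueue from Part 2, and a graph stored as an adjacency dict mapping each vertex to a list of (neighbour, weight) pairs:

```python
import heapq

def dijkstra(graph, src, dst=None):
    """Single-source shortest paths. graph[u] is a list of (v, weight) pairs."""
    distance = {src: 0}     # distance[v]: best-known distance from src to v
    previous = {src: None}  # previous[v]: predecessor of v on that path
    pq = [(0, src)]         # min-priority queue of (distance, vertex) tuples

    while pq:
        d, u = heapq.heappop(pq)              # u: closest unsettled vertex
        if d > distance.get(u, float('inf')):
            continue                          # stale duplicate entry; skip it
        if u == dst:
            break                             # optional early exit at the target
        for v, w in graph.get(u, []):
            alt = d + w                       # cost(src->u) + cost(u->v)
            if alt < distance.get(v, float('inf')):
                distance[v] = alt             # relax the edge (u, v)
                previous[v] = u
                heapq.heappush(pq, (alt, v))  # second variant: reinsert v

    return distance, previous
```

A quick sanity check on a small undirected graph:

```python
graph = {
    1: [(2, 7), (3, 9), (6, 14)],
    2: [(1, 7), (3, 10), (4, 15)],
    3: [(1, 9), (2, 10), (4, 11), (6, 2)],
    4: [(2, 15), (3, 11), (5, 6)],
    5: [(4, 6), (6, 9)],
    6: [(1, 14), (3, 2), (5, 9)],
}
distance, previous = dijkstra(graph, 1)
print(distance[5])  # 20, along the path 1 -> 3 -> 6 -> 5
```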

Let's break it down.

The first part is the initialization of the algorithm: I simply add the details about the source to the distance and previous dictionaries, then push a tuple into a minimum priority queue (Part 2). The priority is the distance of the vertex from the source. Initially we push the source itself, so (0, src) is the first vertex to be processed.
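Those are just the first three lines of the sketch above:

```python
distance = {src: 0}     # the source is trivially at distance 0 from itself
previous = {src: None}  # sentinel value: the path starts at src
pq = [(0, src)]         # (distance, vertex) tuples; src is processed first
```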

This is the main shebang of the algorithm. The while loop keeps processing entries in the priority queue until it is exhausted. With each iteration, new vertices are added to the queue whenever we find a better path from the source to that vertex. Note that we've effectively assumed every initial distance to be infinity, so every reachable vertex will be processed at least once. We pop the vertex closest to the source at that point in time, say u, then iterate through its adjacent vertices, say v_i, and find out whether a transit route through u is less costly than the best known route from src to v_i.

Is cost(src->u->v_i) < cost(src->v_i)?
This is essentially: cost(src->u) + cost(u->v_i) < cost(src->v_i)
If this holds for some v_i, we "relax" that edge, effectively routing any transit to v_i through u. We update distance and previous for v_i and also push it into the priority queue for future iterations of the while loop. If we were using the first variant of the algorithm, we'd call decrease-key here instead of pushing a new entry into the priority queue.
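In the sketch above, that's the relaxation step inside the neighbour loop:

```python
alt = d + w                                  # cost(src->u) + cost(u->v_i)
if alt < distance.get(v, float('inf')):      # cheaper than cost(src->v_i)?
    distance[v] = alt                        # relax: route v_i through u
    previous[v] = u
    heapq.heappush(pq, (alt, v))             # variant 1 would decrease-key here
```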

The final part simply backtracks through the previous dictionary from the destination to the source. This gives us the shortest path from source to destination, in reverse order.
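A small self-contained helper for that backtracking step might look like this (reconstruct_path is my name for it, not necessarily the post's original):

```python
def reconstruct_path(previous, src, dst):
    """Walk the previous/parent dict from dst back to src, then reverse it."""
    path = []
    v = dst
    while v is not None:       # the sentinel previous[src] = None stops the walk
        path.append(v)
        v = previous.get(v)    # missing vertices also yield None (unreachable)
    path.reverse()
    return path if path and path[0] == src else []  # [] if dst is unreachable

# e.g. reconstruct_path(previous, 1, 5) on the sample graph gives [1, 3, 6, 5]
```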

I'll also post a mirror of the animation of the entire algorithm cycle from Wikipedia here (in case it's ever taken down).


Well, that's about it: Dijkstra's Shortest Path algorithm in Python. You'll find all the code relevant to the tutorial over here. Hope you've enjoyed the series and that it helped you understand this beautiful algorithm better. Please leave any suggestions/improvements/corrections you have in the comments section. Cheers!
