
Dijkstra's algorithm - Part 1 - Tutorial


This will be a 3-part series of posts where I will be implementing Dijkstra's Shortest Path algorithm in Python. The three parts will be:


1) Representing the Graph
2) Priority Queue
3) The Algorithm

To represent the graph we'll be using an Adjacency List. An alternative is an Adjacency Matrix, but for a sparse graph an Adjacency List is more efficient.

Adjacency List

An Adjacency List is usually represented as a HashTable (or an Array) where the entry at `u` contains a Linked List. Each node of the Linked List holds `v` and optionally another parameter `w`. Here `u` and `v` are node (or vertex) labels and `w` is the weight of the edge. By traversing the linked list we obtain the immediate neighbours of `u`. Visually, it looks like this.


For implementing this in Python, we'll be using a dict() for the main HashTable. For the Linked List we can use a list of 2-tuples (v, w).

Sidenote: Instead of a list of tuples, you can use a dict() with the key as `v` and the value as the weight (or a tuple of edge properties). The advantage is that you can check for adjacency faster. In Dijkstra's algorithm we don't need that check, so I'll stick with a list for now.
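
As a small illustration of that sidenote, here's what the dict-of-dicts variant could look like (the sample edges are purely illustrative):

```python
# Hypothetical dict-of-dicts adjacency structure: adj[u][v] = weight.
# Checking whether an edge (u, v) exists becomes a single O(1) lookup.
adj = {
    1: {2: 7, 3: 9},
    2: {1: 7},
    3: {1: 9},
}

print(3 in adj[1])   # True  -> edge (1, 3) exists
print(adj[1][3])     # 9     -> its weight
print(4 in adj[1])   # False -> no edge (1, 4)
```
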
So here's the Python code.
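
What follows is a minimal sketch of it, using `fileinput` so the program accepts either piped input or a filename argument (the `read_graph` name is my own choice):

```python
# Build an adjacency list from "u v w" triples, one edge per line.
# fileinput reads from data piped on stdin or from files named on the
# command line, which matches the behaviour described below.
import fileinput
from collections import defaultdict

def read_graph():
    graph = defaultdict(list)        # u -> list of (v, w) tuples
    for line in fileinput.input():
        u, v, w = map(int, line.split())
        graph[u].append((v, w))      # edge u -> v with weight w
        graph[v].append((u, w))      # the graph is undirected
    return graph

if __name__ == "__main__":
    graph = read_graph()
    for u in sorted(graph):
        print(u, graph[u])
```
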

The above program simply reads 3 integers per line (`u v w`) from the data piped to it, or from a file passed as an argument.

I'll be using the undirected graph from Wikipedia for reference. Here's the graph followed by its textual representation.



(Sorry about the overlapping edge labels, I'm new to Graphviz.)

1 2 7
1 3 9
1 6 14
2 3 10
2 4 15
3 4 11
3 6 2
4 5 6
5 6 9
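
If we feed this edge list to the reader sketched above, the resulting structure would look roughly like this (using the list-of-tuples representation):

```python
# Adjacency list for the graph above: vertex -> list of (neighbour, weight)
graph = {
    1: [(2, 7), (3, 9), (6, 14)],
    2: [(1, 7), (3, 10), (4, 15)],
    3: [(1, 9), (2, 10), (4, 11), (6, 2)],
    4: [(2, 15), (3, 11), (5, 6)],
    5: [(4, 6), (6, 9)],
    6: [(1, 14), (3, 2), (5, 9)],
}
```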

So what have we accomplished so far? We have essentially stored the entire graph as a data structure, and we can iterate over the neighbours of any vertex using something like the snippet below.
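
For instance, assuming the `graph` dict built earlier:

```python
# Visit every neighbour v of vertex 1, along with the edge weight w
for v, w in graph[1]:
    print("neighbour:", v, "weight:", w)
```
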
That's it for this part. In the next part, we'll be building our very own Priority Queue with a little help from Python's Standard modules. Until next time...
