
In this post, we analyze the Bellman-Ford single-source shortest path graph algorithm to find its time and space complexity in the best, worst, and average cases.

Table of contents:

- Introduction to Bellman Ford Algorithm
- Time Complexity Analysis
- Worst Case Time Complexity
- Average Case Time Complexity
- Best Case Time Complexity

In Summary, Time & Space Complexity for Bellman Ford Algorithm:

- Worst Case Time Complexity: O(V^{3})
- Average Case Time Complexity: O(E V)
- Best Case Time Complexity: O(E)
- Space Complexity: O(V)

where:

- V is number of vertices
- E is number of edges

# Introduction to Bellman Ford Algorithm

This algorithm finds the shortest path from a source vertex to a destination vertex, i.e. a path whose sum of edge weights is minimal. Another shortest path algorithm is Dijkstra's algorithm.

Unlike Dijkstra's algorithm, which greedily visits the closest neighboring vertex, Bellman-Ford's algorithm relaxes every edge in the graph, even if the edge has a negative weight.

The main advantage of the Bellman-Ford algorithm is its ability to handle negative weights. The drawback is that it cannot find a solution if there exists a negative weight cycle in the graph: such a cycle reduces the path value every time we come back to the same vertex. An example of a negative cycle is shown in the worst case complexity analysis.

This is a drawback, but it is useful in cases where we need to know whether a graph has a negative cycle; in such cases we terminate the algorithm once we find one.

Bellman-Ford's algorithm follows a dynamic programming approach, whereby we divide a problem into smaller sub-problems and reuse their solutions to solve the much larger problem.

You can read more on dynamic programming on the link at the end of this post.

How does Bellman-Ford's algorithm solve the shortest path problem?

Given the graph below,

**Input:** A directed graph G(V, E) with no negative cycles, and a source vertex s.

**Output:** The minimum distance from s to every vertex u reachable from s.

The Bellman-Ford algorithm will look for the shortest path in the following steps.

#### Steps.

- Initialize the graph by choosing a starting vertex s (source) and assign infinity values to all other vertices.
- Visit each edge and relax the path values, repeat the relaxing of edges (|V| - 1) times.
- Check if there is a negative cycle and return an error.
- Return distance and previous vertex.

#### Algorithm.

```
bellmanFord(graph G, listEdges L, vertex S)
    //step 1
    for each vertex V in graph G
        distance[V] <- infinite
        previous[V] <- NULL
    distance[S] <- 0
    //step 2
    repeat (|V| - 1) times
        for each edge (U, V) in graph G
            tempDistance <- distance[U] + edgeWeight(U, V)
            if tempDistance < distance[V]
                distance[V] <- tempDistance
                previous[V] <- U
    //step 3
    for each edge (U, V) in graph G
        if distance[U] + edgeWeight(U, V) < distance[V]
            Error: negative cycle exists
    //step 4
    return distance[], previous[]
```
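The pseudocode above can be sketched as a runnable Python function (names such as `bellman_ford` are our own for this sketch):

```python
import math

def bellman_ford(num_vertices, edges, source):
    """Return (distance, previous) lists, or raise on a negative cycle.

    edges is a list of (u, v, weight) tuples of a directed graph.
    """
    # Step 1: every vertex starts at infinity, the source at 0.
    distance = [math.inf] * num_vertices
    previous = [None] * num_vertices
    distance[source] = 0

    # Step 2: relax every edge (|V| - 1) times.
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                previous[v] = u

    # Step 3: one more pass; any further improvement means a negative cycle.
    for u, v, w in edges:
        if distance[u] + w < distance[v]:
            raise ValueError("negative cycle exists")

    # Step 4: shortest distances and predecessor links.
    return distance, previous
```

For example, `bellman_ford(3, [(0, 1, 4), (0, 2, 5), (1, 2, -3)], 0)` returns distances `[0, 4, 1]`: the negative edge (1, 2) shortens the path to vertex 2 below the direct edge of weight 5.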

Relaxation of edges happens (|V| - 1) times, where |V| is the number of vertices in the graph.

**Edge relaxation** works as follows: if there is an edge between a pair of vertices u and v, then the shortest known path from s to u, d(u), can be extended to a path from s to v by adding edge (u, v) at the end. This path has distance d(u) + c(u, v). If this distance is less than the current d(v), we replace d(v) with this new, smaller value. We repeat this process in each iteration until every d(v) represents the cost of the shortest path from s to v.

- s -> source
- d -> distance from source
- u, v -> pair of vertices
- c -> cost

```
// Relaxation condition.
if (d[u] + c(u, v) < d[v])
    d[v] = d[u] + c(u, v)
```
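As a minimal, concrete illustration of the relaxation condition in Python (vertex names and distances here are invented for this sketch):

```python
import math

# d[v] holds the best known distance from the source to each vertex.
d = {'s': 0, 'u': 3, 'v': math.inf}

def relax(u, v, cost):
    """Relax edge (u, v): shorten d[v] via u if that path is cheaper."""
    if d[u] + cost < d[v]:
        d[v] = d[u] + cost

relax('s', 'u', 5)   # 0 + 5 is not less than 3: no change
relax('u', 'v', 2)   # 3 + 2 < infinity: d['v'] becomes 5
```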

The iterations will be as follows based on the above graph.

**Edges:** (0, 1), (0, 2), (1, 3), (2, 1), (2, 3), (2, 4), (3, 4), (3, 5), (4, 5)

Iteration | Vertices (0 - 5)
---|---
start | 0, inf, inf, inf, inf, inf
1st iteration | 0, 0, -2, 2, 1, 2
2nd iteration | 0, 0, -2, 1, 0, 1
3rd iteration | 0, 0, -2, 1, 0, 1
4th iteration | 0, 0, -2, 1, 0, 1
5th iteration | 0, 0, -2, 1, 0, 1

**0, 0, -2, 1, 0, 1** are the final shortest path distances.

# Time Complexity Analysis

Generally, note the following:

- The complexity of this algorithm depends on the number of **edge comparisons**, for all cases:

  *step 2: if tempDistance < distance[V]*

- **Edge relaxation varies** depending on the graph and the order of visiting edges in the graph. The algorithm may need to go through all iterations while updating edges, and in some cases the result is acquired in the first few iterations, so no further updates take place:

  *step 2: distance[V] = tempDistance*

# Worst Case Time Complexity

We can have a worst case scenario when we encounter a negative cycle in the graph.

An example of a negative cycle,

Secondly, given the graph below, consider another scenario: assuming we process the edges from right to left, we will need (|V| - 1) full passes of comparisons over all |E| edges, and for larger graphs this number increases. The complexity for this case is:

O(|V| * |E|)

Another case is a disjoint or disconnected graph. The algorithm works on the basis of the adjacency of vertices, so once it is done with the component containing the source, there is no edge connecting the other component(s): their distance values stay infinite, and no shortest paths to them can be found.
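A quick sketch of this behaviour (the graph here is our own, chosen for illustration): vertex 2 has no path from the source, so its distance never improves past the infinite initial value.

```python
import math

# Two components: vertices {0, 1} connected, vertex 2 unreachable.
edges = [(0, 1, 7)]
num_vertices = 3
distance = [math.inf] * num_vertices
distance[0] = 0  # source

# Main Bellman-Ford loop: (|V| - 1) passes over every edge.
for _ in range(num_vertices - 1):
    for u, v, w in edges:
        if distance[u] + w < distance[v]:
            distance[v] = distance[u] + w

# distance[2] is still infinite: no edge ever reaches it.
```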

An example of a disjoint graph,

Finally, given a complete graph with edges between every pair of vertices, and considering a case where we have found the shortest path in the first few iterations but still proceed with relaxing edges, we would have to relax |V| * (|V| - 1) / 2 edges, (|V| - 1) times.

Time complexity in the case of a complete graph:

O(E V) = O(|V|^{2}) * O(|V|) = O(|V|^{3})
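A small sanity check of this count (the helper name is our own): on a complete graph with V(V - 1)/2 edges, the main loop performs (V - 1) passes over every edge, and the total grows cubically in V.

```python
def comparisons_on_complete_graph(num_vertices):
    """Edge comparisons made by Bellman-Ford's main loop on a complete graph."""
    num_edges = num_vertices * (num_vertices - 1) // 2   # E = V(V - 1) / 2
    return (num_vertices - 1) * num_edges                # (V - 1) passes over E edges
```

Doubling the number of vertices multiplies this count by roughly 8, as expected of O(V^3) growth.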

Space complexity:

O(V) [Linear space]

V being the number of vertices.

# Average Case Time Complexity

We can improve the worst case running time by terminating the algorithm when the iterations make no changes to the path values. This will lead to fewer iterations.

Another way to improve it is if a vertex V has a distance value that has not changed since the last relaxation we ignore it in the coming iterations thereby decreasing edges that need to be relaxed and increasing edges with correct values after each iteration. You can read more about this in the link at the end of this post.
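The early-termination improvement can be sketched as follows (a sketch with our own names, not the only way to write it): track whether a full pass changed any distance and stop as soon as one does not.

```python
import math

def bellman_ford_early_exit(num_vertices, edges, source):
    """Bellman-Ford that stops once a full pass relaxes no edge."""
    distance = [math.inf] * num_vertices
    distance[source] = 0
    for _ in range(num_vertices - 1):
        updated = False
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                updated = True
        if not updated:          # no edge relaxed: distances are final
            break
    return distance
```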

Relaxation still happens up to (|V| - 1) times, over |E| edges each time; multiplying the two gives the average case time complexity of O(E V).

Space complexity will be O(V), V being number of vertices in graph.

# Best Case Time Complexity

In the above graph, if edge relaxation happens in order from left to right, the algorithm only needs one relaxation iteration to find the shortest paths, making the time complexity O(E), proportional to the number of edges in the graph.
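We can observe this with a pass counter (the example path graph and function name are our own): when edges are listed from the source outward, one pass settles every vertex and a second pass merely confirms it; the reversed order needs a fresh pass per vertex.

```python
import math

def passes_until_stable(num_vertices, edges, source):
    """Run Bellman-Ford with early exit and count the passes performed."""
    distance = [math.inf] * num_vertices
    distance[source] = 0
    passes = 0
    for _ in range(num_vertices - 1):
        passes += 1
        updated = False
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                updated = True
        if not updated:
            break
    return passes

# A path 0 -> 1 -> 2 -> 3 with edges listed source-first...
forward = [(0, 1, 2), (1, 2, 2), (2, 3, 2)]
# ...and the same edges listed in the opposite order.
backward = list(reversed(forward))
```

On this graph, the forward order finishes after 2 passes (one productive, one confirming), while the backward order uses all 3 allowed passes.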

Space complexity is O(V).

In Summary, Time & Space Complexity for Bellman Ford Algorithm:

- Worst Case Time Complexity: O(V^{3})
- Average Case Time Complexity: O(E V)
- Best Case Time Complexity: O(E)
- Space Complexity: O(V)

where:

- V is number of vertices
- E is number of edges

#### Applications

- Checking for existence of negative weight cycles in a graph.
- Finding the shortest path in a graph with negative weights.
- Routing in data networks.

#### Questions

How would we apply the Bellman-Ford algorithm to routing data in networks?

#### References:

Dynamic Programming

Graph Algorithms

Shortest Path Faster Algorithm