My textbook says that the runtime of Dijkstra's algorithm is $O(n) + O(m \log(n)) = O((n+m) \log(n))$. How did they come up with that?
Pseudocode for Dijkstra's algorithm:
```
 1. Dijkstra(V, E, s):
 2.     create an empty priority queue
 3.     for each v ≠ s:
 4.         d(v) ← ∞
 5.     d(s) ← 0
 6.     for each v ∈ V:
 7.         insert v with key d(v) into the priority queue
 8.     while the priority queue is not empty:
 9.         u ← delete-min from the priority queue
10.         for each edge (u,v) ∈ E leaving u:
11.             if d(v) > d(u) + l(u,v):
12.                 decrease-key of v to d(u) + l(u,v) in the priority queue
13.                 d(v) ← d(u) + l(u,v)
```
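In case a concrete version helps, here is a minimal runnable sketch of the pseudocode in Python (the adjacency-dict representation is just my choice, and since Python's `heapq` has no decrease-key, the sketch simulates it by pushing a fresh entry and skipping stale ones):

```python
import heapq

def dijkstra(graph, s):
    """Single-source shortest paths from s.

    graph: dict mapping each vertex to a list of (neighbor, edge_length) pairs.
    Returns d with d[v] = length of a shortest s-v path (inf if unreachable).
    """
    # Lines 3-5: initialize tentative distances.
    d = {v: float("inf") for v in graph}
    d[s] = 0

    # Lines 6-7: build the priority queue keyed by d(v).
    pq = [(d[v], v) for v in graph]
    heapq.heapify(pq)
    done = set()

    # Lines 8-13: repeatedly extract the closest unfinished vertex and relax its edges.
    while pq:
        dist_u, u = heapq.heappop(pq)           # line 9: delete-min
        if u in done or dist_u > d[u]:
            continue                            # stale entry left over from an earlier relaxation
        done.add(u)
        for v, length in graph[u]:              # line 10: edges leaving u
            if d[v] > d[u] + length:            # line 11
                d[v] = d[u] + length            # line 13
                heapq.heappush(pq, (d[v], v))   # line 12: stand-in for decrease-key
    return d

# Example: shortest paths from "s" in a small digraph.
graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("t", 6)], "b": [("t", 3)], "t": []}
print(dijkstra(graph, "s"))  # {'s': 0, 'a': 1, 'b': 3, 't': 6}
```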
My runtime analysis, per pseudocode line:

| Pseudocode line | Cost |
| --- | --- |
| 1 | |
| 2 | $O(1)$ |
| 3 | $O(n)$ |
| 4 | $O(1)$ |
| 5 | $O(1)$ |
| 6 | $O(n)$ |
| 7 | $O(\log(n))$ |
| 8 | $O(n)$ |
| 9 | $O(\log(n))$ |
| 10 | $O(m)$ |
| 11 | $O(1)$ |
| 12 | $O(\log(n))$ |
| 13 | $O(1)$ |
From my analysis, the dominant cost comes from the nested lines 8, 10, 11, and 12, which gives $O(nm\log(n)) \ne O((n+m) \log(n))$.
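Spelled out, I'm multiplying the bounds of the nested lines (line 11's $O(1)$ is absorbed by line 12):

$$\underbrace{O(n)}_{\text{line 8}} \cdot \underbrace{O(m)}_{\text{line 10}} \cdot \underbrace{O(\log(n))}_{\text{line 12}} = O(nm\log(n)).$$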