||Bah, heaps ...
REAL programmers use fibonacci heaps ;-p
(Well, of course not: too complicated to implement and too much overhead for any normal problem size. Still an interesting theoretical structure: its decrease_key has an amortized constant runtime, in contrast to the O(log n) runtime on a normal binary heap.)
BTW, using a heap for Dijkstra is only interesting for sparse graphs; otherwise, a plain array is better:
Dijkstra runs v iterations, and every iteration
- may call decrease_key for each edge of the extracted vertex
- calls extract_min once
Over the whole run, decrease_key is called at most once per edge,
and extract_min once per vertex.
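To make the counting above concrete, here is a minimal Python sketch of the array variant (the graph format, an adjacency list of `(neighbor, weight)` pairs, is my own assumption for illustration). The "priority queue" is just the distance array: extract_min is a linear scan, and decrease_key is a plain O(1) overwrite.

```python
import math

def dijkstra_array(adj, source):
    """Dijkstra with a plain distance array as the 'priority queue'.

    adj: adjacency lists, adj[u] = [(v, weight), ...]
    Returns the list of shortest distances from source.
    """
    n = len(adj)
    dist = [math.inf] * n
    dist[source] = 0
    done = [False] * n

    for _ in range(n):                      # v iterations
        # extract_min: an O(v) scan over the not-yet-finished vertices
        u = min((x for x in range(n) if not done[x]),
                key=lambda x: dist[x], default=None)
        if u is None or dist[u] == math.inf:
            break
        done[u] = True
        # "decrease_key": O(1) per edge, just overwrite the array entry
        for v, w in adj[u]:
            if not done[v] and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

Each of the v iterations does one O(v) scan plus O(1) work per outgoing edge, which is exactly the e * O(1) + v * O(v) count below.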
So that gives (with e : # of edges and v : # of vertices)
e * O(decrease_key) + v * O(extract_min)
So with a normal array : e * O(1) + v * O(v) = O(e + v^2) = O(v^2) (since e <= v^2)
With a binary heap : e * O(log(v)) + v * O(log(v)) = O(e*log(v)) (for a connected graph, where e >= v-1)
With a fibonacci heap : e * O(1) + v * O(log(v)) = O(e + v*log(v))
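For comparison, a sketch of the binary-heap variant using Python's heapq (same assumed adjacency-list format as above). heapq has no decrease_key, so this uses the common lazy-deletion workaround: push a fresh entry instead of updating, and skip stale entries on pop. The heap holds at most O(e) entries, and log(e) <= log(v^2) = 2*log(v), so the O(e*log(v)) bound still holds.

```python
import heapq
import math

def dijkstra_heap(adj, source):
    """Dijkstra with a binary heap (heapq) and lazy deletion.

    Instead of decrease_key, push a new (dist, vertex) pair and
    discard outdated pairs when they surface at the top.
    """
    n = len(adj)
    dist = [math.inf] * n
    dist[source] = 0
    heap = [(0, source)]

    while heap:
        d, u = heapq.heappop(heap)          # extract_min: O(log v)
        if d > dist[u]:
            continue                        # stale entry, skip it
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))  # stands in for decrease_key
    return dist
```

On a sparse graph this does O(e) pushes/pops of O(log v) each, versus the O(v^2) scans of the array version.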
So unless there are many fewer edges than the possible v^2, a plain array for the distances is better. And having far fewer edges than possible is exactly the definition of a sparse graph.
However, many graphs that model something that exists in reality are sparse. (E.g. there are millions of cities, but every city is only connected to a few dozen others by road. There are billions of people, but everyone only knows (at most) a few hundred others, etc ...)
And in the arena, I never needed a heap for Dijkstra.