Dijkstra: is there something missing in the article?
Here is how I interpret the Dijkstra algorithm in the article (see below):

Add the start node's neighbours, then pick the highest-priority node from those and add its neighbours to the same heap. Result: you have mixed the contents of the previous node with the current one, so that when the heap is reordered, the highest-priority node (the top) may not be a neighbour of the current node.
Can someone clarify whether this line of thinking is right? The code from the article is given below.


void dijkstra(node start) {
    priorityQueue s;
    s.add(start);
    while (s.empty() == false) {
        top = s.top();
        s.pop();
        mark top as visited;

        check for termination condition (have we reached the target node?)
        add all of top's unvisited neighbors to the stack.
    }
}
Re: Dijkstra: is there something missing in the article? (response to post by mafoko)
I've always thought Dijkstra does the following:

set all L[i] to infinity
set L[start] to zero
add all vertices to the priority queue
while the queue is not empty {
    top = the item from the queue with lowest L[]
    remove top from the queue
    mark top as visited
    update all L[]'s for neighbours of top (if L[top] + D[top,j] < L[j] then ...) and update the queue accordingly
}

What's written in the article seems strange in the sense that it adds a vertex to the queue once for each incoming edge, thus requiring more space. Although it still seems to work correctly and has the same O(E*log(V)) running time. Instead of keeping track of the current minimal distance to a vertex, it keeps the minimal distances through each of the incoming edges; but since only the first value popped (the minimal one) is used, it doesn't do any harm to the correctness.

The reason for that strangeness may be that STL's priority_queue doesn't provide means to update the queue when some priorities change, although it's easy when implementing the heap by yourself.
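For concreteness, here is a minimal C++ sketch of the workaround described above: since STL's priority_queue has no way to update priorities, push a fresh (distance, vertex) pair whenever a shorter path is found and skip stale entries when they surface. The function name and adjacency-list representation are my own assumptions, not from the article.

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Lazy-deletion Dijkstra: instead of decrease-key, push a duplicate
// (distance, vertex) pair on every successful relaxation, and discard
// entries that are already out of date when they reach the top.
std::vector<long long> dijkstra(int start,
        const std::vector<std::vector<std::pair<int, int>>>& adj) {
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(adj.size(), INF);
    std::priority_queue<std::pair<long long, int>,
                        std::vector<std::pair<long long, int>>,
                        std::greater<std::pair<long long, int>>> pq;  // min-heap
    dist[start] = 0;
    pq.push({0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;       // stale entry: a shorter path was found later
        for (auto [v, w] : adj[u])
            if (d + w < dist[v]) {       // relax edge u -> v
                dist[v] = d + w;
                pq.push({dist[v], v});   // the old entry for v stays in the queue
            }
    }
    return dist;
}
```

The queue can hold up to E entries instead of V, which is exactly the extra space cost mentioned above.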
Re: Dijkstra: is there something missing in the article? (response to post by Petr)
Got it, thanks!

[edit]
Actually I had a wrong interpretation of how best-first search works; I thought it worked like the following:

dijkstra(S) {
    visited[S] = 1;
    for (n = 0; n < N; ++n)
        if (g[S][n] && !visited[n]) pq.push(n);
    while (!pq.empty()) {
        t = pq.top(); pq.pop();
        dijkstra(t);
    }
}

I just realised that this doesn't traverse best-first. Don't know what I was thinking!

[edit: typos]
Re: Dijkstra: is there something missing in the article? (response to post by Petr)
BTW, the idea of using a fixed-size heap sounds cool!
Re: Dijkstra: is there something missing in the article? (response to post by Petr)
I don't think Dijkstra mandated using a heap originally.

His algorithm picked the lowest cost vertex that hasn't been visited yet. Just searching through the adjacent edges would achieve this. A heap improves on this immensely.

I've seen Dijkstra implemented with a heap such that a vertex would be added if it wasn't already on the heap, but "decrease_key" used if it was.
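That "push if absent, decrease_key if present" variant needs a heap that knows where each vertex sits. A minimal sketch of such a binary heap plus Dijkstra on top of it; all names and the adjacency-list representation are mine, not from any particular implementation mentioned here.

```cpp
#include <limits>
#include <utility>
#include <vector>

// Binary min-heap that tracks each vertex's position, so a lowered
// key can be sifted up in O(log n) (i.e. decrease_key is supported).
struct MinHeap {
    std::vector<int> heap;          // heap[i] = vertex stored at position i
    std::vector<int> pos;           // pos[v]  = position of v, -1 if absent
    std::vector<long long>& key;    // key[v]  = current distance of v

    MinHeap(int n, std::vector<long long>& k) : pos(n, -1), key(k) {}

    void swapAt(int i, int j) {
        std::swap(heap[i], heap[j]);
        pos[heap[i]] = i; pos[heap[j]] = j;
    }
    void siftUp(int i) {
        while (i > 0 && key[heap[i]] < key[heap[(i - 1) / 2]]) {
            swapAt(i, (i - 1) / 2); i = (i - 1) / 2;
        }
    }
    void siftDown(int i) {
        for (;;) {
            int s = i, l = 2 * i + 1, r = 2 * i + 2;
            if (l < (int)heap.size() && key[heap[l]] < key[heap[s]]) s = l;
            if (r < (int)heap.size() && key[heap[r]] < key[heap[s]]) s = r;
            if (s == i) break;
            swapAt(i, s); i = s;
        }
    }
    bool contains(int v) const { return pos[v] != -1; }
    void push(int v) { heap.push_back(v); pos[v] = heap.size() - 1; siftUp(pos[v]); }
    void decreaseKey(int v) { siftUp(pos[v]); }   // call after lowering key[v]
    int popMin() {
        int v = heap[0];
        swapAt(0, heap.size() - 1);
        heap.pop_back(); pos[v] = -1;
        if (!heap.empty()) siftDown(0);
        return v;
    }
    bool empty() const { return heap.empty(); }
};

// Dijkstra that keeps at most one heap entry per vertex.
std::vector<long long> dijkstraDK(int start,
        const std::vector<std::vector<std::pair<int, int>>>& adj) {
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(adj.size(), INF);
    MinHeap h(adj.size(), dist);
    dist[start] = 0;
    h.push(start);
    while (!h.empty()) {
        int u = h.popMin();
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                if (h.contains(v)) h.decreaseKey(v);   // already on the heap
                else h.push(v);                        // first time we reach v
            }
    }
    return dist;
}
```

The heap never holds more than V entries, unlike the lazy-deletion approach where it can grow to E.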
Re: Dijkstra: is there something missing in the article? (response to post by tolkienfan)
I think Dijkstra just depends on using a priority queue, which can be implemented as a heap or a partially ordered tree for log(n) access.
Re: Dijkstra: is there something missing in the article? (response to post by Kawigi)
There's also an O(N^2) version of Dijkstra that just uses an array (or queue).
Re: Dijkstra: is there something missing in the article? (response to post by eleusive)
You can look at it as using a priority queue with O(1) decrease_key time and O(n) extract_min time (implemented with an array :))
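That array-backed "priority queue" looks roughly like this in C++: decrease_key is a plain array write, extract_min is a linear scan. This is a sketch; the adjacency-matrix representation (with -1 meaning "no edge") and the function name are my own assumptions.

```cpp
#include <limits>
#include <vector>

// O(V^2) Dijkstra using a plain array as the priority queue:
// decrease_key = O(1) array write, extract_min = O(V) scan.
// w[u][v] is the edge weight, or -1 if there is no edge u -> v.
std::vector<int> dijkstraArray(int start,
        const std::vector<std::vector<int>>& w) {
    int n = w.size();
    const int INF = std::numeric_limits<int>::max();
    std::vector<int> dist(n, INF);
    std::vector<bool> done(n, false);
    dist[start] = 0;
    for (int iter = 0; iter < n; ++iter) {
        int u = -1;
        for (int v = 0; v < n; ++v)          // extract_min: scan all vertices
            if (!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
        if (u == -1 || dist[u] == INF) break; // remaining vertices unreachable
        done[u] = true;
        for (int v = 0; v < n; ++v)          // decrease_key: direct writes
            if (w[u][v] >= 0 && dist[u] + w[u][v] < dist[v])
                dist[v] = dist[u] + w[u][v];
    }
    return dist;
}
```

For dense graphs (e close to v^2) this beats the heap versions, matching the complexity analysis later in the thread.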
Re: Dijkstra: is there something missing in the article? (response to post by tolkienfan)
Bah, heaps ...

REAL programmers use Fibonacci heaps ;-p

(Well, of course not: too complicated to implement, and too much overhead for any normal problem size. Still an interesting theoretical structure; its "decrease_key" has an amortized constant runtime, contrary to the O(log n) runtime for a normal heap.)

BTW, using a heap for Dijkstra is only interesting for sparse graphs; otherwise, a normal array is better:
Dijkstra has n iterations, and every iteration
- may call decrease_key for each edge of the vertex
- will call extract_min once

In total,
decrease_key might be called once for each edge
extract_min will be called once for each vertex

So that gives (with e = # of edges and v = # of vertices)
e * O(decrease_key) + v * O(extract_min)

So with a normal array: e * O(1) + v * O(v) = O(v^2)
With a fixed-size heap: e * O(log(v)) + v * O(log(v)) = O(e*log(v))
With a Fibonacci heap: e * O(1) + v * O(log(v)) = O(e + v*log(v))

So unless there are many fewer edges than possible, a normal array to keep the distances is better; having many fewer edges than possible is exactly the definition of a sparse graph.
However, many graphs that model something that exists in reality are sparse. (E.g. there are millions of cities, but every city is only connected by road to a few dozen others. There are billions of people, but everyone only knows (at most) a few hundred others, etc.)

And in the arena, I never needed a heap for Dijkstra.
Re: Dijkstra: is there something missing in the article? (response to post by mafoko)
I got Time Limit Exceeded on the frog problem of the ACM South America regionals (it will be on UVA soon) using priority_queue. I thought: well, it's O(n*log(m)), what the **ck were they asking for?

When I was on the bus, frustrated, I noticed two things.

First, the frog goes to 24 nodes each time, so if I don't delete nodes, the priority_queue grows exponentially, and 24^n is more than 2^n, so accessing a position in log(24^n) time makes the heap useless.

So I'm impressed I didn't take over all the memory of the system, and what the **ck was I thinking about.

Sorry for my language, I'm frustrated; I could have solved 6 problems to get top 2, and the frog pissed me off.

But there is hope... with the set class: you can use it, and keep the distance of each node somewhere else, so that when you insert a new node with a new distance, the old node can be deleted if the new one is cheaper.
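That set-based idea can be sketched like this: keep (distance, vertex) pairs in a std::set ordered by distance, and erase the old pair before inserting a cheaper one, so the set never holds more than one entry per vertex. A sketch under my own assumptions (function name, adjacency-list graph).

```cpp
#include <limits>
#include <set>
#include <utility>
#include <vector>

// Dijkstra with std::set as a priority queue that supports deletion:
// on a successful relaxation of v, erase v's old (distance, v) entry
// and insert the new one, keeping at most one entry per vertex.
std::vector<long long> dijkstraSet(int start,
        const std::vector<std::vector<std::pair<int, int>>>& adj) {
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(adj.size(), INF);
    std::set<std::pair<long long, int>> s;   // ordered by (distance, vertex)
    dist[start] = 0;
    s.insert({0, start});
    while (!s.empty()) {
        int u = s.begin()->second;           // entry with smallest distance
        s.erase(s.begin());
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                if (dist[v] != INF) s.erase({dist[v], v});  // drop stale entry
                dist[v] = dist[u] + w;
                s.insert({dist[v], v});
            }
    }
    return dist;
}
```

Both erase and insert are O(log n), so the total cost is the same O(E*log(V)) as a binary heap, but without the exponential queue growth described above.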