
priority-queue's Introduction

PriorityQueue


This crate implements a priority queue with a function to change the priority of an object. Items and priorities are stored in an IndexMap, and the queue itself is implemented as a heap of indexes.

Please read the API documentation here

Usage

To use this crate, add the following line to the dependencies section of your Cargo.toml:

priority-queue = "2.0.0"

or use the command cargo add priority-queue

Version numbers follow the semver convention.

Then use the data structure inside your Rust source code as in the following examples.

Remember that, if you need serde support, you should compile using --features serde.
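The --features flag corresponds to enabling the feature in Cargo.toml; a minimal sketch of such a dependency entry (assuming the feature name serde mentioned above):

```toml
[dependencies]
priority-queue = { version = "2.0.0", features = ["serde"] }
```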

Examples

use priority_queue::PriorityQueue;

fn main() {
    let mut pq = PriorityQueue::new();

    assert!(pq.is_empty());
    pq.push("Apples", 5);
    pq.push("Bananas", 8);
    pq.push("Strawberries", 23);

    assert_eq!(pq.peek(), Some((&"Strawberries", &23)));

    for (item, _) in pq.into_sorted_iter() {
        println!("{}", item);
    }
}

By default, the highest priority element will be extracted first. The order can be easily reversed using the standard wrapper Reverse<T>.

use priority_queue::PriorityQueue;
use std::cmp::Reverse;

fn main() {
    let mut pq = PriorityQueue::new();

    assert!(pq.is_empty());
    pq.push("Apples", Reverse(5));
    pq.push("Bananas", Reverse(8));
    pq.push("Strawberries", Reverse(23));

    assert_eq!(pq.peek(), Some((&"Apples", &Reverse(5))));

    for (item, _) in pq.into_sorted_iter() {
        println!("{}", item);
    }
}

Speeding up

You can use a custom BuildHasher for the underlying IndexMap to achieve better performance. For example, you can create the queue with a speedy hasher such as FxHash:

use hashbrown::hash_map::DefaultHashBuilder;

let mut pq = PriorityQueue::<_, _, DefaultHashBuilder>::with_default_hasher();

Attention: FxHash does not offer any protection against DoS attacks. This means that some pathological inputs can make operations on the hashmap O(n^2). Use the standard hasher if you cannot control the inputs.

Benchmarks

Some benchmarks have been run to compare the performance of this priority queue to the standard BinaryHeap, also using the FxHash hasher. On a Ryzen 9 3900X, the benchmarks produced the following results:

test benchmarks::priority_change_on_large_double_queue     ... bench:          25 ns/iter (+/- 1)
test benchmarks::priority_change_on_large_double_queue_fx  ... bench:          21 ns/iter (+/- 1)
test benchmarks::priority_change_on_large_queue            ... bench:          15 ns/iter (+/- 0)
test benchmarks::priority_change_on_large_queue_fx         ... bench:          11 ns/iter (+/- 0)
test benchmarks::priority_change_on_large_queue_std        ... bench:     190,345 ns/iter (+/- 4,976)
test benchmarks::priority_change_on_small_double_queue     ... bench:          26 ns/iter (+/- 0)
test benchmarks::priority_change_on_small_double_queue_fx  ... bench:          20 ns/iter (+/- 0)
test benchmarks::priority_change_on_small_queue            ... bench:          15 ns/iter (+/- 0)
test benchmarks::priority_change_on_small_queue_fx         ... bench:          10 ns/iter (+/- 0)
test benchmarks::priority_change_on_small_queue_std        ... bench:       1,694 ns/iter (+/- 21)
test benchmarks::push_and_pop                              ... bench:          31 ns/iter (+/- 0)
test benchmarks::push_and_pop_double                       ... bench:          31 ns/iter (+/- 0)
test benchmarks::push_and_pop_double_fx                    ... bench:          24 ns/iter (+/- 1)
test benchmarks::push_and_pop_fx                           ... bench:          26 ns/iter (+/- 0)
test benchmarks::push_and_pop_min_on_large_double_queue    ... bench:         101 ns/iter (+/- 2)
test benchmarks::push_and_pop_min_on_large_double_queue_fx ... bench:          98 ns/iter (+/- 0)
test benchmarks::push_and_pop_on_large_double_queue        ... bench:         107 ns/iter (+/- 2)
test benchmarks::push_and_pop_on_large_double_queue_fx     ... bench:         106 ns/iter (+/- 2)
test benchmarks::push_and_pop_on_large_queue               ... bench:          84 ns/iter (+/- 1)
test benchmarks::push_and_pop_on_large_queue_fx            ... bench:          78 ns/iter (+/- 2)
test benchmarks::push_and_pop_on_large_queue_std           ... bench:          71 ns/iter (+/- 1)
test benchmarks::push_and_pop_std                          ... bench:           4 ns/iter (+/- 0)

The priority change on the standard queue was obtained with the following:

pq = pq.drain().map(|Entry(i, p)| {
    if i == 50_000 {
        Entry(i, p/2)
    } else {
        Entry(i, p)
    }
}).collect();

The takeaway from the benchmarks is that the data structures provided by this crate are generally slightly slower than the standard BinaryHeap.

On small queues (<10000 elements), a priority change obtained on the standard BinaryHeap with the code above is far slower than the change_priority function provided by PriorityQueue and DoublePriorityQueue. As the queue grows, the operation keeps taking roughly the same amount of time on PriorityQueue and DoublePriorityQueue, while it takes more and more time on the standard queue.

It also emerges that the ability to pop either the minimum or the maximum element comes at a cost, visible in all the operations on DoublePriorityQueue, which are slower than the corresponding operations on PriorityQueue.

Contributing

Feel free to contribute to this project with pull requests and/or issues. All contributions should be under a license compatible with the GNU LGPL and with the MPL.

Changes

  • 2.0.2 Fix docs.rs build
  • 2.0.1 Documentation improvements
  • 2.0.0 This release contains breaking changes
    • Some methods now require the trait bound H: BuildHasher. This change will likely have little or no impact on users.
    • The standard library support is no longer auto-detected. The feature "std" is included in the default feature set, or else can be enabled like any other Cargo feature. Users that need to support no_std targets will have to disable default features.
  • 1.4.0 Improve shrink_to_fit to also shrink the internal IndexMap (#50)
  • 1.3.2 Bug fix in the log2_fast internal function
  • 1.3.1 Bug fix: #42
  • 1.3.0 Return bool from change_priority_by (Merged #41)
  • 1.2.3 Further performance optimizations (mainly on DoublePriorityQueue)
  • 1.2.2 Performance optimizations
  • 1.2.1 Bug fix: #34
  • 1.2.0 Implement DoublePriorityQueue data structure
  • 1.1.1 Convert documentation to Markdown
  • 1.1.0 Smooth Q: Sized requirement on some methods (fix #32)
  • 1.0.5 Bug fix: #28
  • 1.0.4 Bug fix: #28
  • 1.0.3 Bug fix: #26
  • 1.0.2 Added documentation link to Cargo.toml so the link is shown in the results page of crates.io
  • 1.0.1 Documentation
  • 1.0.0 This release contains breaking changes!
    • From and FromIterator now accept custom hashers -- Breaking: every usage of from and from_iter must specify some type to help the type inference. To use the default hasher (RandomState), often it will be enough to add something like

        let pq: PriorityQueue<_, _> = PriorityQueue::from...

      or you can add a type definition like

        type Pq<I, P> = PriorityQueue<I, P>

      and then use Pq::from() or Pq::from_iter()

    • Support no-std architectures

    • Add a method to remove elements at arbitrary positions

    • Remove take_mut dependency -- Breaking: the change_priority_by signature has changed. It now takes a priority setter F: FnOnce(&mut P). If you want, you can use the unsafe take_mut yourself or use std::mem::replace

  • 0.7.0 Implement the push_increase and push_decrease convenience methods.
  • 0.6.0 Allow the usage of custom hasher
  • 0.5.4 Prevent panic on extending an empty queue
  • 0.5.3 New implementation of the Default trait avoids the requirement that P: Default
  • 0.5.2 Fix documentation formatting
  • 0.5.1 Add some documentation for iter_mut()
  • 0.5.0 Fix #7 implementing the iter_mut features
  • 0.4.5 Fix #6 for change_priority and change_priority_by
  • 0.4.4 Fix #6
  • 0.4.3 Fix #4 changing the way PriorityQueue serializes. Note that old serialized PriorityQueues may be incompatible with the new version. The API itself, instead, has not changed.
  • 0.4.2 Improved performance using some unsafe code in the implementation.
  • 0.4.1 Support for serde when compiled with --features serde. serde is marked as optional and serde-test as a dev-dependency. Now compiling the crate won't also download and compile serde-test, nor serde when it is not needed.
  • 0.4.0 Support for serde when compiled with cfg(serde)
  • 0.3.1 Fix #3
  • 0.3.0 Implement PartialEq and Eq traits

priority-queue's People

Contributors

garro95, kstrafe, lnicola, ocecaco, petr-tik, reima, tsionyx


priority-queue's Issues

Bug in PriorityQueue::remove()

PriorityQueue::remove() does not work correctly in some cases.

Here is a minimal reproducer:

use std::collections::hash_map::RandomState;

use priority_queue::PriorityQueue;

fn main() {
    let mut queue = PriorityQueue::<i32, i32, RandomState>::default();

    for i in 0..7 {
        queue.push(i, i);
    }

    queue.remove(&0);

    let mut last_priority = *queue.peek().unwrap().1;
    while let Some((i, priority)) = queue.pop() {
        dbg!(priority);
        assert!(last_priority >= priority);
        last_priority = priority;
    }
}

This will pop elements in order 6, 5, 3, 4, which is clearly not correct.

I have partially debugged it and I think the map, heap and qp are updated correctly and remain consistent, but the actual algorithm for removing element from heap is not correct. It seems that the remove() function assumes that heapify() will perform up-heapify or down-heapify as needed, but it in fact only performs down-heapify. So in the case when up-heapify should have happened, the "heap" vector is no longer an actual heap.
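To see why both directions matter, here is a minimal Vec-based max-heap sketch (illustrative only, not this crate's code): after swapping the last element into the removed slot, the moved element may need to travel either up or down.

```rust
// Minimal max-heap on a Vec<i32>; illustrative only, not the crate's code.
fn sift_down(h: &mut Vec<i32>, mut i: usize) {
    loop {
        let (l, r) = (2 * i + 1, 2 * i + 2);
        let mut largest = i;
        if l < h.len() && h[l] > h[largest] { largest = l; }
        if r < h.len() && h[r] > h[largest] { largest = r; }
        if largest == i { break; }
        h.swap(i, largest);
        i = largest;
    }
}

fn sift_up(h: &mut Vec<i32>, mut i: usize) {
    while i > 0 && h[i] > h[(i - 1) / 2] {
        h.swap(i, (i - 1) / 2);
        i = (i - 1) / 2;
    }
}

fn remove_at(h: &mut Vec<i32>, i: usize) {
    let last = h.len() - 1;
    h.swap(i, last);
    h.pop();
    if i < h.len() {
        // the element moved into slot i may violate the heap property
        // in either direction, so both sifts must be attempted
        sift_down(h, i);
        sift_up(h, i);
    }
}

fn main() {
    // A valid max-heap; removing index 3 moves 35 into a slot whose
    // parent (10) is smaller, so only sift-up can restore the heap.
    let mut h = vec![50, 10, 40, 5, 6, 30, 35];
    remove_at(&mut h, 3);
    assert_eq!(h, vec![50, 35, 40, 10, 6, 30]);
    for i in 1..h.len() {
        assert!(h[(i - 1) / 2] >= h[i]); // heap property restored
    }
}
```

With a down-heapify-only removal, 35 would be left below its smaller parent 10, which matches the broken pop order reported above.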

ahash < 0.8.7 broken on nightly

Per tkaitchuck/aHash#200

The dep chain I see is

ahash v0.7.7
└── hashbrown v0.12.3
    └── indexmap v1.9.3
        └── priority-queue v1.4.0
            └── ...

ahash through indexmap are using ^0.8 (I think) on their latest releases, but for indexmap that must be 2.x which isn't supported here yet.

For more context, I think I need nightly to do wasm builds (to change panic behavior).

Feature Suggestion: DoublePriorityQueue with multiple priorities

Hey, I'm really appreciating the DoublePriorityQueue: it lets me easily remove entries from one end to avoid overfilling a constant size, kind of like a work-queue LRU cache.

Since the heap is internally just a heap of element indices, I've been pondering having a priority queue with two (or more) unrelated priority measures, stored in two different index heaps. That would add a constant factor to the O(log n) guarantees (based on arbitrary remove), but it'd unlock the ability for me to have two different strategies for the threads selecting work items from the queue. In particular, I'm performing a graph search where every work item has a few variables that I combine unscientifically into a "score", and rather than perfect an equation, I'd like to just use one variable independently as its own measure in one thread (without doing a linear map().min() over the whole heap).

Double license under MPL 2 and LGPL 3

Dear contributors,
Because of how Cargo works, it is currently easier to statically link external crates.

The GNU LGPL license, however, imposes dynamic linking.

I am opening this issue to get your consent about double licensing this crate under both the LGPL and the MPL.

The Mozilla Public License is still a weak copyleft license, similar to the LGPL, but allowing static linking to software with a different license.

@lnicola @tsionyx @BourgondAries @ocecaco @petr-tik

DoublePriorityQueue panicked at 'called `Option::unwrap()` on a `None` value'

I have now convinced myself that this is a bug in the library, as I'm commonly hitting this unwrap() that's inside priority_queue code. My use case is a Dijkstra implementation that uses

while let Some((current_node, current_distance)) = queue.pop_min() {

to get the next node out of a DoublePriorityQueue<usize, u32>. For several of my data sets and queue sizes of >~ 1 million elements, I'm hitting a None unwrap here:

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', priority-queue-1.2.0\src\double_priority_queue\mod.rs:611:22
stack backtrace:
   0: std::panicking::begin_panic_handler
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\/library\std\src\panicking.rs:495
   1: core::panicking::panic_fmt
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\/library\core\src\panicking.rs:107
   2: core::panicking::panic
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\/library\core\src\panicking.rs:50
   3: enum$<core::option::Option<tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >, 1, 18446744073709551615, Some>::unwrap<tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >  
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\option.rs:746
   4: priority_queue::double_priority_queue::impl$8::heapify_min::closure$1<usize,u32,std::collections::hash::map::RandomState>
             at priority-queue-1.2.0\src\double_priority_queue\mod.rs:607
   5: core::iter::adapters::map::map_fold::closure$0<tuple$<ref$<usize>,ref$<usize> >,tuple$<ref$<usize>,ref$<usize>,ref$<u32> >,tuple$<ref$<u32>,tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >,priority_queue::double_priority_queue::impl$8::heapify_min::closure$1,
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\adapters\map.rs:84
   6: core::iter::adapters::filter_map::filter_map_fold::closure$0<ref$<usize>,tuple$<ref$<usize>,ref$<usize> >,tuple$<ref$<u32>,tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >,priority_queue::double_priority_queue::impl$8::heapify_min::closure$0,core::iter::adapt
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\adapters\filter_map.rs:37
   7: core::iter::traits::iterator::Iterator::fold<core::slice::iter::Iter<usize>,tuple$<ref$<u32>,tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >,core::iter::adapters::filter_map::filter_map_fold::closure$0>
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\traits\iterator.rs:2171
   8: core::iter::adapters::filter_map::impl$2::fold<tuple$<ref$<usize>,ref$<usize> >,core::slice::iter::Iter<usize>,priority_queue::double_priority_queue::impl$8::heapify_min::closure$0,tuple$<ref$<u32>,tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >,core::iter::
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\adapters\filter_map.rs:85
   9: core::iter::adapters::map::impl$2::fold<tuple$<ref$<usize>,ref$<usize>,ref$<u32> >,core::iter::adapters::filter_map::FilterMap<core::slice::iter::Iter<usize>,priority_queue::double_priority_queue::impl$8::heapify_min::closure$0>,priority_queue::double_pri
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\adapters\map.rs:124
  10: core::iter::adapters::map::impl$2::fold<tuple$<ref$<u32>,tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >,core::iter::adapters::map::Map<core::iter::adapters::filter_map::FilterMap<core::slice::iter::Iter<usize>,priority_queue::double_priority_queue::impl$8::
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\adapters\map.rs:124
  11: core::iter::traits::iterator::Iterator::reduce<core::iter::adapters::map::Map<core::iter::adapters::map::Map<core::iter::adapters::filter_map::FilterMap<core::slice::iter::Iter<usize>,priority_queue::double_priority_queue::impl$8::heapify_min::closure$0>,
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\traits\iterator.rs:2216
  12: core::iter::traits::iterator::Iterator::min_by<core::iter::adapters::map::Map<core::iter::adapters::map::Map<core::iter::adapters::filter_map::FilterMap<core::slice::iter::Iter<usize>,priority_queue::double_priority_queue::impl$8::heapify_min::closure$0>,
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\traits\iterator.rs:2791
  13: core::iter::traits::iterator::Iterator::min_by_key<core::iter::adapters::map::Map<core::iter::adapters::filter_map::FilterMap<core::slice::iter::Iter<usize>,priority_queue::double_priority_queue::impl$8::heapify_min::closure$0>,priority_queue::double_prio
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\traits\iterator.rs:2763
  14: priority_queue::double_priority_queue::DoublePriorityQueue<usize,u32,std::collections::hash::map::RandomState>::heapify_min<usize,u32,std::collections::hash::map::RandomState>
             at priority-queue-1.2.0\src\double_priority_queue\mod.rs:596
  15: priority_queue::double_priority_queue::DoublePriorityQueue<usize,u32,std::collections::hash::map::RandomState>::heapify<usize,u32,std::collections::hash::map::RandomState>
             at priority-queue-1.2.0\src\double_priority_queue\mod.rs:585
  16: priority_queue::double_priority_queue::impl$5::pop_min::closure$0<usize,u32,std::collections::hash::map::RandomState>
             at priority-queue-1.2.0\src\double_priority_queue\mod.rs:290
  17: enum$<core::option::Option<usize> >::and_then<usize,tuple$<usize,u32>,priority_queue::double_priority_queue::impl$5::pop_min::closure$0>
             at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\option.rs:1055
  18: priority_queue::double_priority_queue::DoublePriorityQueue<usize,u32,std::collections::hash::map::RandomState>::pop_min<usize,u32,std::collections::hash::map::RandomState>
             at priority-queue-1.2.0\src\double_priority_queue\mod.rs:288
  19: [above caller code]

I'm too unfamiliar with the library internals to track the cause down. However, this is an example of a queue state that seems to cause it:

{
    2: (
        13,
        12183,
    ),
    4: (
        572405,
        12885,
    ),
    5: (
        5146,
        12947,
    ),
    1: (
        572406,
        12473,
    ),
    0: (
        570354,
        12327,
    ),
    12: (
        572397,
        12302,
    ),
    9: (
        569824,
        12186,
    ),
    7: (
        572381,
        12569,
    ),
    10: (
        572342,
        12712,
    ),
    11: (
        119993,
        12350,
    ),
    3: (
        572398,
        12495,
    ),
    6: (
        572382,
        12496,
    ),
    8: (
        572408,
        12675,
    ),
}

Happens on Rust Nightly 2021-10-27 and Rust Stable 1.56.0 x86_64-pc-windows-msvc.

update latest version in Readme

hey, this is a great library - thanks for making it so user-friendly. I especially like the concise way of updating an already existing item's priority with push.

I noticed that the latest version on crates is 0.5.2, but the readme suggests adding

priority_queue = "0.4"

to Cargo.toml. Is it ok, if I send a PR to update it?

Thanks again for the library

Export IndexMap Equivalent trait

IndexMap's Equivalent trait allows a special implementation of equality, used inside the hashmap but different from ==.
IndexMap provides a default implementation of Equivalent for types that implement Eq, which simply applies ==.

[Question] How do I change the priority order?

I'm working with a simulated time (an i32 for now), and my priority is to get the smaller time first and the bigger time last.
Is there a way to make a smaller time always have a higher priority than a bigger time?

Doesn't work for the custom `Ord` instance

Hello.

I'm using priority-queue = "0.5.1"

I'm wrapping i32 and implementing Ord for it so that the smallest number should be considered as highest priority:

extern crate priority_queue;
use priority_queue::PriorityQueue;
use std::cmp::Ordering;

#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Debug)]
struct Q(i32);

impl Ord for Q {
    fn cmp(&self, other: &Self) -> Ordering {
        self.0.cmp(&other.0).reverse()  // <<<<<<<<< Please, note reverse() here
    }
}

fn main() {

    let mut pq = PriorityQueue::new();
    pq.push("Apples", Q(5));
    pq.push("Bananas", Q(8));
    pq.push("Strawberries", Q(23));

    assert_eq!(pq.peek(), Some((&"Apples", &Q(5))));
}

However, "Strawberries" is obtained instead:

thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `Some(("Strawberries", Q(23)))`,
 right: `Some(("Apples", Q(5)))`', src/main.rs:44:5

Is this expected behavior, or am I misunderstanding the library?
Thanks.
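For what it's worth, one plausible cause (an observation, not taken from this thread): the code derives PartialOrd field-wise while reversing only Ord, so the two orderings disagree. Implementing PartialOrd in terms of the custom cmp keeps them consistent:

```rust
use std::cmp::Ordering;

#[derive(Copy, Clone, PartialEq, Eq, Debug)]
struct Q(i32);

impl Ord for Q {
    fn cmp(&self, other: &Self) -> Ordering {
        self.0.cmp(&other.0).reverse() // smaller number = higher priority
    }
}

// Keep PartialOrd consistent with the reversed Ord instead of deriving it.
impl PartialOrd for Q {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    assert!(Q(5) > Q(23)); // reversed order holds for both traits now
}
```

If a data structure compares elements via PartialOrd anywhere, a derived, non-reversed PartialOrd would silently restore the original ordering.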

Difference between `push` and `change_priority` unclear

Thanks for this lovely library :-)

From the documentation, it seems that both methods take O(log N) time, update the priority of an existing item, and insert the item if it's not present. Their implementations look different, so I assume they do not do exactly the same thing. What's the difference?

Remove Sized bound for some functions

There are some methods with the type parameter Q such that I: Borrow<Q>. Some of these methods (change_priority(_by), get_priority) don't require Q to be sized, but all others do. Has it a particular reason?

If not, it would be nice if those functions could be updated to Q: ?Sized.

iter_over(priority) and iter_under(priority)

Similar to #35 , the reasoning is the same. Return an iter with elements above or under a priority.

Consider the following, which is illegal by rust's standards:

while let Some((obj, priority)) = queue.peek_max() {
    if priority > 5 {
        queue.remove(obj);
    } else {
        break; // all remaining elements will have priority <= 5
    }
}

This is because you cannot mutably modify an object when you have immutable borrows (obj, in this case)

Thus, the equivalent workaround in the same time complexity would be:

let mut keys = Vec::new();
while let Some((obj, priority)) = queue.peek_max() {
    if priority > 5 {
        keys.push(obj.clone());
    } else {
        break;
    }
}
for key in keys {
    queue.remove(&key);
}

But in fact, this would block infinitely, since the result of peek_max() never changes.

This means the best you can do is O(n) because you are required to iterate over every element.

This could be done by sorting O(logn) and returning a slice over the sorted structure.

Proposed example would be:

for (obj, priority) in queue.iter_over(5) {
    println!("{}", obj);
}

changing priorities via `iter_mut` doesn't work as I expected

hello,

Thank you for your work on this library!

Based on the documentation, I had thought that changing the priorities of items in the course of iterating over them via iter_mut would update their priority rankings in the queue. However, unless I am missing something, that does not seem to be the case.

Here is a simple example of what I mean in the form of a test:

#[cfg(test)]
mod tests {
    use std::time::*;

    #[test]
    fn verify_how_priority_queue_works() {
        let mut queue: priority_queue::PriorityQueue<&'static str, Duration> = Default::default();

        queue.push("a", Duration::new(0, 0));
        queue.push("b", Duration::new(0, 1));
        assert_eq!(queue.peek().unwrap().0, &"b");
        for (k, v) in queue.iter_mut() {
            if k == &"a" {
                *v = Duration::new(0, 2);
            }
        }
        assert_eq!(queue.peek().unwrap().0, &"a"); // this assertion fails
    }
}

the peek call instead returns the "b" item.

if this test is modified to use queue.change_priority("a", Duration::new(0, 2)) rather than assigning to the priority as it is above, the return value of peek is "a" as expected.

The docs say:

It's not an error, instead, to modify the priorities, because the heap will be rebuilt once the IterMut goes out of scope.

Based on this, I had thought that changing the value of the priority when iterating over the queue via iter_mut would work the same as change_priority.

Could you tell me if this is a bug, or a misunderstanding of how PriorityQueue works?

what would be a good way to update all of the priorities at once?

thank you for your help on this!

clear_over(priority) and clear_under(priority)

Implement two methods : clear_under() and clear_over() that accept a priority as an argument.

This will essentially truncate the size of the data structure, marking all items over/under this priority as trash/nonexistent.

This would be an O(log(n)) operation, as it would take at most log(n) time to sort the structure and thus find the upper and lower bounds of this priority. It would be O(1) to mark the rest of the dataset as trash (e.g. an iter would return a slice from the min element to this new priority).

This would be a great convenience, as it would give this data structure the ability to greatly speed up pruning operations. Currently pruning is O(n) at best, as you can only remove one element at a time. (It is possible to prune the entire set in O(1) using clear, but that is beside my point.)

Consider changing license to MIT or Apache 2.0

From the rust API guidelines:

The software produced by the Rust project is dual-licensed, under either the 
MIT or Apache 2.0 licenses.  Crates that simply need the maximum 
compatibility with the Rust ecosystem are recommended to do the same, in 
the manner described herein...

I don't know if you ever plan on submitting priority-queue to become part of the standard library, but the license could become a stumbling block, especially if you get other contributors (at the moment it appears that no-one else has contributed, so you own the copyright and can change the license at will).

Implement a DoublePriorityQueue

Hey, I am having a similar issue but I would like to be able to pop the items with the highest or lowest priority from the queue. AFAIK, currently, you can create a PriorityQueue that can do only one of the two but not both at the same time. Maybe it's worth implementing a pop_back() as well?

Originally posted by @imgeorgiev in #30 (comment)

increase-key API

Hi! Your crate and its API are really nice and I enjoyed using it for AoC today, but I lost a lot of time because I misunderstood how the push function works when the element is already present in the queue. Of course, it does what it says on the tin, which was almost but not quite what I needed.

An increase_key method would be really useful for some algorithms. push overwrites existing nodes even if the new priority is lower. change_priority_by can be used to emulate it, but it's not obvious:

let mut found = false;
frontier.change_priority_by(&node, |prev_dist| {
    found = true;
    // poorly named because it's a max-heap, please ignore that
    dist.max(prev_dist)
});
if !found {
    frontier.push(node, dist);
}

May I suggest adding something like an increase_priority_or_push method that works the same way?

PS: Sorry if this is already possible and I missed it.
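Per the changelog above, 0.7.0 later added push_increase and push_decrease. The requested semantics can be modeled independently of the crate; a toy sketch over a plain HashMap (push_increase here is a hypothetical helper, not the crate's implementation):

```rust
use std::collections::HashMap;

// Toy model of "increase-key or push": insert if absent, otherwise keep
// the maximum of the old and new priority.
fn push_increase(map: &mut HashMap<&'static str, i32>, item: &'static str, prio: i32) {
    map.entry(item)
        .and_modify(|p| *p = (*p).max(prio)) // existing entry: raise only
        .or_insert(prio);                    // absent: plain insert
}

fn main() {
    let mut m = HashMap::new();
    push_increase(&mut m, "node", 10);
    push_increase(&mut m, "node", 5); // lower priority: ignored
    assert_eq!(m["node"], 10);
    push_increase(&mut m, "node", 20); // higher priority: updated
    assert_eq!(m["node"], 20);
}
```

This is exactly the behavior the change_priority_by workaround above emulates, minus the heap bookkeeping.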

Functional incorrectness: criterion used for item identity is hash value instead `Eq`-identity

A very surprising defect. PriorityQueue ignores the Eq-implementation of the item type. Whether or not an item is already in the queue should not be determined by an implementation detail leaked from the underlying data structure, namely its hash value. Item identity should be determined by its Eq-identity.

This defect also decreases performance in cases where comparing identities costs less than comparing hash values, which I presume is always.

remove elements at arbitrary positions

I know that this might be out of scope for a traditional priority queue, but what do you think of supporting removing elements at arbitrary positions?

I'd like to track cached chunks in a priority queue (least priority -> first to be dropped when a different chunk is no longer needed and moved to the cache), and when a chunk is moved into the "real" data structure it needs to be removed from the cache priority queue to prevent it from being removed too early.

Feature request: `drain_filter` method

This method should take an FnMut(T) -> bool predicate, and then iterate over items in the priority queue in sorted order, popping and returning those items for which the predicate returns true.

Inspired by Vec::drain_filter(), which is currently marked unstable.

Since it iterates over all elements of the queue, it would obviously be O(n) complexity.

Dropping the iterator object should stop the iteration and preserve the remaining elements of the queue.

An equivalent workaround would be popping all elements of the queue and pushing them back if they don't match the predicate. However, I feel this is prone to being incorrectly implemented (one could accidentally end up in an infinite loop) and/or inefficient in general: to avoid a loop, one may push back the unmatched items after the fact, but then collecting those items somewhere, say in a Vec, is a waste of memory, while creating a second queue to push items into and swapping between queues might not be ergonomic.
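The pop-and-repush workaround described above can be sketched against the standard BinaryHeap (illustrative only; the name drain_filter_sorted is made up here):

```rust
use std::collections::BinaryHeap;

// Drain items matching `pred` in sorted (max-first) order; keep the rest.
// Collecting the kept items in a Vec is the memory cost noted above.
fn drain_filter_sorted<F: FnMut(&i32) -> bool>(
    heap: &mut BinaryHeap<i32>,
    mut pred: F,
) -> Vec<i32> {
    let mut taken = Vec::new();
    let mut kept = Vec::new();
    while let Some(x) = heap.pop() {
        if pred(&x) { taken.push(x) } else { kept.push(x) }
    }
    heap.extend(kept); // push the unmatched items back
    taken
}

fn main() {
    let mut h: BinaryHeap<i32> = [1, 4, 2, 9, 6].into_iter().collect();
    let evens = drain_filter_sorted(&mut h, |x| x % 2 == 0);
    assert_eq!(evens, vec![6, 4, 2]); // max-first order
    assert_eq!(h.into_sorted_vec(), vec![1, 9]); // odd items survive
}
```

A native drain_filter could avoid the intermediate Vec and stop early when the iterator is dropped, as the request describes.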

Add a pop_lowest_priority API

It would be helpful to add a pop API that extracts the lowest priority from the queue and not only the greatest one (as documented here)

Use complex data structure as Priority

I have a use case where I want to track an estimated rate as the priority for items tracked in a PriorityQueue. Let's assume this structure looks as follows.

pub struct Rate {
  estimated_rate: f64,
  last_update_time: f64
}

The following code snippet summarizes my problem:
// assume we have functions that update the estimated rate based on the last update time and the current time
let mut pq = PriorityQueue::<String, Rate>::new();

let mut rate = Rate::new();
let curr_time = 123.0;
rate.record(100.0, curr_time);
pq.push("key1".to_string(), rate);

// now if I want to update the priority of this item:
let entry = pq.get_mut("key1");
// the priority returned is a non-mutable reference,
// which means I can't call `entry.1.record(200.0, curr_time);`

How can I best model this with the PriorityQueue data structure?
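Two observations, hedged since they are not from the thread: the priority type must implement Ord, which f64 does not, so the struct needs a hand-written total order; and per the 1.0.0 changelog above, change_priority_by takes an FnOnce(&mut P), which is the route for mutating a priority in place rather than get_mut. A sketch of the total order (field names copied from the issue):

```rust
use std::cmp::Ordering;

// Illustrative only: give the float-based priority a total order so it
// can be used where Ord is required.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy)]
struct Rate {
    estimated_rate: f64,
    last_update_time: f64,
}

impl Ord for Rate {
    fn cmp(&self, other: &Self) -> Ordering {
        // f64::total_cmp provides a total order on floats (NaN included)
        self.estimated_rate.total_cmp(&other.estimated_rate)
    }
}

impl PartialOrd for Rate {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl PartialEq for Rate {
    fn eq(&self, other: &Self) -> bool {
        self.cmp(other) == Ordering::Equal
    }
}

impl Eq for Rate {}

fn main() {
    let a = Rate { estimated_rate: 1.5, last_update_time: 0.0 };
    let b = Rate { estimated_rate: 2.5, last_update_time: 10.0 };
    assert!(b > a); // ordered by estimated_rate only
}
```

Eq and PartialEq are implemented in terms of the same comparison so that equality and ordering stay consistent.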

Batch-changing priorities

I find myself in a use case in which I need to change many (possibly all) priorities at once. So, maybe a

fn change_all_priorities<F>(&mut self, priority_setter: F) where F: FnMut(&I, P) -> P {
   ...
}

Add option for duplicate/unhashable Items

Sometimes you want map values which are unhashable/incomparable, and sometimes you want to allow duplicates. For a project I was working on (since refactored), I needed to change the values in a way that altered their hashes.

All of this can be made to work with a struct like

#[derive(Debug)]
pub struct Node<T> {
    nonce: u64,
    pub value: T,
}

where the implementation ensures that nonce is always unique (a simple counter would do, along with a manual implementation of Clone) and only nonce participates in hashing and equality (which is why I don't derive Hash, PartialEq, or even Eq, since the derivable one requires T: Eq even though this isn't needed); they can be implemented manually or with the derivative crate.

The harder part is making an ergonomic API to handle this, and I don't have enough of a handle on ergonomic Rust to design one at the moment (which is why this is an issue and not a PR).
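For what it's worth, here is a minimal std-only sketch of that Node, with an AtomicU64 counter as one possible nonce source (everything beyond the fields named above is my own assumption):

```rust
use std::hash::{Hash, Hasher};
use std::sync::atomic::{AtomicU64, Ordering};

// One simple way to guarantee unique nonces: a global counter.
static NEXT_NONCE: AtomicU64 = AtomicU64::new(0);

#[derive(Debug)]
pub struct Node<T> {
    nonce: u64,
    pub value: T,
}

impl<T> Node<T> {
    pub fn new(value: T) -> Self {
        Node { nonce: NEXT_NONCE.fetch_add(1, Ordering::Relaxed), value }
    }
}

// Only `nonce` takes part in hashing and equality, so `T` may be
// unhashable/incomparable, may contain duplicates, and may be mutated
// freely after insertion without invalidating the map.
impl<T> Hash for Node<T> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.nonce.hash(state);
    }
}

impl<T> PartialEq for Node<T> {
    fn eq(&self, other: &Self) -> bool {
        self.nonce == other.nonce
    }
}

// No `T: Eq` bound needed, since equality never looks at `value`.
impl<T> Eq for Node<T> {}

fn main() {
    let a = Node::new(vec![1.0_f64]); // Vec<f64> is neither Hash nor Eq
    let b = Node::new(vec![1.0_f64]);
    assert_ne!(a, b); // equal values, distinct nonces
    println!("ok");
}
```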

Bug in DoublePriorityQueue.peek_min()

After thousands of loops using the change_priority() functions, I ran into a case where peek_min() does not return the item with the minimum priority.

Has anyone come across this situation?

serde failing with non-primitive items

I've been testing priority-queue for use in a home-grown simulator framework. One of the things I need is for the queue and its contents to be serializable via serde. When I test priority-queue using primitive types for the items, this works, but with non-primitive item types I get the error thread 'main' panicked at 'called Result::unwrap() on an Err value: ErrorImpl { code: KeyMustBeAString, line: 0, column: 0 }', src/libcore/result.rs:906:4. The following code should illustrate the problem; look in the area under the FAILING block.

#[macro_use]
extern crate serde_derive;

extern crate serde;
extern crate serde_json;
extern crate uuid;
extern crate priority_queue;

use priority_queue::PriorityQueue;
use std::cmp::{Ord, PartialOrd, Ordering};
use std::default::Default;
use std::time::Duration;
use uuid::Uuid;

// Abusing Duration as a mutable std::time::Instant
type ActivationDate = Duration;

/// Events are compared by EventComparables instances.
///
/// EventComparables instances are similar to instances of time, but with the
/// extra wrinkle of having a Uuid instance.  When EventComparables instances
/// are compared, they are first compared by their activation date, with the
/// date that occurs earlier being greater than a date that occurs later. This
/// ordering exists because of how priority_queue::PriorityQueue works; it is
/// naturally a max priority queue; using this ordering makes it a min
/// priority queue. EventComparables go one step beyond using time as the key
/// though; they  also have uuid::Uuid instances which are used to guarantee
/// that every EventComparables is unique.  This ensures that if a set of
/// events all  occur at the same time, they will still be executed in a
/// deterministic order, every single time the queue's contents are executed.
/// This is  critical for deterministic simulators.
#[derive(Copy, Clone, Eq, PartialEq, Hash, Serialize, Deserialize, Debug)]
#[serde(default)]
#[serde(deny_unknown_fields)]
struct EventComparables {
    /// This is when the event will be fired.
    activation_date: ActivationDate,

    /// This is a unique ID.  Except for ensuring that different events are
    /// guaranteed to compare as being different, it has no purpose.
    id: Uuid
}

/// Default events activate at time (0, 0)
///
/// All default events fire at time (0, 0), but every single one has a unique
/// id.  This ensures that regardless of the number of default events you
/// create, they will always execute in the same order.
impl Default for EventComparables {
    fn default() -> Self {
        EventComparables {activation_date: ActivationDate::new(0,0), id: Uuid::new_v4()}
    }
}

/// The priority queue depends on `Ord`. Explicitly implement the trait so the
/// queue becomes a min-heap instead of a max-heap.
impl Ord for EventComparables {
    fn cmp(&self, other: &Self) -> Ordering {

        // Notice that we flip the ordering on activation dates. In case of a
        // tie we compare by id; this step is necessary to keep the
        // implementations of `PartialEq` and `Ord` consistent.

        other.activation_date.cmp(&self.activation_date)
            .then_with(|| self.id.cmp(&other.id))
    }
}

// `PartialOrd` needs to be implemented as well.
impl PartialOrd for EventComparables {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

/// A fake event to fire on some date.
///
/// This is a fake event that I'll fire when the corresponding
/// EventComparables instance comes up.  The contents are immaterial; I'm just
/// using it for testing
#[derive(Copy, Clone, Eq, PartialEq, Hash, Serialize, Deserialize, Debug)]
#[serde(default)]
#[serde(deny_unknown_fields)]
struct ConcreteEvent1 {
    a: i32,
    b: i64
}

impl Default for ConcreteEvent1 {
    fn default() -> Self {
        ConcreteEvent1 {a: 0, b: 0}
    }
}

//////////////////////////////////////////////////////////////////////////////
// Test 1
//////////////////////////////////////////////////////////////////////////////

fn test1() {
    println!("test1()");

    type PqType = PriorityQueue<i32, i32>;

    let mut pq: PqType = PriorityQueue::new();
    pq.push(0, 0);
    pq.push(1, 1);

    let serialized = serde_json::to_string(&pq).unwrap();
    println!("serialized = {:?}", serialized);
    let deserialized: PqType = serde_json::from_str(&serialized).unwrap();
    println!("deserialized = {:?}", deserialized);
}

//////////////////////////////////////////////////////////////////////////////
// Test 2
//////////////////////////////////////////////////////////////////////////////

fn test2() {
    println!("\n\ntest2()");

    type PqType = PriorityQueue<i32, EventComparables>;

    let mut pq: PqType = PriorityQueue::new();
    pq.push(0, Default::default()); // Uuids will be different
    pq.push(1, Default::default());

    let serialized = serde_json::to_string(&pq).unwrap();
    println!("serialized = {:?}", serialized);
    let deserialized: PqType = serde_json::from_str(&serialized).unwrap();
    println!("deserialized = {:?}", deserialized);
}

//////////////////////////////////////////////////////////////////////////////
// Test 3
//////////////////////////////////////////////////////////////////////////////

fn test3() {
    println!("\n\ntest3()");

    // Create some concrete events and comparables, and test to see that they
    // can be serialized/deserialized

    let ce1 = ConcreteEvent1{a: 12, b: 45};
    let ec1 = EventComparables {activation_date: ActivationDate::new(0, 1), id: Uuid::new_v4()};

    let ce2 = ConcreteEvent1{a: 37, b: 123};
    let ec2 = EventComparables {activation_date: ActivationDate::new(0, 1), id: Uuid::new_v4()};

    let serialized = serde_json::to_string(&ce1).unwrap();
    println!("serialized = {:?}", serialized);
    let deserialized: ConcreteEvent1 = serde_json::from_str(&serialized).unwrap();
    println!("deserialized = {:?}", deserialized);
    assert_eq!(ce1, deserialized);

    let serialized = serde_json::to_string(&ec1).unwrap();
    println!("serialized = {:?}", serialized);
    let deserialized: EventComparables = serde_json::from_str(&serialized).unwrap();
    println!("deserialized = {:?}", deserialized);
    assert_eq!(ec1, deserialized);

    let serialized = serde_json::to_string(&ce2).unwrap();
    println!("serialized = {:?}", serialized);
    let deserialized: ConcreteEvent1 = serde_json::from_str(&serialized).unwrap();
    println!("deserialized = {:?}", deserialized);
    assert_eq!(ce2, deserialized);

    let serialized = serde_json::to_string(&ec2).unwrap();
    println!("serialized = {:?}", serialized);
    let deserialized: EventComparables = serde_json::from_str(&serialized).unwrap();
    println!("deserialized = {:?}", deserialized);
    assert_eq!(ec2, deserialized);

/****************************************************************************
 ****************************************************************************
 ****************************************************************************

           ########    ###    #### ##       #### ##    ##  ######
           ##         ## ##    ##  ##        ##  ###   ## ##    ##
           ##        ##   ##   ##  ##        ##  ####  ## ##
           ######   ##     ##  ##  ##        ##  ## ## ## ##   ####
           ##       #########  ##  ##        ##  ##  #### ##    ##
           ##       ##     ##  ##  ##        ##  ##   ### ##    ##
           ##       ##     ## #### ######## #### ##    ##  ######

 ****************************************************************************
 ****************************************************************************
 ****************************************************************************/

    if true {
        type PqType = PriorityQueue<ConcreteEvent1, EventComparables>;

        let mut pq: PqType = PriorityQueue::new();
        pq.push(ce1, ec1);
        pq.push(ce2, ec2);

        let serialized = serde_json::to_string(&pq).unwrap();
        println!("serialized = {:?}", serialized);
        let deserialized: PqType = serde_json::from_str(&serialized).unwrap();
        println!("deserialized = {:?}", deserialized);
    }
}

//////////////////////////////////////////////////////////////////////////////
// main()
//////////////////////////////////////////////////////////////////////////////

fn main() {
    test1();
    test2();
    test3();
}

"error: could not compile 'priority-queue'" on nightly rust build

Failed on

cargo +nightly --version
cargo 1.70.0-nightly (4a3c588b1 2023-03-14)

Succeeds on

cargo --version
cargo 1.68.0 (115f34552 2023-02-26)

Toml

[dependencies]
priority-queue = "1.3.1"

Full stack trace:

   Compiling autocfg v1.1.0
   Compiling hashbrown v0.12.3
   Compiling indexmap v1.9.2
   Compiling priority-queue v1.3.1
error: internal compiler error: /rustc/ab654863c3d50482f260cf862647f1fe0ff5e010/compiler/rustc_infer/src/infer/outlives/env.rs:145:26: add_outlives_bounds: unexpected regions

thread 'rustc' panicked at 'Box<dyn Any>', /rustc/ab654863c3d50482f260cf862647f1fe0ff5e010/compiler/rustc_errors/src/lib.rs:1644:9
stack backtrace:
   0:        0x1008c85dc - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hb05446754748fc5b
   1:        0x10091b3d4 - core::fmt::write::hc4e7e367f0127987
   2:        0x1008be224 - std::io::Write::write_fmt::h97f8cfc5288a8cc9
   3:        0x1008c83f0 - std::sys_common::backtrace::print::hc3e3719568510aef
   4:        0x1008cae78 - std::panicking::default_hook::{{closure}}::h722f6a6f2478bc6f
   5:        0x1008cabd0 - std::panicking::default_hook::he6f5a145cc5d36de
   6:        0x108a22fd8 - rustc_driver_impl[630fbe36553cc500]::DEFAULT_HOOK::{closure#0}::{closure#0}
   7:        0x1008cb570 - std::panicking::rust_panic_with_hook::h1f3530038ad6b26c
   8:        0x10c6f5400 - std[22480f07b972d5ac]::panicking::begin_panic::<rustc_errors[8b68f8991908f601]::ExplicitBug>::{closure#0}
   9:        0x10c6f45a0 - std[22480f07b972d5ac]::sys_common::backtrace::__rust_end_short_backtrace::<std[22480f07b972d5ac]::panicking::begin_panic<rustc_errors[8b68f8991908f601]::ExplicitBug>::{closure#0}, !>
  10:        0x10cccf210 - std[22480f07b972d5ac]::panicking::begin_panic::<rustc_errors[8b68f8991908f601]::ExplicitBug>
  11:        0x10c79fb6c - std[22480f07b972d5ac]::panic::panic_any::<rustc_errors[8b68f8991908f601]::ExplicitBug>
  12:        0x10c79d808 - <rustc_errors[8b68f8991908f601]::HandlerInner>::bug::<&alloc[a89ba42cafc32c62]::string::String>
  13:        0x10c79d468 - <rustc_errors[8b68f8991908f601]::Handler>::bug::<&alloc[a89ba42cafc32c62]::string::String>
  14:        0x10c782bd8 - rustc_middle[bc5773c6b9e81b3f]::util::bug::opt_span_bug_fmt::<rustc_span[5b11aacf1cbc7475]::span_encoding::Span>::{closure#0}
  15:        0x10c77e970 - rustc_middle[bc5773c6b9e81b3f]::ty::context::tls::with_opt::<rustc_middle[bc5773c6b9e81b3f]::util::bug::opt_span_bug_fmt<rustc_span[5b11aacf1cbc7475]::span_encoding::Span>::{closure#0}, !>::{closure#0}
  16:        0x10c77e93c - rustc_middle[bc5773c6b9e81b3f]::ty::context::tls::with_context_opt::<rustc_middle[bc5773c6b9e81b3f]::ty::context::tls::with_opt<rustc_middle[bc5773c6b9e81b3f]::util::bug::opt_span_bug_fmt<rustc_span[5b11aacf1cbc7475]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
  17:        0x10c782b48 - rustc_middle[bc5773c6b9e81b3f]::util::bug::opt_span_bug_fmt::<rustc_span[5b11aacf1cbc7475]::span_encoding::Span>
  18:        0x10ccd15a4 - rustc_middle[bc5773c6b9e81b3f]::util::bug::bug_fmt
  19:        0x10aeeacb4 - <rustc_infer[841bee24244470c]::infer::outlives::env::OutlivesEnvironment>::with_bounds::<core[3a8cfdcf785a1bad]::iter::adapters::flatten::Flatten<core[3a8cfdcf785a1bad]::iter::adapters::map::Map<indexmap[96e5a7ae5db70d66]::set::IntoIter<rustc_middle[bc5773c6b9e81b3f]::ty::Ty>, <rustc_infer[841bee24244470c]::infer::InferCtxt as rustc_trait_selection[20f6487b014fd0d6]::traits::outlives_bounds::InferCtxtExt>::implied_bounds_tys::{closure#0}>>>
  20:        0x10aebe5fc - rustc_hir_analysis[d0da5362ade2d80a]::check::compare_impl_item::compare_method_predicate_entailment
  21:        0x10aeb5464 - rustc_hir_analysis[d0da5362ade2d80a]::check::compare_impl_item::compare_impl_method
  22:        0x10ae4a270 - rustc_hir_analysis[d0da5362ade2d80a]::check::check::check_impl_items_against_trait
  23:        0x10ae45348 - rustc_hir_analysis[d0da5362ade2d80a]::check::check::check_item_type
  24:        0x10ae4efb8 - rustc_hir_analysis[d0da5362ade2d80a]::check::check::check_mod_item_types
  25:        0x10ba4cb78 - rustc_query_system[726ed3e9f9824747]::query::plumbing::try_execute_query::<rustc_query_impl[7243f20952ea59a4]::queries::check_mod_item_types, rustc_query_impl[7243f20952ea59a4]::plumbing::QueryCtxt>
  26:        0x10bb85b20 - <rustc_query_impl[7243f20952ea59a4]::Queries as rustc_middle[bc5773c6b9e81b3f]::ty::query::QueryEngine>::check_mod_item_types
  27:        0x10ae8de6c - <rustc_middle[bc5773c6b9e81b3f]::hir::map::Map>::for_each_module::<rustc_hir_analysis[d0da5362ade2d80a]::check_crate::{closure#6}::{closure#0}>
  28:        0x10aeaa404 - <rustc_session[ee16895196d31674]::session::Session>::time::<(), rustc_hir_analysis[d0da5362ade2d80a]::check_crate::{closure#6}>
  29:        0x10af0796c - rustc_hir_analysis[d0da5362ade2d80a]::check_crate
  30:        0x108b320ac - rustc_interface[f60559f741f10411]::passes::analysis
  31:        0x10bac9400 - rustc_query_system[726ed3e9f9824747]::query::plumbing::try_execute_query::<rustc_query_impl[7243f20952ea59a4]::queries::analysis, rustc_query_impl[7243f20952ea59a4]::plumbing::QueryCtxt>
  32:        0x10bb7d1d4 - <rustc_query_impl[7243f20952ea59a4]::Queries as rustc_middle[bc5773c6b9e81b3f]::ty::query::QueryEngine>::analysis
  33:        0x108a6fb68 - <rustc_middle[bc5773c6b9e81b3f]::ty::context::GlobalCtxt>::enter::<rustc_driver_impl[630fbe36553cc500]::run_compiler::{closure#1}::{closure#2}::{closure#4}, core[3a8cfdcf785a1bad]::result::Result<(), rustc_span[5b11aacf1cbc7475]::ErrorGuaranteed>>
  34:        0x108a6f54c - <rustc_interface[f60559f741f10411]::interface::Compiler>::enter::<rustc_driver_impl[630fbe36553cc500]::run_compiler::{closure#1}::{closure#2}, core[3a8cfdcf785a1bad]::result::Result<core[3a8cfdcf785a1bad]::option::Option<rustc_interface[f60559f741f10411]::queries::Linker>, rustc_span[5b11aacf1cbc7475]::ErrorGuaranteed>>
  35:        0x108a6a330 - rustc_span[5b11aacf1cbc7475]::with_source_map::<core[3a8cfdcf785a1bad]::result::Result<(), rustc_span[5b11aacf1cbc7475]::ErrorGuaranteed>, rustc_interface[f60559f741f10411]::interface::run_compiler<core[3a8cfdcf785a1bad]::result::Result<(), rustc_span[5b11aacf1cbc7475]::ErrorGuaranteed>, rustc_driver_impl[630fbe36553cc500]::run_compiler::{closure#1}>::{closure#0}::{closure#0}>
  36:        0x108a32c9c - std[22480f07b972d5ac]::sys_common::backtrace::__rust_begin_short_backtrace::<rustc_interface[f60559f741f10411]::util::run_in_thread_pool_with_globals<rustc_interface[f60559f741f10411]::interface::run_compiler<core[3a8cfdcf785a1bad]::result::Result<(), rustc_span[5b11aacf1cbc7475]::ErrorGuaranteed>, rustc_driver_impl[630fbe36553cc500]::run_compiler::{closure#1}>::{closure#0}, core[3a8cfdcf785a1bad]::result::Result<(), rustc_span[5b11aacf1cbc7475]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[3a8cfdcf785a1bad]::result::Result<(), rustc_span[5b11aacf1cbc7475]::ErrorGuaranteed>>
  37:        0x108a33b3c - <<std[22480f07b972d5ac]::thread::Builder>::spawn_unchecked_<rustc_interface[f60559f741f10411]::util::run_in_thread_pool_with_globals<rustc_interface[f60559f741f10411]::interface::run_compiler<core[3a8cfdcf785a1bad]::result::Result<(), rustc_span[5b11aacf1cbc7475]::ErrorGuaranteed>, rustc_driver_impl[630fbe36553cc500]::run_compiler::{closure#1}>::{closure#0}, core[3a8cfdcf785a1bad]::result::Result<(), rustc_span[5b11aacf1cbc7475]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[3a8cfdcf785a1bad]::result::Result<(), rustc_span[5b11aacf1cbc7475]::ErrorGuaranteed>>::{closure#1} as core[3a8cfdcf785a1bad]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
  38:        0x1008d3b98 - std::sys::unix::thread::Thread::new::thread_start::hc3f867787b50646f
  39:        0x1a05fa06c - __pthread_deallocate

note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md

note: rustc 1.70.0-nightly (ab654863c 2023-03-15) running on aarch64-apple-darwin

note: compiler flags: --crate-type lib -C embed-bitcode=no -C split-debuginfo=unpacked -C debuginfo=2

note: some of the compiler flags provided by cargo are hidden

query stack during panic:
#0 [check_mod_item_types] checking item types in module `double_priority_queue::iterators`
#1 [analysis] running analysis passes on this crate
end of query stack
error: could not compile `priority-queue` (lib)

Re-pushing with the highest priority does not always put the item in the top

The following example re-pushes "d" with the highest priority among all already-pushed items. We expect to see "d" as the first item from .pop(), but we see "a" instead, and "d" comes second.

extern crate priority_queue;

use priority_queue::PriorityQueue;

fn main() {
    let mut pq : PriorityQueue<&'static str, usize> = PriorityQueue::new();

    pq.push("a", 9);
    pq.push("b", 8);
    pq.push("c", 7);
    pq.push("d", 6);

    // Re-pushing with the highest priority
    pq.push("d", 20);

    let mut prev_opt = None;
    for (item, x) in pq.into_sorted_iter() {
        println!("{:?} {:?}", item, x);
        if let Some(prev) = prev_opt {
            if prev < x {
                println!("WTF");
            }
        }
        prev_opt = Some(x);
    }
}

Output:

"a" 9
"d" 20
WTF
"b" 8
"c" 7

Tested along with:

[dependencies]
priority-queue = { git = "https://github.com/garro95/priority-queue" }

Float as Priority

I'm writing Dijkstra's shortest-path algorithm, which uses a priority queue. Some distances within the graph, represented by f32 or f64, need to be used as the priority. However, this is not allowed because floats implement only PartialOrd. Are there any workarounds?
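A common workaround is a newtype that supplies the missing total order. Since Rust 1.62, f64::total_cmp gives one directly (the ordered-float crate packages the same idea). A sketch, demonstrated on std's BinaryHeap but equally usable as a PriorityQueue priority:

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

/// Newtype giving `f64` the total order that `Ord` requires.
/// `f64::total_cmp` orders all floats, including NaN, consistently.
#[derive(Debug, Clone, Copy, PartialEq)]
struct TotalF64(f64);

impl Eq for TotalF64 {}

impl Ord for TotalF64 {
    fn cmp(&self, other: &Self) -> Ordering {
        self.0.total_cmp(&other.0)
    }
}

impl PartialOrd for TotalF64 {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    // Works anywhere an `Ord` priority is required; shown on std's BinaryHeap.
    let mut heap = BinaryHeap::new();
    heap.push((TotalF64(3.5), "c"));
    heap.push((TotalF64(1.2), "a"));
    heap.push((TotalF64(7.9), "b"));
    assert_eq!(heap.pop(), Some((TotalF64(7.9), "b")));
    println!("ok");
}
```

For Dijkstra, additionally wrap the priority in std::cmp::Reverse so the smallest distance is popped first.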

Function to remove item with least priority

Something like PriorityQueue::pop(), but removing the item with the lowest priority instead of the highest. I was looking at the code and thought this could be accomplished by using VecDeque instead of Vec (since VecDeque allows popping from both sides), but I don't know enough to know where to even start implementing it.
Perhaps creating a new type called PriorityDeque would be a good idea.
For the time being, I will just set my priorities to usize::MAX - priority to take advantage of .pop(), and then convert back to recover my initial priority values.
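Two notes. First, the crate now ships a DoublePriorityQueue (its iterators module even appears in the ICE report earlier on this page); check the docs of your version for pop_min/pop_max. Second, instead of the usize::MAX - priority arithmetic, std::cmp::Reverse flips the order losslessly. The trick, shown std-only on BinaryHeap, applies unchanged to PriorityQueue priorities (see the README example above):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // Wrapping priorities in `Reverse` turns a max-heap into a min-heap,
    // so `pop` yields the lowest original priority first.
    let mut heap = BinaryHeap::new();
    heap.push((Reverse(9_usize), "a"));
    heap.push((Reverse(7), "c"));
    heap.push((Reverse(8), "b"));

    assert_eq!(heap.pop(), Some((Reverse(7), "c")));
    assert_eq!(heap.pop(), Some((Reverse(8), "b")));
    println!("ok");
}
```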

Documentation does not specify behavior for equal priorities

How will items be sorted and popped if their priorities are equal? Is it insertion order? An arbitrary order? Is the order deterministic/repeatable? Does it depend on the hasher? It would be nice if the documentation specified what minimum guarantee I can assume. At the moment I must assume the worst case of a completely random order, since the documentation doesn't indicate otherwise.
