AFAIK, there only exists a function that appends a `BinaryHeap` to a `BinaryHeap`, but its implementation just appends a `Vec` to the `BinaryHeap`. If someone really wants to append a `Vec` to a `BinaryHeap`, the conversion from `Vec` to `BinaryHeap` costs some time, and that cost is a complete waste, since the source code is:
```rust
#[stable(feature = "binary_heap_append", since = "1.11.0")]
pub fn append(&mut self, other: &mut Self) {
    if self.len() < other.len() {
        swap(self, other);
    }

    let start = self.data.len();
    self.data.append(&mut other.data); // treats `&mut other.data` as a `Vec`, rather than a `BinaryHeap`
    self.rebuild_tail(start); // assumes self.data[0..start] is a valid heap
}
```
Things might be better if the function were changed to:
```rust
#[stable(feature = "binary_heap_append", since = "1.11.0")]
pub fn append(&mut self, other: &mut Self) {
    if self.len() < other.len() {
        swap(self, other);
    }

    self.append_vec(&mut other.data)
}

// Not tested; maybe something similar to what is shown below.
// `T` is defined in `impl<T: Ord> BinaryHeap<T>`, so there is no need to re-declare it here.
pub fn append_vec(&mut self, vec: &mut Vec<T>) {
    let start = self.data.len();
    self.data.append(vec);
    self.rebuild_tail(start);
}
```
Is this worth a PR, or just a Pre-RFC?
If you think this could be an acceptable change, please help me submit the PR and leave a message here.
For various reasons, I can hardly connect to GitHub, so I would appreciate it if someone could submit this possible PR on my behalf.
You’ll usually use the `.extend(…)` method to append from a different collection. To do it with a `&mut Vec<T>` while keeping its allocation and capacity intact, you can use `Vec::drain`. That is, your proposed `heap.append_vec(&mut vector)` would behave similarly to, or the same as, `heap.extend(vector.drain(..))`.
However, it looks like extending a `BinaryHeap` from a `Drain` falls back to just pushing each element, which may be much slower than rebuilding the heap if the `Drain` contains a lot of elements.
`BinaryHeap::append` already has heuristics for deciding how to do that; updating `extend` to do the same seems perfectly reasonable. (Though generally-descending vs. unknown-order input can affect things slightly.)
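Roughly, the heuristic compares the cost of rebuilding the whole heap, O(len1 + len2), against sifting each appended element, O(len2 · log(len1)). A sketch adapted from the standard library source (the exact formula and names may differ between Rust versions):

```rust
// floor(log2(x)) for x > 0, via leading zeros.
fn log2_fast(x: usize) -> usize {
    8 * std::mem::size_of::<usize>() - (x.leading_zeros() as usize) - 1
}

// `len1` is the larger (destination) heap after the swap in `append`.
// Rebuild from scratch when that is expected to be cheaper than sifting
// each of the `len2` new elements down a heap of size ~len1.
fn better_to_rebuild(len1: usize, len2: usize) -> bool {
    2 * (len1 + len2) < len2 * log2_fast(len1)
}

fn main() {
    // Merging two heaps of equal size: rebuilding wins.
    assert!(better_to_rebuild(10_000, 10_000));
    // Appending a single element to a large heap: pushing/sifting wins.
    assert!(!better_to_rebuild(10_000, 1));
}
```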
Because it's important to let libraries change internal implementation details without breaking compilation. It's unacceptable to say that you can't even rename private fields without a major version bump.
At this point, specializing would just enable a more limited version of the swap trick used in `append` (only for copying less memory; it wouldn't reduce the number of comparisons), and I'm not sure how important that is.
But we don't have that problem. The documentation for `BinaryHeap<T>` says:

> The behavior resulting from such a logic error is not specified, but will not result in undefined behavior. This could include panics, incorrect results, aborts, memory leaks, and non-termination.
Keeping heap ordering after the panic is not needed nor expected.
Doesn't that "panic" refer to what the `BinaryHeap` could do if you modify an element such that its ordering relative to the other elements changes? Although it's true that currently no `BinaryHeap` method keeps the heap ordering if the comparison function panics (and this includes the proposed solution).
The question is not just what belongs to the BinaryHeap.
If `self.data.extend(iter)` panics (just from the iterator panicking), the heap should keep its order, but in the previous implementation from @steffahn, `self.rebuild_tail(start);` could not be executed.
That is the guarantee I want to provide (although I hadn't realized that the compare function could also panic there…).
In Rust, mutable references are generally not considered unwind-safe, which means that using them anyway after catching a panic is always a logic error (unless the type in question specifies otherwise).