Chandler,
The only thing David made clear that wasn’t already clear
is that he believes UB to be “comparatively rare”, which agrees
with what Hal already said, namely that he does not expect deleting
UB to benefit, for example, the SPEC benchmarks.
Given that it is “comparatively rare”, why all the effort to delete it?
And why make deleting it the default rather than warning about it?
There seems to be some confusion/misunderstanding here. My best
understanding is that when David said this:
"The cases where the compiler can statically prove that undefined
behaviour is present are comparatively rare."
he was describing a contrast with the optimizations described just
before that.
It's something like this:
UB-based optimizations don't prove UB is present - they optimize on the
assumption that it is not present, relying on invariants that are unproven
by the compiler but assumed to be known to the developer.
Think of a simple case like array bounds - the compiler emits an
unconditional load from memory because it assumes the developer correctly
validated the bounds, or otherwise constructed the code so that out-of-bounds
indexes never reach that load. This is the common case: UB is assumed
not to happen, and the compiler optimizes on that assumption.
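Something like this hypothetical sketch (my own example, with made-up
function names, not code from the thread):

  #include <cstddef>

  // The callee does an unconditional load: it never tries to prove i < n,
  // it assumes the caller has ensured it, because an out-of-bounds access
  // would be undefined behavior.
  static int element(const int *a, std::size_t i) {
    return a[i];                  // no bounds check emitted here
  }

  int lookup(const int *a, std::size_t n, std::size_t i) {
    if (i >= n)                   // the validation the compiler assumes
      return -1;                  // happened somewhere like this
    return element(a, i);
  }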
What is less common is for the compiler to be able to prove, in reasonable
time, that UB /does/ happen. In many cases this would require complex
interprocedural analysis - the array is defined in one function, maybe with
a complex dynamic bound, then passed to another function and indexed using
a non-trivial dynamic expression... statically proving that to be true or
false is complex/expensive and so basically not done by any compiler. So
the cases that are caught by the compiler are relatively trivial ("oh, you
declared a const null pointer value, then dereferenced it within the same
function", etc.) and therefore don't happen very often - they're also
fairly obvious to developers.
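For comparison, a minimal (hypothetical) sketch of that kind of trivially
catchable case:

  int obviously_broken() {
    int *const p = nullptr;       // constant null pointer...
    return *p;                    // ...dereferenced in the same function:
                                  // UB the compiler can prove statically
  }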
Does that help explain the difference/distinction being drawn here?
Dave,
perhaps you missed these parts of the discussion.
Here is the definition, acknowledged by Hal, of what we’re doing:
1. Sometimes there are abstraction penalties in C++ code
2. That can be optimized away after template instantiation, function
inlining, etc.
3. When they, for example, exhibit this pattern:
if (A) {
  stuff;
} else {
  other stuff including “undefined behavior”;
}
4. Where the compiler assumes “undefined behavior” doesn’t actually happen,
because in the C language standard it is the user's responsibility to
avoid it
5. Therefore in this example the compiler can a) delete the else-clause,
b) delete the if-cond, and c) assume A is true and propagate that
information (a concrete sketch of this pattern follows below)
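As a hypothetical illustration of steps 3-5 (my example, not taken from
the thread), after inlining the code might look something like this:

  int scaled(int *p, int factor) {
    if (p != nullptr) {           // A
      return *p * factor;         // stuff
    } else {
      return *p;                  // other stuff including UB: null deref
    }
  }

  // Because the else-clause would be undefined behavior, the compiler may
  // a) delete the else-clause, b) delete the if-cond, and c) assume
  // p != nullptr and propagate that fact, leaving just `return *p * factor;`.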
And here’s the optimization that, according to Sean, we’re using to delete
UB:
[ … ]
In other words, if we can prove "when program statement A executes then
program statement B is guaranteed to execute and have undefined behavior"
then we can assume that program statement A is never executed.
In particular, deleting program statement A is a correct transformation.
Deleting program statement B itself is a special case of this (when A = B).
And yes, this does include everything up to and including `main`,
intraprocedurally and interprocedurally.
[ … ]
-- Sean Silva
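To make that wording concrete, here is a hedged sketch of my own (not
code from the thread):

  static int null_count = 0;

  void record(int *p) {
    if (p == nullptr)
      ++null_count;               // statement A
    *p = 42;                      // statement B: guaranteed to execute after A,
                                  // and UB exactly when A executed (p was null)
  }

  // Since executing A guarantees that B then executes with undefined
  // behavior, the compiler may assume A never executes: both the counter
  // update and the null check can be deleted.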
This is entirely a separate issue from what Dan Gohman did to optimize
sign extension of i32 induction variables out of loops for LP64 target
machines, where the optimization is justified by “the assumption that UB
does not happen”, and no actual UB exists either statically or dynamically.
Sorry, the way I phrased this in terms of program statements may have made
this unclear, but this is precisely the particular case A=B that I
mentioned. In this case, A=B="the induction variable increment".
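For reference, a hypothetical sketch (mine) of the kind of loop being
discussed:

  void scale(double *a, int n, double k) {
    // On an LP64 target the index arithmetic is 64-bit while i is 32-bit.
    // Signed overflow of ++i would be UB (this increment is the A = B
    // statement above), so the compiler may assume i never wraps and keep
    // it in a 64-bit register instead of sign-extending it every iteration.
    for (int i = 0; i < n; ++i)
      a[i] *= k;
  }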