gh-132732: Automatically constant evaluate pure operations #132733


Status: Open. Wants to merge 54 commits into base branch main.

Conversation

@Fidget-Spinner (Member) commented Apr 19, 2025

python-cla-bot (bot) commented Apr 19, 2025

All commit authors signed the Contributor License Agreement.

CLA signed

@brandtbucher (Member) left a comment


This is really neat!

Other than two opcodes I found that shouldn't be marked pure, I just have one thought:

Rather than rewriting the bodies like this to use the symbols-manipulating functions (which seems error-prone), would we be able to just use stackrefs to do this?

For example, _BINARY_OP_ADD_INT is defined like this:

PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
// ...
res = PyStackRef_FromPyObjectSteal(res_o);

Rather than rewriting uses of these functions, could it be easier to just do something like this, since we're guaranteed not to escape?

if (sym_is_const(ctx, stack_pointer[-2]) && sym_is_const(ctx, stack_pointer[-1])) {
    // Generated code to turn constant symbols into stackrefs:
    _PyStackRef left = PyStackRef_FromPyObjectBorrow(sym_get_const(ctx, stack_pointer[-2]));
    _PyStackRef right = PyStackRef_FromPyObjectBorrow(sym_get_const(ctx, stack_pointer[-1]));
    _PyStackRef res;
    // Now the actual body, same as it appears in executor_cases.c.h:
    PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
    PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
    // ...
    res = PyStackRef_FromPyObjectSteal(res_o);
    // Generated code to turn stackrefs into constant symbols:
    stack_pointer[-1] = sym_new_const(ctx, PyStackRef_AsPyObjectSteal(res));
}

I'm not too familiar with the design of the cases generator though, so maybe this is way harder or something. Either way, I'm excited to see this get in!

@Fidget-Spinner (Member, Author)

Rather than rewriting uses of these functions, could it be easier to just do something like this, since we're guaranteed not to escape?

Seems feasible. I could try to rewrite all occurrences of the variable with a stackref-producing const one. Let me try that.

@Fidget-Spinner (Member, Author)

I've verified no refleak on test_capi.test_opt locally apart from #132731 which is pre-existing.

@markshannon (Member)

There's a lot going on in this PR, probably too much for one PR.

Could we start with a PR to fix up the pure annotations so that they are on the correct instructions and maybe add the pure_guard annotation that Brandt suggested?

@markshannon (Member)

Could we have the default code generator generate a function for the body of the pure instruction and then call that from the three interpreters?

@brandtbucher (Member)

Could we have the default code generator generate a function for the body of the pure instruction and then call that from the three interpreters?

Hm, I think I’d prefer not to. Sounds like it could hurt performance, especially for the JIT (where things can’t inline).

@brandtbucher (Member)

I think a good progression would be:

  • Implement the pure attribute, and the optimizer changes. Remove the pure attributes where they don’t belong (so nothing breaks) and leave the existing ones as proof that the implementation works. (This PR)
  • Audit the existing non-pure bytecodes and add pure where it makes sense. (Follow-up PR)
  • Implement the pure_guard attribute, and annotate any bytecodes that can use it. (Follow-up PR)

@Fidget-Spinner (Member, Author)

Could we have the default code generator generate a function for the body of the pure instruction and then call that from the three interpreters?

Hm, I think I’d prefer not to. Sounds like it could hurt performance, especially for the JIT (where things can’t inline).

I thought about this and I think we can inline if we autogenerate a header file and include that directly. But then we're at the mercy of the compiler in both the normal interpreter and the JIT deciding to inline or not to inline the body again. Which I truly do not want.

@Fidget-Spinner (Member, Author)

@brandtbucher @markshannon what can I do to get this PR moving?

@tomasr8 if you'd like to review, here's a summary of the PR:

  1. If a bytecode operation is pure (no side effects), we can mark it as pure in bytecodes.c.
  2. In the optimizer, we automatically generate the body that evaluates the symbolic constants by copying the bytecodes.c definition into the optimizer's C code. Of course, we check that the inputs are constants first.
  3. All changes to the cases generator are for the second point.

@tomasr8 (Member) commented May 8, 2025

Thanks for the ping! I actually wanted to try/review this PR, I was just very busy this week with work :/ I'll have a look this weekend :)

@tomasr8 (Member) left a comment

Only had time to skim the PR, I'll do a more thorough review this weekend :)

@Fidget-Spinner (Member, Author)

I've added the required functions to the allowlist, so I removed the is_abstract workaround.

@Fidget-Spinner (Member, Author)

It seems like the issue of escaping is causing problems here. For the code generator, "escaping" means "able to run the GC", which shouldn't happen in the abstract interpreter. So either we are not correctly marking functions as non-escaping, or we are calling functions that do escape (which we shouldn't).

In the example you give, _PyLong_Multiply(sym_get_const(x), sym_get_const(y)) neither _PyLong_Multiply nor sym_get_const escape. They just need to be added to the whitelist.

@markshannon it seems this is wrong. _PyLong_Multiply/Add/Subtract can trigger the GC. The failing tests currently are evidence of that. The problem is with the SIGCHECK macro in longobject.c. https://github.com/python/cpython/blob/main/Objects/longobject.c#L114

@Fidget-Spinner (Member, Author)

I have a fix in a separate PR for the longobject GC issues.

@markshannon (Member)

I think we only specialize for, and are interested in compact ints (or tagged ints in the future), so maybe replace _PyLong_Add with _PyCompactLong_Add?
It would help with much the same issue I'm having with excessive escapes in TOS caching.

@Fidget-Spinner (Member, Author)

For now, I'm avoiding changing the int operations in this PR. I will add back constant evaluation for them in the future once we fix this in either bytecodes.c or the long object.

@Fidget-Spinner (Member, Author)

I've implemented code for deopt_if, error_if and addressed all review comments. Is there anything left?

@markshannon (Member) left a comment

From a maintenance perspective, I'm still not happy about this approach, due to the amount (and complexity) of code this adds to the code generators. It also adds a lot of bulk to the generated code as all the BINARY_OP... variants get a copy of the code in bytecodes.c as well as the code in optimizer_bytecodes.c (although that's much less an issue than the maintenance one).

How about specifying the function to do the evaluation in the macro?
In other words, instead of REPLACE_OPCODE_IF_EVALUATES_PURE(left, right) we would write REPLACE_OPCODE_IF_EVALUATES_PURE(left, right, PyNumber_Add).
Then we wouldn't need to parse the original opcode, just wrap the call to PyNumber_Add.

@Fidget-Spinner (Member, Author)

How about specifying the function to do the evaluation in the macro? In other words, instead of REPLACE_OPCODE_IF_EVALUATES_PURE(left, right) we would write REPLACE_OPCODE_IF_EVALUATES_PURE(left, right, PyNumber_Add). Then we wouldn't need to parse the original opcode, just wrap the call to PyNumber_Add

That sadly wouldn't work for the next thing we plan to add this to: _COMPARE_OP_X, because that has more than one function-like thing.

From a maintenance perspective, I'm still not happy about this approach, due to the amount (and complexity) of code this adds to the code generators.

This has already paid for itself from a maintenance perspective:

  1. We've caught 3 bugs (one in each BINARY_OP_X_INT) where we previously constant-evaluated in the JIT even though the operations were escaping.
  2. We cut down the massive amount of code needed to do constant evaluation in the JIT.

If you're worried about the bulk of the generated code, I can open an issue for someone to make a follow up PR to generate function templates from the bytecodes.c file, similar to what we already do for the tail calling interpreter. This would allow us to just call the function. That should go into a follow-up commit though, because it requires changes that affect more than just the optimizer (it will also touch executor_cases and such).

@markshannon (Member)

That sadly wouldn't work for the next thing we plan to add this to: _COMPARE_OP_X ...

REPLACE_OPCODE_IF_EVALUATES_PURE(left, right, compare_ops[oparg>>5]) should work. Just make a little table of comparison functions.

@markshannon (Member)

Does this work for BINARY_OP? I note that you haven't included it.

@Fidget-Spinner (Member, Author)

That sadly wouldn't work for the next thing we plan to add this to: _COMPARE_OP_X ...

REPLACE_OPCODE_IF_EVALUATES_PURE(left, right, compare_ops[oparg>>5]) should work. Just make a little table of comparison functions.

Doesn't that defeat the purpose of this DSL addition, because we'd have to modify bytecodes.c just to get optimizer_bytecodes.c to optimize? The whole point is to not require the user to modify bytecodes.c or copy anything from there.

@Fidget-Spinner (Member, Author)

Does this work for BINARY_OP? I note that you haven't included it.

It should, let me try

@markshannon (Member)

That sadly wouldn't work for the next thing we plan to add this to: _COMPARE_OP_X ...

REPLACE_OPCODE_IF_EVALUATES_PURE(left, right, compare_ops[oparg>>5]) should work. Just make a little table of comparison functions.

Doesn't that defeat the purpose of this DSL addition, because we'd have to modify bytecodes.c just to get optimizer_bytecodes.c to optimize? The whole point is to not require the user to modify bytecodes.c or copy anything from there.

No need to modify bytecodes.c; just put the helpers in optimizer_analysis.c. They can be static.

I know that doing this all automatically seems purer and more elegant, and it probably is, but the code generator is a bit of a pain point in terms of maintenance. If we can keep it simple with a bit of extra work elsewhere it is usually worth it.

One other thing. Ultimately we will want to move this functionality out of the optimizer_bytecodes.c into the mythical later partial evaluation pass. So keeping things simple should help with that.

@markshannon (Member)

In case it wasn't clear. The helper functions would be:
PyObject *compare_op_0(PyObject *a, PyObject *b) { return PyObject_RichCompare(a, b, 0); }
and so on.

@Fidget-Spinner (Member, Author)

In case it wasn't clear. The helper functions would be: PyObject *compare_op_0(PyObject *a, PyObject *b) { return PyObject_RichCompare(a, b, 0); } and so on.

Is this the same as this earlier suggestion?

If you're worried about the bulk of the generated code, I can open an issue for someone to make a follow up PR to generate function templates from the bytecodes.c file, similar to what we already do for the tail calling interpreter. This would allow us to just call the function. That should go into a follow-up commit though, because it requires changes that affect more than just the optimizer (it will also touch executor_cases and such).
