Tier 2 abstract interpreter for the optimizer #107557

Closed
@Fidget-Spinner

Description

Feature or enhancement

(Mega issue) Start doing optimization passes on tier 2 bytecode.

Pitch

Tier 2 optimizations operate on tier 2 bytecode, and abstract interpretation is a natural way to analyze and optimize it. We can generate the abstract interpreter from the bytecode DSL.
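
As a rough sketch of the shape this takes (all type names and uop ids below are placeholders, not CPython's actual tier 2 structures), the abstract interpreter walks a uop trace once and mirrors each uop's stack effect on a stack of symbolic values instead of real objects; in practice the per-uop cases would be generated from the bytecode DSL rather than written by hand:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical abstract value: either a statically known object or "unknown". */
typedef struct {
    bool is_known;
    void *obj;              /* PyObject * when is_known, NULL otherwise */
} abstract_value;

/* Hypothetical tier 2 uop encoding. */
typedef struct {
    int opcode;
    void *operand;          /* e.g. the constant pushed by a LOAD_CONST uop */
} uop_inst;

enum { UOP_LOAD_CONST = 1 };    /* placeholder opcode id */

static void
abstract_interpret(const uop_inst *trace, size_t len)
{
    abstract_value stack[256];
    size_t sp = 0;

    for (size_t i = 0; i < len && sp < 256; i++) {
        switch (trace[i].opcode) {
        case UOP_LOAD_CONST:
            /* The one uop whose result is known statically (for now). */
            stack[sp++] = (abstract_value){ true, trace[i].operand };
            break;
        default:
            /* Conservatively model every other uop as producing one unknown
             * value; a generated interpreter would apply the real stack
             * effect declared for the uop in the DSL. */
            stack[sp++] = (abstract_value){ false, NULL };
            break;
        }
        /* Optimization passes would inspect the abstract stack here. */
    }
}
```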

Previous discussion

See faster-cpython/ideas#611

Todo list:

  • Generate a barebones tier 2 abstract interpreter from the DSL.
  • Set up the abstract interpreter. This requires that we pass in the runtime state at the point of entering the executor.
  • Perform partial evaluation. For now, all static information comes from LOAD_CONST; assume everything else is dynamic (a rough sketch follows this list).
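
A minimal sketch of that LOAD_CONST-only partial evaluation, reusing the same hypothetical representation as above (reference counting and error handling are mostly elided): when every input of a uop is statically known, the result can be computed at optimization time and the producing uops rewritten into a single constant load.

```c
#include <Python.h>

/* Hypothetical trace representation, as in the sketch above. */
typedef struct {
    PyObject *const_obj;    /* non-NULL => statically known */
    int producer;           /* index of the uop that pushed this value */
} abstract_value;

typedef struct {
    int opcode;
    PyObject *operand;
} uop_inst;

enum { UOP_NOP = 0, UOP_LOAD_CONST = 1, UOP_BINARY_ADD = 2 };

static void
partial_evaluate(uop_inst *trace, int len)
{
    abstract_value stack[256];
    int sp = 0;

    for (int i = 0; i < len && sp < 255; i++) {
        switch (trace[i].opcode) {
        case UOP_LOAD_CONST:
            stack[sp++] = (abstract_value){ trace[i].operand, i };
            break;
        case UOP_BINARY_ADD: {
            if (sp < 2) {
                return;     /* malformed trace for this simplified sketch */
            }
            abstract_value rhs = stack[--sp];
            abstract_value lhs = stack[--sp];
            PyObject *folded = NULL;
            if (lhs.const_obj != NULL && rhs.const_obj != NULL) {
                folded = PyNumber_Add(lhs.const_obj, rhs.const_obj);
                if (folded == NULL) {
                    PyErr_Clear();      /* give up folding on error */
                }
            }
            if (folded != NULL) {
                /* Both inputs were static: do the add now and rewrite the
                 * three uops into a single constant load plus two no-ops. */
                trace[lhs.producer].opcode = UOP_NOP;
                trace[rhs.producer].opcode = UOP_NOP;
                trace[i].opcode = UOP_LOAD_CONST;
                trace[i].operand = folded;
            }
            stack[sp++] = (abstract_value){ folded, i };  /* NULL => dynamic */
            break;
        }
        default:
            /* Everything else is dynamic for now. */
            stack[sp++] = (abstract_value){ NULL, i };
            break;
        }
    }
}
```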

The initial partial evaluation will be weak because it does not have much static information to work with (we need watchers for methods, functions, globals, etc., to make them effectively static). However, setting up those watchers can happen in the region formation step, before partial evaluation, so someone else can pick it up as a parallel workstream.
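
For globals in particular, CPython's dict watcher API (PyDict_AddWatcher / PyDict_Watch) looks like the natural mechanism. A rough sketch of how region formation might register one; invalidate_executors_for is a hypothetical hook, not an existing function:

```c
#include <Python.h>

/* Hypothetical hook: discard tier 2 executors that specialized on `key`. */
static void
invalidate_executors_for(PyObject *dict, PyObject *key)
{
    (void)dict;
    (void)key;
    /* A real implementation would walk the executors that embedded a value
     * read from dict[key] and mark them as invalid. */
}

static int globals_watcher_id = -1;

/* Called by CPython whenever a watched dict changes. */
static int
on_globals_event(PyDict_WatchEvent event, PyObject *dict,
                 PyObject *key, PyObject *new_value)
{
    (void)event;
    (void)new_value;
    /* Any change to a watched globals dict may invalidate traces that
     * treated one of its entries as a constant. */
    invalidate_executors_for(dict, key);
    return 0;
}

/* Register the watcher once, then watch each module's globals dict whose
 * entries we want to treat as effectively static during region formation. */
static int
watch_module_globals(PyObject *module_dict)
{
    if (globals_watcher_id < 0) {
        globals_watcher_id = PyDict_AddWatcher(on_globals_event);
        if (globals_watcher_id < 0) {
            return -1;
        }
    }
    return PyDict_Watch(globals_watcher_id, module_dict);
}
```

The same pattern would presumably extend to PyFunction_AddWatcher and PyType_AddWatcher for treating functions and methods as effectively static.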

Labels

interpreter-core (Objects, Python, Grammar, and Parser dirs), performance (Performance or resource usage), type-feature (A feature request or enhancement)
