Representing the zero-point shift of a couple of TOSA ops is still being worked on – IIRC, it is blocked by some planned cleanup in linalg that just hasn't been done yet. @gysit @rsuderman
I believe there is a separate set of patterns for lowering rescale that you need to opt in to (the most efficient way to represent it is target specific).
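If memory serves, the opt-in looks roughly like this on the command line – a sketch only; the exact option name (`include-apply-rescale` on `tosa-to-arith`) and the placeholder `model.mlir` are from memory, so please check `mlir-opt --help` for your build:

```
# Sketch: lower TOSA to linalg/arith, opting in to the rescale lowering.
# The include-apply-rescale option name is from memory; confirm with `mlir-opt --help`.
mlir-opt model.mlir \
  -pass-pipeline="builtin.module(func.func(tosa-to-linalg-named, tosa-to-linalg, tosa-to-arith{include-apply-rescale=true}))"
```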
I am working on linalg extensions to improve support for scalar parameters. Together with other developments, this extension should enable support for more complex operations, so I expect some progress over the next few weeks.
I ended up here while looking for ideas on how to lower a quantized TFLite model. When I run `mlir-opt -pass-pipeline="builtin.module(func.func(tosa-to-tensor, tosa-to-linalg-named, tosa-to-linalg, tosa-to-arith))"` on an MLIR file generated from a quantized TFLite model, I get an error: `custom op 'tosa.conv2d' has no custom assembly form`.
Is this because I am not supposed to go through linalg for quantized TOSA ops? What is the alternative?
Seems you have an MLIR version mismatch – i.e., the textual format of the MLIR you are producing and the one you are consuming don't match, and the version skew is resulting in an error.
Trying with mlir-opt from the same version I used to generate the higher-level representation gives me a different error, `error: failed to legalize operation 'tosa.conv2d'`, and little more insight into what is wrong (it just prints the operation where it is failing).
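For anyone hitting the same wall, these flags gave me a bit more visibility into the failing conversion; `model.mlir` is a placeholder, and the `-debug-only` output is only available in builds with assertions enabled:

```
# Print the IR state after the failing pass, to see which ops were left un-legalized.
mlir-opt model.mlir \
  -pass-pipeline="builtin.module(func.func(tosa-to-tensor, tosa-to-linalg-named, tosa-to-linalg, tosa-to-arith))" \
  --mlir-print-ir-after-failure

# Trace the dialect-conversion framework itself (requires an assertions-enabled build).
mlir-opt model.mlir \
  -pass-pipeline="builtin.module(func.func(tosa-to-tensor, tosa-to-linalg-named, tosa-to-linalg, tosa-to-arith))" \
  --debug-only=dialect-conversion
```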
Could you get the stack trace for the assert failure? (You will probably want to build tf-opt with line numbers enabled. Also, it's a pity MLIR reproducers aren't enabled here, or this would be easier / could avoid Python.)