Link MLIR code to a shared library and execute

I have an mlir::ModuleOp module in C++ code, and a function symbol it calls is implemented in a shared library. The module needs to be linked against that shared library to run. How can I run this module just-in-time, and how can I compile it to an executable?

The GPU integration tests might be a good example of this usage. They include calls to mgpuMemGetDeviceMemRef1dFloat from libmlir_rocm_runtime.so
and to printMemrefF32 from libmlir_runner_utils.so.

Basically, you need to declare the callees in the module so that the func.call ops can legalize:

func.func private @mgpuMemGetDeviceMemRef1dFloat(%ptr : memref<?xf32>) -> (memref<?xf32>)
func.func private @printMemrefF32(%ptr : memref<*xf32>)

and

You can either use mlir-cpu-runner with the --shared-libs command-line option, as in the test,
or
lower to the LLVM dialect → mlir-translate to obtain LLVM IR → a backend compiler, e.g. Clang, to compile and link.
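
For the mlir-cpu-runner path, the invocation looks roughly like this (the library paths are placeholders for your build tree, and a void-returning entry function named main is assumed):

mlir-cpu-runner module.mlir \
  --entry-point-result=void \
  --shared-libs=/path/to/libmlir_rocm_runtime.so \
  --shared-libs=/path/to/libmlir_runner_utils.so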

Thanks, but sorry, my question may not have been clear enough. My MLIR code is stored in a C++ object, mlir::ModuleOp module, and I want to run it without writing the module to a file.
I was using mlir::ExecutionEngine to run the module just-in-time, but I don't know how to run it with functions linked from shared libraries.

There is support for passing dynamic library paths to the execution engine. You can look at how the -shared-libs flag of the mlir-cpu-runner tool is processed. For example, here: https://github.com/llvm/llvm-project/blob/f827b953ab3206294530685b8b821f1a60f3836c/mlir/lib/ExecutionEngine/JitRunner.cpp#L196
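
In C++ that boils down to filling in ExecutionEngineOptions::sharedLibPaths before creating the engine. Below is a minimal sketch modeled on the toy tutorial and JitRunner; it assumes the module is already lowered to the LLVM dialect, and the library paths, the entry-point name main, and the optimization level are placeholders (exact API details may differ between LLVM versions):

#include "mlir/ExecutionEngine/ExecutionEngine.h"
#include "mlir/ExecutionEngine/OptUtils.h"
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Target/LLVMIR/Dialect/Builtin/BuiltinToLLVMIRTranslation.h"
#include "mlir/Target/LLVMIR/Dialect/LLVMIR/LLVMToLLVMIRTranslation.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Support/TargetSelect.h"

// Assumes `module` has already been lowered to the LLVM dialect and declares
// the external functions it calls (as in the snippets above).
llvm::Error runJit(mlir::ModuleOp module) {
  // The JIT executes on the host, so the native target must be initialized.
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();

  // Register the translations needed to convert the module to LLVM IR.
  mlir::registerBuiltinDialectTranslation(*module.getContext());
  mlir::registerLLVMDialectTranslation(*module.getContext());

  // Optional LLVM IR optimization pipeline applied before JIT compilation.
  auto optPipeline = mlir::makeOptimizingTransformer(
      /*optLevel=*/2, /*sizeLevel=*/0, /*targetMachine=*/nullptr);

  // Shared libraries that implement the external symbols; the engine loads
  // them and resolves the calls against them (paths are placeholders).
  llvm::SmallVector<llvm::StringRef, 2> libs = {
      "/path/to/libmlir_rocm_runtime.so",
      "/path/to/libmlir_runner_utils.so"};

  mlir::ExecutionEngineOptions options;
  options.transformer = optPipeline;
  options.sharedLibPaths = libs;

  auto maybeEngine = mlir::ExecutionEngine::create(module, options);
  if (!maybeEngine)
    return maybeEngine.takeError();

  // Invoke the entry point; here it is assumed to be `main` with no arguments.
  return (*maybeEngine)->invokePacked("main");
}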


Thanks! I’ll try it.