I’ve modified the SampleOnnxMNIST C++ project to load a custom model (and have changed the input and output node names), but the binary fails at ConstructNetwork throwing the following error
Here are the model file and my slightly modified sampleOnnxMNIST.cpp file.
I’ve tried the following two approaches to export the ONNX model from TensorFlow 2.5 using tf2onnx:
TF checkpoint → pb → onnx
TF checkpoint → SavedModel → onnx
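For reference, the two conversion paths above can be sketched with the tf2onnx command line. The paths, input/output node names, and the opset below are placeholders for illustration, not the exact values used by the poster:

```shell
# SavedModel -> ONNX (the ./saved_model directory is a placeholder)
python -m tf2onnx.convert --saved-model ./saved_model \
    --output model.onnx --opset 13

# Frozen graph (.pb) -> ONNX; input/output node names are assumptions
python -m tf2onnx.convert --graphdef model.pb \
    --inputs input:0 --outputs output:0 \
    --output model.onnx --opset 13
```

tf2onnx also accepts a shape override on the input spec, e.g. `--inputs input:0[1,28,28,1]`, which pins symbolic dimensions to concrete values at export time.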
Both approaches produce an ONNX model, but each one runs into the same error. I am not sure where I am going wrong; any help would be greatly appreciated.
Steps To Reproduce
This C++ file can replace the one at ‘TensorRT-8.0.1.6\samples\sampleOnnxMNIST’,
and the model.onnx file is expected to be in ‘TensorRT-8.0.1.6\data’.
The project was built with Visual Studio 2017. On a successful build, ‘sample_onnx_mnist.exe’ is generated in the bin folder and can be run from the command line (no arguments required).
We could reproduce the issue. It looks like the model has a dynamic channel size on its convolution layers, which TensorRT does not support yet. Please allow us some time to work on this.
Hi @ashwin.kannan3
The Google Drive links are no longer accessible, so we cannot reach your model and code.
Could you please grant us access so we can better help?