Description
Hi:
I tried to test BackgroundMattingV2's ONNX model on the TensorRT platform, but the function nvonnxparser::IParser::parse() returns failure, and TensorRT reports the errors below:
TensorRT_ERROR: Parameter check failed at: Layers.cpp::nvinfer1::TopKLayer::TopKLayer::3528, condition: k > 0 && k <= MAX_TOPK_K
TensorRT_INTERNAL_ERROR: Assertion failed: mParams.k > 0
C:\source\builder\Layers.cpp:3563
Aborting…
So, how can I fix these errors?
Environment
TensorRT Version : v7.2.3.4
GPU Type : RTX 2070
Nvidia Driver Version :
CUDA Version : 11.1
CUDNN Version : 8.1.1.3
Operating System + Version : Windows 10
Python Version (if applicable) :
TensorFlow Version (if applicable) :
PyTorch Version (if applicable) :
Baremetal or Container (if container which image + tag) :
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
Exact steps/commands to build your repro
Exact steps/commands to run your repro
Full traceback of errors encountered
NVES (July 10, 2021, 5:07pm):
Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:
1) Validate your model with the snippet below:
check_model.py
import sys
import onnx

filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
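If the checker raises no exception, the model itself is structurally valid ONNX. For example, assuming the model attached later in this thread:
python check_model.py onnx_mobilenetv2_hd.onnx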
2) Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, request you to share the trtexec --verbose log for further debugging.
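For example, assuming the attached model's filename (--onnx and --verbose are standard trtexec options):
trtexec --onnx=onnx_mobilenetv2_hd.onnx --verbose > trtexec_log.txt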
Thanks!
testBackgroundMattingV2.cpp (4.5 KB)
onnx_mobilenetv2_hd.onnx (19.1 MB)
trtexec_log.txt (279.7 KB)
OK, I have uploaded the ONNX model, my C++ code that loads the model, and the trtexec report; please check them, thanks!
By the way, the TensorRT and cuDNN versions I filled in earlier were wrong; I have corrected them.
@pango99 ,
Please refer to the doc below and make sure the K value you're using is greater than 0 and no larger than 3840 (MAX_TOPK_K).
We also recommend trying the latest TensorRT version. Please let us know if you still face this issue.
Thank you.
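To see which K your exported graph actually carries, here is a minimal sketch (assuming k is stored as an initializer, as PyTorch exports usually do; the filename is the attachment from this thread):
topk_check.py
import onnx
from onnx import numpy_helper

model = onnx.load("onnx_mobilenetv2_hd.onnx")
inits = {i.name: i for i in model.graph.initializer}
for node in model.graph.node:
    if node.op_type == "TopK":
        k_name = node.input[1]  # since opset 10, k is the second input tensor
        if k_name in inits:
            print(node.name, "k =", int(numpy_helper.to_array(inits[k_name])))
        else:
            print(node.name, "k is computed at runtime from", k_name)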
Hi,
I exported their original .pth model to ONNX format with PyTorch v1.9.0 using several different configs, but regardless of which exported model I use, the parse() function always reports the errors below:
Input filename: G:\AI\PretrainedModel\BackgroundMattingV2\Onnx\resnet101.onnx
ONNX IR version: 0.0.6
Opset version: 12
Producer name: pytorch
Producer version: 1.9
Domain:
Model version: 0
Doc string:
TensorRT_WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
TensorRT_WARNING: onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
TensorRT_ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin ScatterND version 1
ERROR: builtin_op_importers.cpp:3773 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Assertion failed: false, file G:\VC15\QuickBroadCast\test-app\testBackgroundMattingV2\testBackgroundMattingV2.cpp, line 115
I also downloaded the newest v8.0.1 TensorRT and tested with it, but the parse() function reports the errors below:
TensorRT_WARNING: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
TensorRT_ERROR: [graph.cpp::nvinfer1::builder::Node::computeInputExecutionUses::519] Error Code 9: Internal Error (Floor_15: IUnaryLayer cannot be used to compute a shape tensor)
TensorRT_ERROR: ModelImporter.cpp:720: While parsing node number 28 [Resize -> "412"]:
TensorRT_ERROR: ModelImporter.cpp:721: --- Begin node ---
TensorRT_ERROR: ModelImporter.cpp:722: input: "src"
input: "403"
input: "411"
input: "410"
output: "412"
name: "Resize_28"
op_type: "Resize"
attribute {
  name: "coordinate_transformation_mode"
  s: "pytorch_half_pixel"
  type: STRING
}
attribute {
  name: "cubic_coeff_a"
  f: -0.75
  type: FLOAT
}
attribute {
  name: "mode"
  s: "linear"
  type: STRING
}
attribute {
  name: "nearest_mode"
  s: "floor"
  type: STRING
}
TensorRT_ERROR: ModelImporter.cpp:723: --- End node ---
TensorRT_ERROR: ModelImporter.cpp:726: ERROR: ModelImporter.cpp:179 In function parseGraph:
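This Floor in a shape computation typically comes from a Resize whose output size is derived from the input shape at export time (e.g. F.interpolate with a computed scale_factor, which is plausibly what BackgroundMattingV2's downsample_ratio produces). A minimal sketch of the usual workaround, under that assumption: export with a fixed target size so the Resize inputs become constants.
resize_fix_sketch.py
import torch
import torch.nn.functional as F

class Downsample(torch.nn.Module):
    def forward(self, x):
        # A computed scale_factor exports as a Shape/Mul/Floor chain that
        # TensorRT rejects; a fixed size exports as constant Resize inputs.
        return F.interpolate(x, size=(180, 320), mode="bilinear",
                             align_corners=False)

torch.onnx.export(Downsample(), torch.randn(1, 3, 720, 1280),
                  "resize_fixed.onnx", opset_version=12)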
Is it still possible to run this model on TensorRT?
Hi @pango99,
It looks like you're using an unsupported op in your model. Please refer to the docs below to check the operators supported by TensorRT.
These support matrices provide a look into the supported platforms, features, and hardware capabilities of the NVIDIA TensorRT 8.4.3 APIs, parsers, and layers.
# Supported ONNX Operators
TensorRT 8.4 supports operators up to Opset 17. Latest information of ONNX operators can be found [here](https://github.com/onnx/onnx/blob/master/docs/Operators.md)
TensorRT supports the following ONNX data types: DOUBLE, FLOAT32, FLOAT16, INT8, and BOOL
> Note: There is limited support for INT32, INT64, and DOUBLE types. TensorRT will attempt to cast down INT64 to INT32 and DOUBLE down to FLOAT, clamping values to `+-INT_MAX` or `+-FLT_MAX` if necessary.
See below for the support matrix of ONNX operators in ONNX-TensorRT.
## Operator Support Matrix
| Operator | Supported | Supported Types | Restrictions |
|---------------------------|------------|-----------------|------------------------------------------------------------------------------------------------------------------------|
| Abs | Y | FP32, FP16, INT32 |
| Acos | Y | FP32, FP16 |
| Acosh | Y | FP32, FP16 |
| Add | Y | FP32, FP16, INT32 |
This file has been truncated.
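To compare your model against this matrix, a minimal sketch that prints the distinct op types in the exported graph (the filename is the attachment from this thread):
list_ops.py
import onnx

model = onnx.load("onnx_mobilenetv2_hd.onnx")
for op in sorted({node.op_type for node in model.graph.node}):
    print(op)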
Please refer to the link below for a custom plugin implementation sample:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleOnnxMnistCoordConvAC
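Relatedly, for the earlier "getPluginCreator could not find plugin ScatterND" error, it can help to list which plugin creators your TensorRT install actually registers. A minimal sketch using the TensorRT Python bindings:
plugin_list.py
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")  # register TensorRT's built-in plugins
for creator in trt.get_plugin_registry().plugin_creator_list:
    print(creator.name, creator.plugin_version)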