- The official website shows that NVIDIA Orin has a computing power of 254 TOPS. Is this 254 TOPS specifically the performance of the GPU Tensor Cores at INT8 sparse precision, or does it include non-Tensor-Core compute?
- Where can I find detailed performance metrics, including whether the stated computing power is Tensor Core throughput, whether it assumes sparse or dense precision, and the corresponding FP16, FP8, etc. figures?
Hi,
Here are some suggestions for the common issues:
1. Performance
Please run the commands below before benchmarking a deep learning use case:
$ sudo nvpmodel -m 0
$ sudo jetson_clocks
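To confirm the settings took effect, you can query the current power mode and clock configuration (a sketch to run on the Jetson itself; mode IDs vary by module):

```shell
# Query the active power mode (mode 0 is typically MAXN, but IDs vary by module)
sudo nvpmodel -q
# Show the current clock settings locked by jetson_clocks
sudo jetson_clocks --show
```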
2. Installation
Installation guide of deep learning frameworks on Jetson:
- TensorFlow: Installing TensorFlow for Jetson Platform - NVIDIA Docs
- PyTorch: Installing PyTorch for Jetson Platform - NVIDIA Docs
We also have containers that have frameworks preinstalled:
Data Science, Machine Learning, AI, HPC Containers | NVIDIA NGC
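For example, the l4t-ml container from NGC (which bundles TensorFlow and PyTorch for Jetson) can be started as sketched below; the tag shown is illustrative, so pick the tag on NGC that matches your JetPack/L4T release:

```shell
# Run the l4t-ml container with GPU access on a Jetson device.
# The tag (r36.2.0-py3) is an example -- match it to your L4T version.
sudo docker run -it --rm --runtime nvidia --network host \
    nvcr.io/nvidia/l4t-ml:r36.2.0-py3
```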
3. Tutorial
Getting-started deep learning tutorials:
- Jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson
- TensorRT sample: Jetson/L4T/TRT Customized Example - eLinux.org
4. Report issue
If these suggestions don’t help and you want to report an issue to us, please share the model, the command/steps, and the customized app (if any) so we can reproduce it locally.
Thanks!
Hi,
The link you shared is for DRIVE AGX, not Jetson.
The AGX Orin 64GB peak performance is 275 TOPS (INT8, sparse). You can find some details below:
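For context, the 275 TOPS figure is commonly broken down as the sparse INT8 throughput of the Ampere GPU Tensor Cores plus the two DLA engines combined. The numbers below are an assumption based on NVIDIA's published Jetson AGX Orin specifications; verify them against the current technical brief:

```python
# Commonly cited sparse INT8 breakdown for Jetson AGX Orin 64GB
# (assumed figures -- verify against NVIDIA's Orin technical brief).
gpu_tops_sparse = 170  # Ampere GPU Tensor Cores, INT8 sparse
dla_tops_sparse = 105  # 2x NVDLA v2 combined, INT8 sparse

total_tops = gpu_tops_sparse + dla_tops_sparse
print(total_tops)  # 275

# On Ampere Tensor Cores, dense INT8 throughput is roughly half the
# sparse number (structured 2:4 sparsity doubles peak math rate).
gpu_tops_dense = gpu_tops_sparse // 2
print(gpu_tops_dense)  # 85
```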
Thanks.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.