Sure. To customize the model, you first need to choose which inferencing method you want. We provide two plugins: nvinfer (TensorRT-based) and nvinferserver (Triton-based); both are covered in the DeepStream SDK FAQ on the NVIDIA Developer Forums (Intelligent Video Analytics / DeepStream SDK category).
You must also know which preprocessing and postprocessing your model needs, since the plugin configuration has to match them.
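As a rough illustration, a custom ONNX detector with nvinfer could look like the config sketch below. All file names, the class count, and the parser function name are placeholders for your own model; the preprocessing keys implement y = net-scale-factor * (x - offsets), and the two custom-parser keys are only needed if your model's output layout requires a custom bounding-box parser.

```
# Hypothetical nvinfer config for a custom ONNX detector;
# paths and names below are placeholders, not shipped files.
[property]
gpu-id=0
onnx-file=my_model.onnx
model-engine-file=my_model.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
network-mode=2                   # 0=FP32, 1=INT8, 2=FP16
num-detected-classes=4
# Preprocessing: y = net-scale-factor * (x - offsets)
net-scale-factor=0.00392156862745098
offsets=0.0;0.0;0.0
model-color-format=0             # 0=RGB, 1=BGR
# Custom postprocessing: bbox parser compiled into a shared library
parse-bbox-func-name=NvDsInferParseCustomMyModel
custom-lib-path=libnvds_infercustomparser_mymodel.so
```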
There are lots of samples in NVIDIA-AI-IOT/deepstream_tao_apps on GitHub (sample apps demonstrating how to deploy models trained with TAO on DeepStream), including custom preprocessing and postprocessing code you can refer to.