lapland-uas-tequ / tequ-setup-triton-inference-server
Configure NVIDIA Triton Inference Server on different platforms. Deploy an object detection model in TensorFlow SavedModel format to the server. Send images to the server for inference with Node-RED. The Triton Inference Server HTTP API is used for inference.
License: MIT License
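Since inference goes through Triton's HTTP API, the sketch below illustrates the kind of request that a Node-RED flow would ultimately send. It follows Triton's KServe v2 HTTP protocol (`POST /v2/models/<model>/infer`); the server address, model name, input tensor name, shape, and datatype are assumptions and must match the deployed SavedModel's configuration.

```typescript
// Minimal sketch of a Triton HTTP (KServe v2) inference request.
// Assumes a server at localhost:8000 and a model named "object_detection"
// whose input tensor is "input_tensor" (UINT8, NHWC) -- all placeholders
// that must match the deployed model. Requires Node.js 18+ (built-in fetch).

const TRITON_URL = "http://localhost:8000";   // assumed server address
const MODEL_NAME = "object_detection";        // hypothetical model name

async function infer(pixels: number[], height: number, width: number) {
  const body = {
    inputs: [
      {
        name: "input_tensor",                 // assumed input tensor name
        shape: [1, height, width, 3],         // batch of one RGB image
        datatype: "UINT8",
        data: pixels,                         // flattened pixel values
      },
    ],
  };

  const response = await fetch(`${TRITON_URL}/v2/models/${MODEL_NAME}/infer`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });

  if (!response.ok) {
    throw new Error(`Triton returned HTTP ${response.status}`);
  }

  // The v2 protocol returns an "outputs" array; for an object detection
  // SavedModel this would typically hold detection boxes, classes and scores.
  const result = await response.json();
  return result.outputs;
}

// Example: run inference on a dummy 2x2 black image.
infer(new Array(2 * 2 * 3).fill(0), 2, 2)
  .then((outputs) => console.log(outputs))
  .catch((err) => console.error(err));
```

In a Node-RED flow, the same request body could be assembled in a function node and posted with an http request node pointed at the server's `/v2/models/<model>/infer` endpoint.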