[P] A lightweight FoundationPose TensorRT implementation

After being frustrated with the official FoundationPose codebase for my robotics research, I built a lightweight TensorRT implementation and wanted to share it with the community.

The core is based on model code from tao-toolkit-triton-apps, but with the heavy Triton Inference Server dependency completely removed in favor of a direct TensorRT backend. For the ONNX models, I use the ones from isaac_ros_foundationpose, since I ran into issues with the officially provided ones. So essentially it's those two sources combined with a straightforward TensorRT backend.

Some highlights:

  • Reduced VRAM usage – You can shrink the network's input batch dimension to lower VRAM consumption, while still evaluating the standard 252 pose hypotheses by splitting inference into smaller sequential batches.
  • Minimal dependencies – All you need is CUDA Toolkit + TensorRT (automatically set up via a script I provide) + a Python environment with a handful of packages.
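The chunked-inference idea behind the VRAM reduction can be sketched like this (a minimal illustration, not the repo's actual code — `infer_fn` stands in for whatever TensorRT execution-context call the backend makes, and the names are hypothetical):

```python
import numpy as np

def infer_in_chunks(infer_fn, poses, chunk_size):
    """Run inference over `poses` in sequential chunks of at most
    `chunk_size`, concatenating the per-chunk outputs. Peak VRAM is
    then bounded by the chunk size instead of the full 252 batch."""
    outputs = []
    for start in range(0, len(poses), chunk_size):
        outputs.append(infer_fn(poses[start:start + chunk_size]))
    return np.concatenate(outputs, axis=0)

# Stand-in for the TensorRT call (hypothetical): just doubles inputs.
def fake_infer(batch):
    return batch * 2.0

# 252 pose hypotheses processed in chunks of 64 (64+64+64+60).
scores = infer_in_chunks(fake_infer, np.ones((252, 3), np.float32), 64)
```

The trade-off is latency: smaller chunks mean more sequential engine invocations, so the chunk size is effectively a VRAM-vs-throughput knob.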

I spent a long time looking for something like this without luck, so I figured some of you might find it useful too.

https://github.com/seawee1/FoundationPose-TensorRT

submitted by /u/seawee1
