[D] What’s the modern workflow for managing CUDA versions and packages across multiple ML projects?
Hello everyone,
I’m a relatively new ML engineer and so far I’ve been using conda for dependency management. The best thing about conda was that it allowed me to install system-level packages like CUDA into isolated environments, which was a lifesaver since some of my projects require older CUDA versions.
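For context, the conda pattern I mean looks roughly like this (the env name, channel, and version numbers are just illustrative examples, not recommendations):

```sh
# Create an isolated env pinned to an older Python + CUDA toolkit
# (channel/version choices here are illustrative)
conda create -n legacy-proj python=3.10
conda activate legacy-proj
conda install -c nvidia cuda-toolkit=11.8
```

Being able to pin the CUDA toolkit per environment like this is the part I don’t want to lose.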
That said, conda has been a pain in other ways: installs are painfully slow, the solver sometimes upgrades packages I never asked it to touch (breaking other dependencies in the process), and I’ve spent a disproportionate amount of effort just getting it to do exactly what I want.
I also ran into projects that needed an older Linux environment (as I understand it, containers share the host kernel, so what really varies is the distro/userland), which added another layer of complexity. I didn’t want to spin up multiple WSL instances just for that, and that’s when I first heard about Docker.
More recently I’ve been hearing a lot about uv as a faster, more modern Python package manager. From what I can tell it’s genuinely great for Python packages but doesn’t handle system-level installations like CUDA, so it doesn’t fully replace what conda was doing for me.
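For anyone unfamiliar, the uv side looks something like this (project name and script are hypothetical placeholders):

```sh
# Scaffold a project with a pyproject.toml and lockfile
uv init my-proj
cd my-proj
# Add a dependency (resolved and locked in uv.lock)
uv add torch
# Run inside the project's managed virtualenv
uv run python train.py
```

It’s very fast and the lockfile story is nice, but as I said, it stops at the Python layer.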
I can’t be the only one dealing with this. To me it seems that the best way to go about this is to use Docker to handle system-level dependencies (CUDA version, Linux environment, system libraries) and uv to handle Python packages and environments inside the container. That way each project gets a fully isolated, reproducible environment.
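Concretely, the setup I’m imagining is something like the sketch below. The base image tag, paths, and entrypoint script are illustrative, and I haven’t battle-tested any of this (the `COPY --from` line for installing uv is the pattern uv’s own docs suggest):

```dockerfile
# Pin the CUDA version via NVIDIA's official base image (tag illustrative)
FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04

# Install uv by copying its static binaries from the official image
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

WORKDIR /app
# Copy dependency metadata first so this layer caches well
COPY pyproject.toml uv.lock ./
# Install locked Python deps into a project-local venv
RUN uv sync --frozen

COPY . .
CMD ["uv", "run", "python", "train.py"]
```

Each project would then run with `docker run --gpus all ...` so the container sees the host GPUs through the NVIDIA Container Toolkit, and the CUDA version is just a function of the base image tag.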
But I’m new to this and don’t want to commit to a workflow based on my own assumptions. I’d love to hear from more experienced engineers what their day-to-day workflow for multiple projects looks like.
submitted by /u/sounthan1