[D] CUDA Workstation vs Apple Silicon for ML / LLMs

Hi everyone,

I’m trying to make a deliberate choice between two paths for machine learning and AI development, and I’d really value input from people who’ve used both CUDA GPUs and Apple Silicon.

Context

I already own a MacBook Pro M1, which I use daily for coding and general work.

I’m now considering adding a local CUDA workstation mainly for:

  • Local LLM inference (30B–70B models; rough VRAM math in the sketch after this list)
  • Real-time AI projects (LLM + TTS + RVC)
  • Unreal Engine 5 + AI-driven characters
  • ML experimentation and systems-level learning
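
To sanity-check whether those model sizes even fit on consumer hardware, here's the back-of-envelope VRAM math I've been using. The 4-bit quantization (~0.5 bytes/param) and ~15% overhead figures are my own assumptions, not benchmarks:

```python
# Rough VRAM arithmetic for the model sizes above. Assumptions are mine:
# 4-bit quantization (~0.5 bytes/param) plus ~15% overhead for KV cache
# and activations at modest context lengths.

def vram_gb(params_billion: float,
            bytes_per_param: float = 0.5,
            overhead: float = 1.15) -> float:
    """Approximate GPU memory (GB) to hold a quantized model of this size."""
    return params_billion * bytes_per_param * overhead

for size in (30, 70):
    print(f"{size}B @ 4-bit: ~{vram_gb(size):.0f} GB")

# 30B @ 4-bit: ~17 GB -> fits a 24GB 3090 with room for KV cache
# 70B @ 4-bit: ~40 GB -> needs CPU offload, heavier quantization, or a second GPU
```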

I’m also thinking long-term about portfolio quality and employability (FAANG / ML infra / quant-style roles).

Option A — Apple Silicon–first

  • Stick with the M1 MacBook Pro
  • Use Metal / MPS where possible (see the device-selection sketch after this list)
  • Offload heavy jobs to cloud GPUs (AWS, etc.)
  • Pros I see: efficiency, quiet, great dev experience
  • Concerns: lack of CUDA, tooling gaps, transferability to industry infra
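
Whichever way I go, I'd like the same scripts to run on both machines. This is the standard PyTorch device-selection pattern, so it picks CUDA, MPS, or CPU depending on what's available:

```python
# Minimal device-selection sketch so the same script runs on the M1 (MPS)
# and on a CUDA workstation. Standard PyTorch APIs only.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

x = torch.randn(4, 4, device=device)
print(f"running on {device}, sum = {x.sum().item():.3f}")
```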

Option B — Local CUDA workstation

  • Used build (~£1,270 / ~$1,700):
    • RTX 3090 (24GB)
    • i5-13600K
    • 32GB DDR4 (upgradeable)
  • Pros I see: CUDA ecosystem, local latency (sketch after this list), hands-on GPU systems work
  • Concerns: power, noise, cost, maintenance
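
For a sense of what the local-inference workflow would look like on that box, here's a minimal llama-cpp-python sketch. The model path is hypothetical, and n_gpu_layers=-1 offloads every layer to the GPU:

```python
# Sketch of single-GPU inference with llama-cpp-python; the model file is
# hypothetical, and n_gpu_layers=-1 offloads every layer to the 3090.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-30b.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,  # put all layers on the GPU
    n_ctx=4096,       # context window
)
out = llm("Summarise the tradeoffs of local vs cloud GPUs.", max_tokens=64)
print(out["choices"][0]["text"])
```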

What I’d love feedback on

  1. For local LLMs and real-time pipelines, how limiting is Apple Silicon today vs CUDA?
  2. For those who’ve used both, where did Apple Silicon shine — and where did it fall short?
  3. From a portfolio / hiring perspective, does CUDA experience meaningfully matter in practice?
  4. Is a local 3090 still a solid learning platform in 2025, or is cloud-first the smarter move?
  5. Is the build I found a good deal?

I’m not anti-Mac (I use one daily), but I want to be realistic about what builds strong, credible ML experience.

Thanks in advance — especially interested in responses from people who’ve run real workloads on both platforms.

submitted by /u/Individual-School-07