Hybrid MARL + Linear Programming Architecture for Dynamic Vehicle Routing (Zero-Shot Generalization)

Hi everyone,

I wanted to share the architecture of a 2-year project I led: optimizing a line-haul logistics network using a hybrid of Multi-Agent RL (MARL) and Linear Programming (LP).

We were trying to optimize a live and complex delivery network with dynamically arriving requests. We built a hierarchical architecture to get the best of both worlds (standard OR and RL):

  1. The “Fleet Manager” (MARL): PPO agents handle the high-level decision-making: which cluster of orders to serve, and when to dispatch a truck. The agents optimize for long-term reward and learn to wait for better consolidation opportunities, merging less-than-truckload (LTL) shipments into fuller trucks.
  2. The “Dock Worker” (LP Solver): Once the agent selects a cluster, we pass that subset of nodes to a lightweight Linear Programming solver embedded inside the environment step. The solver handles the actual bin packing and TSP routing, so the physical constraints are always met exactly.
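To make the split concrete, here is a minimal sketch of one hierarchical environment step. The interface and reward shaping are hypothetical, not the project's actual code, and a first-fit-decreasing heuristic stands in for the LP solver purely to keep the sketch self-contained:

```python
def first_fit_decreasing(weights, capacity):
    """Stand-in for the "Dock Worker": pack order weights into trucks.
    The real system uses an LP solver here; first-fit decreasing is a
    simple heuristic used only to make this sketch runnable."""
    remaining = []   # spare capacity per truck
    loads = []       # weights assigned to each truck
    for w in sorted(weights, reverse=True):
        for i, spare in enumerate(remaining):
            if w <= spare:            # fits in an open truck
                remaining[i] -= w
                loads[i].append(w)
                break
        else:                         # no truck fits: open a new one
            remaining.append(capacity - w)
            loads.append([w])
    return loads


def env_step(clusters, action, capacity=10.0):
    """One "Fleet Manager" step: the PPO agent's discrete action picks
    which order cluster to dispatch; the low-level solver then decides
    the physical packing. Hypothetical interface, not the authors' API."""
    cluster = clusters[action]
    trucks = first_fit_decreasing(cluster, capacity)
    # Reward consolidation: more orders moved per truck dispatched.
    reward = len(cluster) / len(trucks) if trucks else 0.0
    return trucks, reward
```

The key design point survives even in this toy version: the agent never emits routes or loads directly, so feasibility (capacity here; routing constraints in the real system) is guaranteed by construction rather than learned.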

The biggest win was the generalization. By normalizing the observation space (viewing the warehouse as a relative density map rather than as absolute coordinates) and applying a few ML “magic tricks” (see the upcoming Part 2), an agent trained at one node could reproduce its success at another without retraining, i.e. zero-shot transfer.
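A minimal sketch of the normalization idea (the grid size and scaling rule here are my assumptions, not necessarily the post's exact scheme): shift order coordinates to be depot-relative, rescale by the current spread, and bin into a fixed-size density grid, so the observation is invariant to where the network sits and how large it is.

```python
import numpy as np

def relative_density_map(order_xy: np.ndarray, depot_xy: np.ndarray,
                         bins: int = 8) -> np.ndarray:
    """Bin open orders into a depot-relative density grid.

    Translation invariance comes from subtracting the depot position;
    scale invariance from dividing by the current spread. Illustrative
    sketch only.
    """
    rel = order_xy - depot_xy                  # depot-relative offsets
    scale = np.abs(rel).max() or 1.0           # guard the all-zero case
    rel = rel / scale                          # now in [-1, 1]^2
    hist, _, _ = np.histogram2d(
        rel[:, 0], rel[:, 1], bins=bins, range=[[-1, 1], [-1, 1]]
    )
    total = hist.sum()
    return hist / total if total else hist     # density grid, sums to 1
```

Two depots with the same relative demand pattern then produce the same observation, which is what lets a policy trained at one node act sensibly at another.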

I wrote up the full deep dive with architectural diagrams and other details.

Happy to answer any questions about the environment design, the training itself, or anything else you're curious about.

submitted by /u/Aggravating_Excuse81