Researchers Find Standard RL Optimization Loses Critical Signal in Multi-Reward Training
Standard RL methods collapse critical information when optimizing multiple rewards: combining the rewards into a single scalar before normalization lets whichever objective has the largest scale or variance dominate the training signal. GDPO fixes this by normalizing each reward independently before combining them, enabling stable, balanced multi-objective learning across tasks.
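To make the contrast concrete, here is a minimal NumPy sketch, not the paper's implementation: the function names are illustrative, and the group-relative (mean/std) normalization is an assumption in the GRPO style.

```python
import numpy as np

def summed_then_normalized(rewards):
    """Standard approach (assumed): sum the reward channels per sample,
    then normalize the scalar total across the group. A high-variance
    channel dominates, so signal from low-variance channels collapses."""
    total = rewards.sum(axis=1)                      # (group,) scalar reward
    return (total - total.mean()) / (total.std() + 1e-8)

def per_reward_normalized(rewards):
    """GDPO-style decoupling (as described above): normalize each reward
    channel independently across the group, then combine, so every channel
    contributes on the same scale regardless of its raw magnitude."""
    mu = rewards.mean(axis=0, keepdims=True)         # per-channel mean
    sigma = rewards.std(axis=0, keepdims=True)       # per-channel std
    return ((rewards - mu) / (sigma + 1e-8)).sum(axis=1)

# Two hypothetical reward channels on very different scales: channel 0
# (e.g. a length score in [0, 100]) swamps channel 1 (0/1 correctness).
group = np.array([
    [90.0, 1.0],
    [10.0, 0.0],
    [85.0, 0.0],
    [15.0, 1.0],
])
print(summed_then_normalized(group))  # ranking tracks channel 0 almost entirely
print(per_reward_normalized(group))   # both channels contribute on equal footing
```

In the sketch, the summed variant ranks the third sample above the fourth even though only the fourth got the correctness reward; per-channel normalization restores that signal.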