Three Dogmas of Reinforcement Learning (Abel et al., 2024)
Watch David Abel present “Three Dogmas of RL”, joint work with Mark Ho and Anna Harutyunyan. He begins by arguing that RL still lacks a first-principles definition of an agent, and then lays out three “dogmas” in modern RL: the environment spotlight (modeling environments rather than agents), learning as finding a solution (rather than as endless adaptation), and the reward hypothesis (that all goals can be expressed as maximization of a scalar reward).
Read the summary post here: https://sensorimotorai.github.io/2026/03/05/threedogmasrl/

I like this work because it tries to take vague concepts like the reward hypothesis and pin down their exact mathematical commitments. One of the takeaways is that representing goals with a single scalar reward requires fairly restrictive axioms, which people often violate in practice. Curious what people here think.

submitted by /u/vafaii
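To make the "restrictive axioms" point concrete, here is a small sketch (my own illustration, not from the talk): lexicographic preferences over two objectives violate the von Neumann–Morgenstern continuity axiom that scalar-reward representations lean on. If A ≻ B ≻ C, continuity demands some mixing probability p with pA + (1−p)C ~ B, and for lexicographic preferences no such p exists. The outcome values and the grid search are assumptions made for the demo.

```python
# Illustrative sketch: lexicographic preferences over two objectives
# break the vNM continuity axiom, so no scalar reward can represent them.
from fractions import Fraction

A = (1, 0)   # best on the primary objective
B = (0, 1)   # wins only on the secondary objective
C = (0, 0)   # worst outcome; lexicographically A > B > C

def lex_compare(x, y):
    """Lexicographic preference: compare the primary objective first,
    break ties on the secondary. Returns 1 if x > y, -1 if x < y, 0 if tied."""
    if x[0] != y[0]:
        return 1 if x[0] > y[0] else -1
    if x[1] != y[1]:
        return 1 if x[1] > y[1] else -1
    return 0

def mix(p, x, y):
    """Componentwise expected outcome of the lottery pX + (1-p)Y."""
    return tuple(p * xi + (1 - p) * yi for xi, yi in zip(x, y))

# Continuity would require some p with  pA + (1-p)C  ~  B.
# Scan a grid of exact rational probabilities:
indifference_found = any(
    lex_compare(mix(Fraction(k, 100), A, C), B) == 0
    for k in range(101)
)
print(indifference_found)  # False: any p > 0 makes the mix strictly
                           # preferred to B, and p = 0 makes it worse
```

The same argument goes through for every p in [0, 1], not just the grid: the mixture's primary coordinate is p, so the lexicographic comparison against B flips abruptly at p = 0 and is never indifferent.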