Thinking Deeper, Not Longer: Depth-Recurrent Transformers for Compositional Generalization [R]
Paper:
https://arxiv.org/abs/2603.21676
I found this interesting as another iteration of the TRM approach:
- Shows decent OOD generalization on 2/3 tasks
- (but why does it fail beyond 2x? And why is unstructured text so much worse?)
- Explains why intermediate step supervision can hurt generalization.
- Step supervision makes statistical heuristics “irresistible” to the model, impairing investment in genuine “reasoning.”
- I buy this, and would go further: it captures the (insidious) weaknesses of foundation models, and may even explain the trap expert humans fall into when they rely on their (expansive) experience to generate intuition, vs. thinking through a situation with fewer heuristics and more explicit reasoning.
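For readers new to the idea: a depth-recurrent transformer reuses one weight-tied block for a variable number of iterations, so test-time "thinking depth" can exceed training depth. A minimal numpy sketch of that loop (the gated-MLP block and all names here are illustrative stand-ins, not the paper's actual architecture):

```python
import numpy as np

def rms_norm(x, eps=1e-6):
    # Normalize by root-mean-square along the feature axis.
    return x / np.sqrt(np.mean(x**2, axis=-1, keepdims=True) + eps)

class RecurrentBlock:
    """One weight-tied block: a toy stand-in for a transformer layer
    (just a gated residual MLP here; real depth-recurrent models reuse
    a full attention+MLP block the same way)."""
    def __init__(self, d, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, d ** -0.5, (d, d))
        self.w2 = rng.normal(0, d ** -0.5, (d, d))

    def __call__(self, h, x):
        # Residual update conditioned on the original input x,
        # so extra iterations refine the state rather than overwrite it.
        return h + np.tanh(rms_norm(h) @ self.w1 + x @ self.w2)

def run(block, x, depth):
    # Depth recurrence: apply the SAME block `depth` times.
    # Test-time depth can exceed training depth -- the hoped-for
    # source of compositional/length generalization.
    h = np.zeros_like(x)
    for _ in range(depth):
        h = block(h, x)
    return h
```

The key property: parameter count is fixed by one block, while effective depth is a runtime knob, unlike a standard stack of distinct layers.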
submitted by /u/marojejian