[D] ran controlled experiments on meta’s COCONUT and found the “latent reasoning” is mostly just good training. the recycled hidden states actually hurt generalization

COCONUT (Hao et al., 2024) claims models can reason in latent space by recycling hidden states instead of writing chain-of-thought tokens, and it gets ~97% on ProsQA vs ~77% for CoT. but nobody controlled for the obvious alternative… maybe the multi-stage curriculum training is doing all the work, and the recycled hidden states are just along for the ride. Hao et al. showed COCONUT needs the curriculum (76.1% without it vs 97.0% with it), but nobody tested the inverse: does the curriculum need COCONUT? if you run the same 7-stage curriculum but replace the recycled hidden states with a fixed learned embedding that carries zero information between steps, do you lose anything?

so i built that control. trained four models on ProsQA (GPT-2 124M, on a rented lambda H100):

  • M1 – CoT baseline (no curriculum)
  • M2 – COCONUT (meta’s architecture, recycled hidden states)
  • M3 – same curriculum, but thought tokens are a fixed learned embedding, no recycled content (see the sketch after this list)
  • M4 – fixed embeddings plus multi-pass processing (the factorial control separating recycled content from sequential processing)
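
the M2-vs-M3 difference in code, roughly. this is a minimal sketch with assumed names (the real implementation is in the repo linked at the bottom): COCONUT feeds the previous step's last hidden state back in as the next input embedding, while M3 swaps that for one learned vector, so nothing can flow between steps through this channel:

```python
import torch
import torch.nn as nn

class ThoughtFeedback(nn.Module):
    """Sketch of the thought-slot input for one reasoning step."""
    def __init__(self, hidden_size=768, recycle=True):  # 768 = GPT-2 124M
        super().__init__()
        self.recycle = recycle
        # M3/M4 control: a single learned embedding, identical at every step
        self.fixed_thought = nn.Parameter(torch.randn(hidden_size) * 0.02)

    def next_input(self, last_hidden_state):
        if self.recycle:
            # M2 (COCONUT): the previous step's content is carried forward
            return last_hidden_state
        # M3/M4: same vector regardless of input, so zero information
        # passes between steps
        return self.fixed_thought.expand(last_hidden_state.shape[0], -1)
```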

if the recycling mechanism carries reasoning information beyond what the curriculum provides, M3 should perform significantly worse than M2.

it doesn't. M2: 97.0%, M3: 96.6%, McNemar p = 0.845. the curriculum is sufficient without recycling. this is the control Hao et al. didn't run: same curriculum, mechanism removed.
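
for reference, the M2-vs-M3 comparison uses McNemar's test on paired per-example correctness, which is the standard way to compare two classifiers evaluated on the same test set. a minimal version with scipy:

```python
from scipy.stats import binomtest

def mcnemar_p(correct_m2, correct_m3):
    """Exact McNemar test on paired per-example correctness (bool sequences)."""
    # only discordant pairs carry signal: one model right, the other wrong
    n_only_m2 = sum(a and not b for a, b in zip(correct_m2, correct_m3))
    n_only_m3 = sum(b and not a for a, b in zip(correct_m2, correct_m3))
    n = n_only_m2 + n_only_m3
    if n == 0:
        return 1.0  # models agree everywhere: no evidence of a difference
    # under H0 (equal error rates), discordant pairs split 50/50
    return binomtest(n_only_m2, n, 0.5).pvalue
```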

it gets worse for COCONUT out of distribution. on 7-hop chains (trained on 3-6 hops), M4 beats M2 by 10.9pp (p < 0.001): recycled content actively hurts chain-length extrapolation. meanwhile, sequential processing is what drives DAG generalization: M4 beats M3 by 7.9pp. the factorial decomposition cleanly separates the two effects.

the kicker… M2 is more confident than M4 on exactly the OOD tasks where M4 is more accurate. recycled content doesn't just fail to help; it creates overconfidence on out-of-range inputs.
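
one way to see the overconfidence concretely (a sketch; assumes you've saved each model's probability on the answer it produced, per OOD example): mean confidence minus accuracy, where a positive gap means overconfident.

```python
import numpy as np

def confidence_gap(chosen_answer_probs, correct):
    """Mean confidence minus accuracy on a split.
    chosen_answer_probs: model's probability on the answer it produced.
    correct: per-example correctness (bools).
    Positive gap = overconfident; negative = underconfident."""
    probs = np.asarray(chosen_answer_probs, dtype=float)
    acc = np.asarray(correct, dtype=float).mean()
    return probs.mean() - acc

# the claim above: on 7-hop OOD, gap(M2) > gap(M4),
# even though M4's accuracy is higher
```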

there's additional converging evidence (corruption analysis, linear probing, cross-model transplantation), plus all raw data, in the links below.
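
for the linear-probing piece, the usual recipe (a sklearn sketch; the features and labels here are placeholders, the actual probe targets are in the repo): fit a logistic regression on frozen hidden states at the thought positions and check what's linearly decodable.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(hidden_states, labels, seed=0):
    """How much task structure is linearly decodable from frozen
    hidden states at the thought positions?
    hidden_states: (n_examples, hidden_size); labels: (n_examples,)"""
    X_tr, X_te, y_tr, y_te = train_test_split(
        hidden_states, labels, test_size=0.2, random_state=seed)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)  # held-out probe accuracy
```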

limitations: single seed, GPT-2 scale, ProsQA only. i just don’t have the money to keep going at this point.

i've been running this on rented GPU time and would like to continue if the community finds this direction useful. looking for feedback on:

  1. confounds i'm missing?
  2. highest-value next step: multi-seed, scale up, or different tasks?

paper (pdf) -> https://github.com/bmarti44/research-pipeline/blob/main/papers/coconut_curriculum_dissection/manuscript/output/manuscript.pdf

code -> https://github.com/bmarti44/research-pipeline/tree/main/papers/coconut_curriculum_dissection

checkpoints and data -> https://huggingface.co/bmarti44/coconut-curriculum-checkpoints
