Is RL post-training in ‘imagined environments’ a path to continual learning? Trying to understand this deeper [D]
I’ve been reading more about training in imagined environments, especially the Dreamer line of work and RialTo, and I’m curious how this could apply to continual learning (CL).
Take a robot deployed in a home that notices it has a high failure rate when picking up a specific object (let’s say cans in a kitchen). It then builds a world model of the kitchen from its deployment data, generates can-grasping rollouts inside that model, RL post-trains the policy in the imagined environment, and deploys the updated policy.
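For concreteness, here’s the shape of the loop I mean in Python-style pseudocode. Every name here (`WorldModel`, `collect_deployment_episodes`, `sample_states`, `robot`, etc.) is a made-up placeholder, not Dreamer’s or RialTo’s actual API:

```python
# Rough sketch of the one-shot adaptation loop described above.
# All classes/functions are hypothetical placeholders.

policy = robot.current_policy()

# 1. Log real experience from deployment, filtered to the failing task.
deployment_logs = collect_deployment_episodes(robot, task="can grasping")

# 2. Fit a world model (latent dynamics + reward head) on that real data.
world_model = WorldModel()
world_model.fit(deployment_logs)

# 3. Imagine rollouts: encode real states into latent space, then unroll
#    the learned dynamics under the current policy.
start_latents = world_model.encode(sample_states(deployment_logs))
imagined_rollouts = world_model.imagine(policy, start_latents, horizon=15)

# 4. RL post-train the policy purely on imagined experience
#    (e.g. actor-critic over latent trajectories, Dreamer-style).
policy.update(imagined_rollouts)

# 5. Deploy the updated policy back on the robot.
robot.set_policy(policy)
```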
This feels like continual learning to me, but formal continual learning seems to be mostly about task sequences (learn A, then learn B, then measure forgetting on A), and the example I’m describing doesn’t fit that mold. I’m not sure whether what I’m describing is deployment-time adaptation, imagined replay for CL, a self-improvement loop, or some mix.
Two things I’d like takes on:
- Is anyone updating the world model itself continually from deployment data, not just the policy? Most of what I’ve read keeps the world model frozen after its initial training.
- What breaks first when you actually try the closed loop (deploy → world model update → imagined rollouts → policy update → deploy)? My guess is that world-model drift compounds across rounds, but I haven’t seen it characterized. (Rough sketch of the loop I mean below.)
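To make both questions concrete, here’s roughly the repeated loop I’m imagining. Same caveat as before: all names (`holdout`, `drift_threshold`, `num_rounds`, etc.) are placeholders, and the drift check is just my naive guess at where you’d have to watch for compounding world-model error, not anything from the papers:

```python
# Sketch of the repeated closed loop (same placeholder names as above).
# (a) marks continually refitting the world model on real deployment data;
# (b) marks a crude drift check before trusting imagined rollouts.

real_replay = []  # real transitions accumulated across all deployments

for round_idx in range(num_rounds):
    # Deploy the current policy and log real experience.
    new_logs = collect_deployment_episodes(robot, policy)
    real_replay.extend(new_logs)

    # (a) Refit the world model on all real data so far, not just the
    #     newest batch, to limit forgetting of older dynamics.
    world_model.fit(real_replay)

    # (b) Drift check: prediction error on held-out recent real
    #     transitions. If the model is off, imagined rollouts are suspect,
    #     so skip the policy update this round and collect more data.
    if world_model.prediction_error(holdout(new_logs)) > drift_threshold:
        continue

    # Imagine rollouts and RL post-train the policy, as before.
    starts = world_model.encode(sample_states(real_replay))
    policy.update(world_model.imagine(policy, starts, horizon=15))

    robot.set_policy(policy)
```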
Curious what others think.
submitted by /u/No_Bat_7448