[R] On the Structural Limitations of Weight-Based Neural Adaptation and the Role of Reversible Behavioral Learning
Hi everyone, I recently uploaded a working paper on the arXiv and would love some feedback.
The paper examines a potential structural limitation in how modern neural networks learn. Most networks adapt to new experience by updating their weights, which means learned behaviors are tightly coupled to the network’s parameter space.
It asks whether some of the difficulties in continual learning, behavioral control, and safety might stem from this weight-centric learning structure itself, rather than from the methods used to train the models.
As a conceptual contribution, I propose Reversible Behavioral Learning, in which learned behaviors are treated as modular components that could, in principle, be attached or removed without modifying the underlying model.
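To make the idea concrete, here is a minimal toy sketch (my own illustration, not from the paper): a frozen base function plus named, additive behavior modules, where detaching a module exactly restores the base behavior rather than approximately unlearning it.

```python
class BehaviorModule:
    """A named behavior that contributes an additive delta to the base output."""
    def __init__(self, name, delta_fn):
        self.name = name
        self.delta_fn = delta_fn


class ReversibleModel:
    """A frozen base function with attachable/detachable behavior modules."""
    def __init__(self, base_fn):
        self.base_fn = base_fn   # frozen; never updated by learning
        self.behaviors = {}      # currently attached modules, keyed by name

    def attach(self, module):
        self.behaviors[module.name] = module

    def detach(self, name):
        # Removal is exact: the base function is untouched, so dropping
        # a module recovers the original behavior precisely.
        self.behaviors.pop(name, None)

    def __call__(self, x):
        out = self.base_fn(x)
        for module in self.behaviors.values():
            out += module.delta_fn(x)
        return out


model = ReversibleModel(base_fn=lambda x: 2 * x)
model.attach(BehaviorModule("offset", lambda x: 10))
assert model(3) == 16  # base (6) plus the attached behavior (10)
model.detach("offset")
assert model(3) == 6   # base behavior exactly restored
```

The toy only captures the interface (add/remove behaviors around a frozen core); the hard open question is how to realize something like this in real networks, where behaviors are entangled across shared weights.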
It’s a very early-stage research concept, and I would love feedback or pointers to related work I might have missed.
submitted by /u/Sad_State_431