DQN with Catastrophic Forgetting?

Hi everyone, happy new year!

I have a project where I’m training a DQN on pricing and stock decisions.

Unfortunately, I seem to be running into some kind of forgetting. When I train with a purely random behaviour policy (100% exploration rate) and then evaluate greedily, the agent actually reaches values better than a fixed-policy baseline.

The problem arises when I let it keep training beyond that point: after long enough, the evaluated (greedy) policy has actually become worse. Note that the training environment is also very stochastic.
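To make the setup concrete, here is a rough sketch of what I mean (placeholder names, a Gymnasium-style environment and a PyTorch Q-network are assumed; this is not my actual code): the behaviour policy is purely random, and evaluation just acts greedily.

```python
import torch

def select_action(q_net, state, epsilon, n_actions):
    """Epsilon-greedy selection; epsilon=1.0 is pure exploration, 0.0 is greedy."""
    if torch.rand(1).item() < epsilon:
        return torch.randint(n_actions, (1,)).item()
    with torch.no_grad():
        q_values = q_net(torch.as_tensor(state, dtype=torch.float32))
        return int(q_values.argmax().item())

def evaluate_greedy(q_net, env, n_actions, episodes=20):
    """Average return of the purely greedy policy (epsilon = 0.0)."""
    total = 0.0
    for _ in range(episodes):
        state, _ = env.reset()
        done = False
        while not done:
            action = select_action(q_net, state, epsilon=0.0, n_actions=n_actions)
            state, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
    return total / episodes
```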

I’ve tried some fixes, such as increasing the replay buffer size, increasing and decreasing the network size, and lowering the learning rate (plus some other ideas that came to mind to tackle this).
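For reference, these are roughly the knobs I’ve been turning; the names and values below are just illustrative defaults, not my real settings.

```python
from dataclasses import dataclass

@dataclass
class DQNConfig:
    replay_buffer_size: int = 500_000   # larger buffer keeps older experience around longer
    hidden_sizes: tuple = (256, 256)    # network capacity (tried both bigger and smaller)
    learning_rate: float = 1e-4         # lowered to slow down overwriting of earlier estimates
    batch_size: int = 64
    gamma: float = 0.99                 # standard DQN discount, unchanged
    target_update_every: int = 5_000    # steps between target-network syncs
```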

I’m not sure what else I could change. I’m also not sure whether I can just keep training with a purely random exploration policy.
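To be clear about that last point: the behaviour policy would stay purely random (epsilon fixed at 1.0) for data collection, while the Q-network keeps learning off-policy from the replay buffer with a standard DQN update, roughly like the sketch below (placeholder names again, not my code).

```python
import torch
import torch.nn.functional as F

GAMMA = 0.99

def dqn_update(q_net, target_net, optimizer, batch):
    """One standard DQN step on a sampled batch (states, actions, rewards, next_states, dones)."""
    states, actions, rewards, next_states, dones = batch
    q_pred = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_states).max(dim=1).values
        q_target = rewards + GAMMA * (1.0 - dones) * q_next
    loss = F.smooth_l1_loss(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```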

Thanks everyone! 🙂

submitted by /u/DasKapitalReaper
