DQN Maze Solver Converging to Horrible Policy

I am teaching a robot to “solve” a maze using DQN. For weeks now it has been converging to possibly the worst policy it could: driving backwards into a wall no matter what and accruing enormous negative rewards.

I have varied an enormous number of things: hyperparameters, neural network size, drastically different reward structures, different state inputs, tons of initial exploration, adding memory, making the optimal policy extremely simple to find, and so on. Without fail, it consistently converges to driving backwards in a straight line until it smashes into a wall.

I would greatly appreciate any input. I’ve tried everything that is obvious to me, and I truly don’t know where to even look for the source of this behavior anymore.

Edit: I set my reward function to 0 for all states and actions and observed that it still converges to wall-hitting, even without any reward shaping whatsoever. Going to look into this next.
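For anyone following along, here is a minimal tabular sketch (a toy setup with made-up sizes and random transitions, not the original robot/maze code) of what the zero-reward experiment should show: with reward fixed at 0, the Bellman fixed point is Q ≡ 0, so Q-learning should drive every Q-value toward zero and no action should dominate. If a DQN agent still strongly prefers one action under zero reward, the bug is likely upstream of the reward, e.g. in the target computation, replay-buffer indexing, or action mapping.

```python
import numpy as np

# Toy check: with reward == 0 everywhere, tabular Q-learning should
# contract all Q-values toward 0 (sizes and dynamics are illustrative).
rng = np.random.default_rng(0)
n_states, n_actions = 16, 4
Q = rng.normal(size=(n_states, n_actions))  # arbitrary nonzero init
gamma, alpha = 0.9, 0.1

for _ in range(100_000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)       # uniform exploration
    s_next = rng.integers(n_states)   # random transitions, just for the check
    reward = 0.0                      # the zero-reward experiment
    td_target = reward + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

print(float(np.abs(Q).max()))  # should be very close to 0
```

If the equivalent experiment on the DQN still produces a strong action preference, compare the network's Q-outputs directly: they should all be drifting toward zero together.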

submitted by /u/aidan_adawg