Currently experimenting with exploration policies for deep RL on Super Mario Bros – Agent beats all levels I threw at it

I’ve been playing with deep reinforcement learning for a while. I originally started with a simple DQN, added all the improvements from the Rainbow paper, and finally swapped C51 for quantile regression (QR-DQN); I plan to swap that for an Implicit Quantile Network next.

After implementing C51 (which was my first contact with distributional RL), I started playing with policies that take advantage of the learned distributions: by independently taking N samples from each action-value distribution, scoring each action by the average of its samples, and picking the greedy action with respect to those scores, I was able to make the agent learn faster than similar agents using only NoisyNets or an epsilon-greedy policy (I’m still using NoisyNet; this is done on top of it). In the limiting cases, N=1 is just Thompson sampling and N=+Infinity is a plain greedy policy.
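
To make the idea concrete, here is a minimal sketch of that selection rule (naming is mine; it assumes the QR-DQN head outputs a [num_actions, num_quantiles] array of equiprobable quantile values for the current state):

```python
import numpy as np

def select_action(quantiles: np.ndarray, n_samples: int, rng: np.random.Generator) -> int:
    """Sample-based greedy action selection over learned return distributions.

    quantiles: [num_actions, num_quantiles] quantile values for the current state.
    n_samples=1 recovers Thompson sampling; n_samples -> infinity recovers the
    plain greedy policy over the distributions' means.
    """
    num_actions, num_quantiles = quantiles.shape
    # Independently draw n_samples equiprobable quantiles per action...
    idx = rng.integers(0, num_quantiles, size=(num_actions, n_samples))
    samples = np.take_along_axis(quantiles, idx, axis=1)
    # ...score each action by the average of its samples, then act greedily.
    scores = samples.mean(axis=1)
    return int(np.argmax(scores))
```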

Finding an optimal value for N proved to be a challenge, so I decided to pick a random value for it at the start of each episode (N = 2**rng.uniform(8, 12) works well in my experiments with a QR-DQN using 32 quantiles per action), which led to even better results.
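
For reference, the per-episode randomization is just the log-uniform draw quoted above; drawing the exponent uniformly spreads episodes evenly across orders of magnitude of N rather than clustering near the top of the range:

```python
import numpy as np

rng = np.random.default_rng()

def sample_episode_n(low: float = 8.0, high: float = 12.0) -> int:
    # Drawn once at the start of each episode: N ranges from 2**8 = 256
    # to 2**12 = 4096 samples per action-value distribution.
    return int(2 ** rng.uniform(low, high))
```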

I later found out about DLTV, which made the agent discover new behaviors but performed worse overall than my previous experiments. Inspired by it, I tried something I could not find in prior work and got the best results of all my experiments so far:

At each time step, I compute an exploration_score = (intra-action variance) / (inter-action variance). I then take N / exploration_score samples from each distribution and pick an action as described above. (More details at the end of this post.)
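
The post doesn't spell out the exact reductions, so here is one plausible instantiation of that ratio (taking the mean of per-action variances over the variance of per-action means is my assumption):

```python
import numpy as np

def exploration_score(quantiles: np.ndarray, eps: float = 1e-8) -> float:
    """Ratio of intra-action variance to inter-action variance.

    quantiles: [num_actions, num_quantiles] quantile values for the current state.
    A high score means each action's return distribution is wide compared to how
    far apart the actions' means are, i.e. the agent can't tell which action is
    best, so fewer samples are taken and the policy behaves more like Thompson
    sampling.
    """
    intra = quantiles.var(axis=1).mean()   # spread within each action's distribution
    inter = quantiles.mean(axis=1).var()   # spread between the actions' mean values
    return float(intra / (inter + eps))

def samples_this_step(base_n: int, quantiles: np.ndarray) -> int:
    # As described above: take N / exploration_score samples from each distribution.
    return max(1, int(round(base_n / exploration_score(quantiles))))
```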

For anyone reading this, I have a few questions:

  1. Are you aware of any previous work I missed that tries similar exploration policies with distributional RL (interpolating between Thompson sampling and the greedy policy)?
  2. Most papers I found about learning from multiple exploration policies seem to be in the context of multi-actor parallelization. Is there any novelty in randomizing the policy parameters at the start of each episode, especially in the single-actor case?
  3. Is any part of what I’m doing worth the time it would take to quantitatively evaluate it? I’ve been doing it mainly for learning and fun and have only qualitatively evaluated it so far. However, if there’s a chance I can contribute to the field, I’ll gladly make some time to compare it to published papers on ALE.

About the details mentioned above: I actually track a moving average and standard deviation of the exploration score, which lets me shift/rescale its values to a target average and standard deviation; I then divide N by the shifted/rescaled value. I initially started with a target average of 1 and a target standard deviation of 1 (which gave good results), then tried randomizing these parameters at the start of each episode as well. This led to a lot more diversity in the policies and even better results.
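
A rough sketch of that normalization, assuming an exponential moving estimate of the score's mean and variance (the momentum value and the clamping are my own choices, not from the post):

```python
class ScoreNormalizer:
    """Shift/rescale raw exploration scores toward a target mean and std.

    target_mean and target_std can themselves be randomized at the start of
    each episode, as described above.
    """

    def __init__(self, target_mean: float = 1.0, target_std: float = 1.0,
                 momentum: float = 0.99):
        self.target_mean = target_mean
        self.target_std = target_std
        self.momentum = momentum
        self.mean = 0.0
        self.var = 1.0

    def update(self, score: float) -> None:
        # Exponential moving estimates of the raw score's mean and variance.
        m = self.momentum
        self.mean = m * self.mean + (1 - m) * score
        self.var = m * self.var + (1 - m) * (score - self.mean) ** 2

    def normalize(self, score: float) -> float:
        std = self.var ** 0.5 + 1e-8
        z = (score - self.mean) / std
        rescaled = self.target_mean + self.target_std * z
        # N is divided by this value, so keep it strictly positive.
        return max(rescaled, 1e-3)
```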

Since this worked so well, I additionally randomized the noise strength in the NoisyNet layers.

Overall, this made the agent a lot more robust to deviating from what it considers to be the optimal trajectory, and allowed it to learn complex behaviors previous iterations were never able to learn (e.g. taking a few steps back to gain momentum, waiting for good cycles, or dodging Hammer Bros).

For anyone interested, I’m live-streaming the training in progress, with graphs and some more details on the experiments I’m running. The current training run was started 8 days ago, and the agent is able to finish all stages (though not on every attempt).

Edit: formatting

submitted by /u/pcouy