Is DQN still worth using in 2026?

By "worth", I mean beyond introductory learning contexts.

I think the answer depends on the target business problem.

Honestly, almost all practical RL business problems require a continuous state/action space, so DQN is not competitive there.
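To illustrate why (a minimal sketch, not from the post; the linear "Q-network" and dimensions are hypothetical): DQN's greedy action selection is an argmax over a finite set of Q-values, one output per discrete action. With a continuous action space that argmax becomes an inner optimization problem, which is why DQN doesn't transfer directly to continuous control.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 4   # discrete action set, e.g. game buttons (hypothetical)
STATE_DIM = 8   # hypothetical state dimension

# Stand-in for a trained Q-network: a fixed linear map state -> Q-values.
W = rng.normal(size=(STATE_DIM, N_ACTIONS))

def q_values(state):
    """One Q-value per discrete action -- the core DQN output shape."""
    return state @ W

def greedy_action(state):
    # argmax over a finite set. With continuous actions there is no finite
    # vector to argmax over; you'd need to solve max_a Q(s, a) each step,
    # which is what pushes practitioners toward actor-critic methods.
    return int(np.argmax(q_values(state)))

state = rng.normal(size=STATE_DIM)
action = greedy_action(state)
```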

But in video games, for example, will value-based methods still work effectively compared to policy gradient and/or actor-critic methods? (Assumptions: the input is not raw pixel data, and the reward is neither sparse nor the raw score.)

submitted by /u/Gloomy-Status-9258
