
I already know deep RL, but to learn it deeply I want to know why we need 2 networks in deep RL. What does the target network do? I know there is a lot of mathematics behind this, but I want to understand deep Q-learning deeply, because I am about to make some changes in the deep Q-learning algorithm (i.e. invent a new one). Can you help me understand, intuitively, what happens during the execution of a deep Q-learning algorithm?

dato nefaridze

1 Answer


In the DQN presented in the original paper, the update target for the Q-network is $r_t + \gamma\max_aQ(s_{t+1},a;\theta^-)$, and the loss minimized is $\left(r_t + \gamma\max_aQ(s_{t+1},a;\theta^-) - Q(s_t,a_t; \theta)\right)^2$, where $\theta^-$ is an old version of the parameters that gets updated every $C$ updates. The Q-network with these parameters is the target network.
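To make the mechanics concrete, here is a minimal sketch of one such update, assuming PyTorch; the network sizes, the hyperparameters, and names like `q_net`, `target_net`, and `dqn_update` are illustrative, not from the paper's code:

```python
import copy
import torch
import torch.nn as nn

# Illustrative network and hyperparameters (not from the paper's code).
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = copy.deepcopy(q_net)   # theta^-: starts as a copy of theta
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, C = 0.99, 1000               # discount factor and sync period C

def dqn_update(step, s, a, r, s_next, done):
    """One gradient step on (r + gamma * max_a' Q(s', a'; theta^-) - Q(s, a; theta))^2."""
    with torch.no_grad():           # target uses the frozen parameters theta^-
        target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s_t, a_t; theta)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % C == 0:               # every C updates, copy theta into theta^-
        target_net.load_state_dict(q_net.state_dict())
    return loss.item()
```

Note that the target is computed under `torch.no_grad()` and from `target_net`, so gradients flow only through the prediction $Q(s_t,a_t;\theta)$, and $\theta^-$ stays fixed between syncs.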

If you didn't use this target network, i.e. if your loss were $\left(r_t + \gamma\max_aQ(s_{t+1},a;\theta) - Q(s_t,a_t; \theta)\right)^2$, then learning would become unstable, because the target, $r_t + \gamma\max_aQ(s_{t+1},a;\theta)$, and the prediction, $Q(s_t,a_t; \theta)$, are not independent: they both rely on $\theta$.
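For contrast, in the sketch above this unstable variant amounts to bootstrapping from the very parameters being trained (reusing the hypothetical names from that sketch):

```python
# Without a target network: theta appears in both the target and the
# prediction, so every gradient step also moves the target.
with torch.no_grad():
    target = r + gamma * (1 - done) * q_net(s_next).max(dim=1).values
```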

A nice analogy I once saw is that it is akin to a dog chasing its own tail: it will never catch it because the target is non-stationary, and this non-stationarity is exactly what the dependence between the target and the prediction causes.

David
Given that this is a duplicate but your answer is valuable, I think it would have been better to write the answer under https://ai.stackexchange.com/q/6982/2444. – nbro Jul 16 '20 at 03:16