
I recently came across an article about RNNs here, which describes different types of RNNs:

[Figure: diagrams of the five RNN architectures discussed below]

The first figure makes sense. A regular feedforward network.

The second is a big question for me. Is it a single timestep cloned three times to make it recurrent over 3 timesteps?

The third is returning just the last timestep's output.

The fourth is an even bigger question. Does it take the last timestep's output, clone it three times, and put another RNN layer on top of it? Or is it two RNN layers, where the output of the first layer is the input of the second (returning all timesteps' outputs)?

The fifth makes sense as well: it returns the outputs of all timesteps.

So am I missing something, or can the second and fourth cases only be made by cloning inputs/outputs as described above?
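On the "cloning" question: a minimal numpy sketch (all names and sizes here are my own, not from the article) shows that the repeated boxes in the diagrams are the same cell with shared weights, applied once per timestep, and that the many-to-one case (third diagram) is simply the many-to-many case (fifth diagram) with all but the last output discarded:

```python
import numpy as np

# A minimal single-layer RNN, unrolled over T timesteps.
# The "clones" in the diagrams are the SAME cell (same W, U, b)
# applied at each step; only the hidden state h changes.

rng = np.random.default_rng(0)
T, n_in, n_hid = 3, 4, 5             # 3 timesteps, toy sizes
W = rng.normal(size=(n_hid, n_in))   # input-to-hidden weights (shared)
U = rng.normal(size=(n_hid, n_hid))  # hidden-to-hidden weights (shared)
b = np.zeros(n_hid)

x = rng.normal(size=(T, n_in))       # one input vector per timestep
h = np.zeros(n_hid)

outputs = []
for t in range(T):                   # unrolling = reusing the same cell T times
    h = np.tanh(W @ x[t] + U @ h + b)
    outputs.append(h)

many_to_many = np.stack(outputs)     # shape (T, n_hid): one output per step (5th case)
many_to_one = outputs[-1]            # shape (n_hid,): keep only the last step (3rd case)
```

So nothing is cloned at inference time; the unrolled pictures just make the weight sharing across timesteps visible.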

1 Answer


I think this will help.

[Figure: illustration of the RNN architecture types]

My understanding is that one-to-many and many-to-many (like the 4th case in your picture) are in a way similar to autoregressive networks, where you take the prediction you've just made and use it to predict further ahead.
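The autoregressive idea can be sketched like this (a toy numpy illustration under my own assumptions, not code from the article): the one-to-many network receives one real input, then feeds each of its own outputs back in as the next step's input.

```python
import numpy as np

# One-to-many as an autoregressive loop: one real input x0,
# then the model's own prediction is fed back as the next input.
# Sizes and weights are illustrative toys.

rng = np.random.default_rng(1)
n = 4                                 # input and output share this size
W = rng.normal(size=(n, n)) * 0.1     # input-to-hidden (shared across steps)
U = rng.normal(size=(n, n)) * 0.1     # hidden-to-hidden (shared)
V = rng.normal(size=(n, n)) * 0.1     # hidden-to-output (shared)

def step(x, h):
    h = np.tanh(W @ x + U @ h)
    y = V @ h                         # prediction at this step
    return y, h

x0 = rng.normal(size=n)               # the single real input
h = np.zeros(n)
ys = []
x = x0
for t in range(3):                    # generate 3 outputs from 1 input
    y, h = step(x, h)
    ys.append(y)
    x = y                             # feed the prediction back in

generated = np.stack(ys)              # shape (3, n): one-to-many
```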

sai
  • One-to-many has started to make sense, but the second many-to-many still doesn't. If $T_x = 2$, for example, so there are 2 input timesteps, do we just ignore the outputs for the first two timesteps, and for the 3rd and 4th just replace the inputs with 0s? – Gergő Horváth Oct 01 '20 at 09:43
  • No, I think it's just a matter of visual representation here. In my head, the second half of the second many-to-many (bottom-right corner) can/should be replaced by the one-to-many representation. – sai Oct 01 '20 at 10:02
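The reading suggested in the last comment can be sketched as a toy encoder-decoder (again an illustrative numpy sketch, with my own names and sizes): the "delayed" many-to-many is a many-to-one encoder that consumes the inputs, followed by a one-to-many decoder that starts from the encoder's final state and feeds its outputs back in.

```python
import numpy as np

# "Delayed" many-to-many read as encoder-decoder:
#   encoder = many-to-one (consume T_x inputs, keep final state),
#   decoder = one-to-many (emit T_y outputs, feeding them back in).
# Weights and sizes are toys; the structure is the point.

rng = np.random.default_rng(2)
n = 4
We, Ue = rng.normal(size=(n, n)) * 0.1, rng.normal(size=(n, n)) * 0.1  # encoder
Wd, Ud, Vd = (rng.normal(size=(n, n)) * 0.1 for _ in range(3))         # decoder

xs = rng.normal(size=(2, n))          # T_x = 2 encoder inputs

# Encoder: consume all inputs, keep only the final hidden state.
h = np.zeros(n)
for x in xs:
    h = np.tanh(We @ x + Ue @ h)

# Decoder: start from the encoder state, feed each output back as input.
y = np.zeros(n)
ys = []
for t in range(3):                    # T_y = 3 decoder outputs
    h = np.tanh(Wd @ y + Ud @ h)
    y = Vd @ h
    ys.append(y)

decoded = np.stack(ys)                # shape (T_y, n)
```

Nothing in the encoder half is "ignored outputs with zeroed inputs"; the two halves simply play different roles.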