111

This is a small conceptual question that's been nagging me for a while: How can we back-propagate through a max-pooling layer in a neural network?

I came across max-pooling layers while going through this tutorial for Torch 7's nn library. The library abstracts the gradient calculation and forward passes for each layer of a deep network. I don't understand how the gradient calculation is done for a max-pooling layer.

I know that if you have an input ${z_i}^l$ going into neuron $i$ of layer $l$, then ${\delta_i}^l$ (defined as ${\delta_i}^l = \frac{\partial E}{\partial {z_i}^l}$) is given by: $$ {\delta_i}^l = \theta^{'}({z_i}^l) \sum_{j} {\delta_j}^{l+1} w_{i,j}^{l,l+1} $$

So, a max-pooling layer would receive the ${\delta_j}^{l+1}$'s of the next layer as usual; but since the activation function for the max-pooling neurons takes in a vector of values (over which it maxes) as input, ${\delta_i}^{l}$ isn't a single number anymore, but a vector ($\theta^{'}({z_j}^l)$ would have to be replaced by $\nabla \theta(\left\{{z_j}^l\right\})$). Furthermore, $\theta$, being the max function, isn't differentiable with respect to its inputs.

So... how should it work out, exactly?

shinvu

5 Answers

111

There is no gradient with respect to the non-maximum values, since changing them slightly does not affect the output. Further, the max is locally linear with slope 1, with respect to the input that actually achieves the max. Thus, the gradient from the next layer is passed back only to the neuron which achieved the max. All other neurons get zero gradient.

So in your example, $\delta_i^l$ would be a vector of all zeros, except that the $i^{*\text{th}}$ location receives the gradient $\left\{\delta_j^{l+1}\right\}$ from the next layer, where $i^* = \operatorname{argmax}_{i} (z_i^l)$.
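To make this concrete, here is a minimal NumPy sketch of that rule (the names `maxpool_forward` and `maxpool_backward` are just illustrative, not from any particular library): the forward pass remembers which input achieved the max, and the backward pass routes the whole incoming gradient to that position and gives zero everywhere else.

```python
import numpy as np

def maxpool_forward(z):
    """Forward pass: return the max and remember which input achieved it."""
    i_star = np.argmax(z)          # index of the winning input
    return z[i_star], i_star

def maxpool_backward(delta_next, i_star, n_inputs):
    """Backward pass: the incoming gradient goes only to the argmax position."""
    delta = np.zeros(n_inputs)
    delta[i_star] = delta_next     # slope 1 w.r.t. the max input, 0 elsewhere
    return delta

z = np.array([0.3, 1.7, -0.2, 0.9])
out, i_star = maxpool_forward(z)              # out = 1.7, i_star = 1
print(maxpool_backward(2.5, i_star, z.size))  # [0.  2.5 0.  0. ]
```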

abora
  • 17
    Oh right, there is no point back-propagating through the non-maximum neurons - that was a crucial insight.

    So if I now understand this correctly, back-propagating through the max-pooling layer simply selects the max. neuron from the previous layer (on which the max-pooling was done) and continues back-propagation only through that.

    – shinvu May 13 '16 at 05:35
  • But don't you need to multiply with the derivative of the activation function? – Jason Mar 19 '18 at 04:02
  • 2
    @Jason: The max function is locally linear for the activation that got the max, so the derivative of it is constant 1. For the activations that didn't make it through, it's 0. That's conceptually very similar to differentiating the ReLU(x) = max(0,x) activation function. – Chrigi Feb 05 '19 at 12:48
  • What if the stride is less than the kernel width for max pooling? – DoOrDoNot Mar 04 '19 at 05:52
  • 4
    Great answer! What about the edge case where multiple entries have the same max value (for example 2 values have 0 from a ReLU, and the other two are negative)? – DankMasterDan Apr 23 '19 at 17:11
  • 3
    @DankMasterDan After some experimentation, it looks like tensorflow will pick the first entry with the max value. – Swier Dec 04 '19 at 20:42
  • 2
    @DankMasterDan It's also valid to just not give any gradient to these values (verified through grad checking). The only thing you shouldn't do is pass gradients back to all values that match the max. – Recessive Jan 28 '20 at 06:05
  • @Recessive Intuitively, I would think that the gradients should be passed back to all the values that match the max, because all of these contributed to the max output. I would think that the gradient should be averaged out amongst all the values that match the max. Why should this not be done? – Gaurav Srivastava May 10 '22 at 14:34
  • @GauravSrivastava I'm not 100% sure why, but when I passed the gradient to all values that matched the max, the gradients were wrong when I checked them. Intuitively, I think it's because the values that match the max don't all contribute: it's not an aggregate function, it's a max function, so in reality only one of them has to contribute. – Recessive May 12 '22 at 08:23
  • In my experiment, tensorflow divides the gradient equally among all max entries. – Sang Jun 16 '22 at 07:50
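The tie-breaking behaviour discussed in the comments above is easy to check empirically. Here is a small sketch (assuming PyTorch is installed; other frameworks and versions may break ties differently) that pushes a window with two tied maxima through a max-pool layer and prints the resulting input gradient:

```python
import torch

# Two tied maxima (both 1.0) in a single pooling window of size 4.
x = torch.tensor([[[1.0, 1.0, -0.5, -0.3]]], requires_grad=True)
pool = torch.nn.MaxPool1d(kernel_size=4)

y = pool(x)         # forward: a single pooled value of 1.0
y.sum().backward()  # backward: send a gradient of 1 into the pooled output

# Shows how this particular framework/version splits the gradient among the
# tied entries (e.g. all of it to one of them, or divided equally).
print(x.grad)
```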
7

Max Pooling

So suppose you have a layer P which comes on top of a layer PR. Then the forward pass will be something like this:

$ P_i = f(\sum_j W_{ij} PR_j)$,

where $P_i$ is the activation of the $i$th neuron of layer P, $f$ is the activation function and $W$ are the weights. If you differentiate that, the chain rule gives you that the gradients flow as follows:

$grad(PR_j) = \sum_i grad(P_i) f^\prime W_{ij}$.

But now, if you have max pooling, $f = id$ for the max neuron and $f = 0$ for all other neurons, so $f^\prime = 1$ for the max neuron in the previous layer and $f^\prime = 0$ for all other neurons. So:

$grad(PR_{max\ neuron}) = \sum_i grad(P_i) W_{i\ {max\ neuron}}$,

$grad(PR_{others}) = 0.$
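As a small numeric illustration of these two formulas (the sizes and values below are made up), with max pooling the factor $f^\prime$ acts as an indicator that keeps only the column of $W$ belonging to the max neuron:

```python
import numpy as np

# Made-up example: layer PR has 4 neurons, layer P has 2 neurons.
PR = np.array([0.3, 1.7, -0.2, 0.9])      # activations of layer PR
W = np.array([[0.1, 0.2, 0.3, 0.4],
              [0.5, 0.6, 0.7, 0.8]])      # W[i, j] connects PR_j to P_i
grad_P = np.array([0.5, -1.0])            # gradients arriving at layer P

# With max pooling, f' = 1 only for the PR neuron that achieved the max.
f_prime = np.zeros_like(PR)
f_prime[np.argmax(PR)] = 1.0

# grad(PR_j) = sum_i grad(P_i) * f'_j * W[i, j]
grad_PR = f_prime * (grad_P @ W)
print(grad_PR)   # non-zero only at the position of the max neuron (index 1)
```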

patapouf_ai
6

@Shinvu's answer is well written; I would just like to point to a video that explains the gradient of the max() operation within a computational graph, which is quick to grasp.

While implementing the max-pool operation (a computational node in the computational graph of your NN architecture), we need a function that creates a "mask" matrix keeping track of where the maximum of the matrix is: True (1) indicates the position of the maximum in X, and the other entries are False (0). We keep track of the position of the max because this is the input value that ultimately influenced the output, and therefore the cost. Backprop computes gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. Backprop will therefore "propagate" the gradient back to this particular input value that influenced the cost.
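A minimal sketch of that idea (the helper name `create_mask_from_window` and the 2×2 window below are my own illustration):

```python
import numpy as np

def create_mask_from_window(x):
    """Mask that is True (1) at the position of the maximum of x, False (0) elsewhere."""
    return x == np.max(x)

window = np.array([[1.0, 3.0],
                   [2.0, -1.0]])
mask = create_mask_from_window(window)

# During backprop, the gradient dA flowing into the pooled output is routed
# back only to the position that produced the max.
dA = 4.0
d_window = mask * dA

print(mask.astype(int))   # [[0 1], [0 0]]
print(d_window)           # [[0. 4.], [0. 0.]]
```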

Anu
1

Maybe it is easier to understand the derivative of the pooling layer after we write down the matrix form of the function $\max\{x\} = xW$, where $x$ is a tensor. Consider an example with four different values, $x = [x_1, x_2, x_3, x_4]$, in which $x_1$ is the largest, so that $\max\{x\} = x_1$. Then $\max\{x\}$ can be written as a matrix multiplication $xW$ (or $Wx'$), where $$W = [\,I(x_1>x_2)I(x_1>x_3)I(x_1>x_4),\; I(x_2>x_1)I(x_2>x_3)I(x_2>x_4),\; I(x_3>x_1)I(x_3>x_2)I(x_3>x_4),\; I(x_4>x_1)I(x_4>x_2)I(x_4>x_3)\,]' = [1, 0, 0, 0]'$$ and $I(\cdot)$ is the indicator function comparing two values.

Normally, two derivatives are required in the backpropagation algorithm: $d(Wx)/dx = W'$ ($W$ transposed), used to update the previous layer, and $d(Wx)/dW = x'$, used to update the current layer's weights or biases. In the case of max pooling, however, there is no need to update $W$, because we have already written down the closed form of the max-pooling function, $W = [I(x_1>x_2)I(x_1>x_3)I(x_1>x_4),\; I(x_2>x_1)I(x_2>x_3)I(x_2>x_4),\; \ldots]'$. To find $d(Wx)/dx$ we simply have $d(Wx)/dx = W' = [1, 0, 0, 0]$, and $W'$ can then be inserted as one factor in the derivative chain.

In general, $\max\{x\}$ can be written as a linear function $Wx$ (or $xW$), where $W$ is a matrix whose entries are products of indicator functions such as $I(x_1>x_2)\,I(x_1>x_3)\,I(x_1>x_4)$. One property of $W$ is that each column contains exactly one entry equal to 1, with all others 0. Since this linear function's weights are determined by the comparisons themselves, there is no need to use gradient descent to update them. When $\max\{x\}$ is used to update the previous layers' weights and biases via the chain rule, all we need is $d(Wx)/dx = W'$.
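To make the matrix view concrete, here is a short sketch for the four-value example above (the specific numbers are made up): it builds $W$ from the indicator comparisons and checks that $Wx$ recovers the max; the derivative needed in the chain rule is just this one-hot vector transposed.

```python
import numpy as np

x = np.array([5.0, 2.0, 3.0, 1.0])   # x1 is the largest value

# W[k] = product of indicators I(x_k > x_m) over all m != k
W = np.array([np.prod([x[k] > x[m] for m in range(len(x)) if m != k])
              for k in range(len(x))], dtype=float)

print(W)       # [1. 0. 0. 0.]
print(W @ x)   # 5.0, i.e. max(x)
# d(Wx)/dx = W' (transposed), which is exactly the one-hot selector above.
```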

Kun Qiu
0

In my case, although the result is intuitive, I was unable to see why the derivative of the max-pooling function equals one for the pooled value. Searching for "derivative of the max pooling function" did not help, but understanding the two-variable case did.

For completeness, I will discuss the max-pooling derivative. First, we consider max pooling as a multivariable function $f$ of the filter-map values, $f(x_1, \cdots, x_n) = \max(x_1, \cdots, x_n)$. Also, we will assume that all values are different (no two maximal values; see here why I make this simplification). Now, it is easier to write the function in case (bracket) notation:

\begin{equation}\label{eq:max_pooling_backprop} f(x_1, \cdots, x_n) = \max(x_1, \cdots, x_n) = \begin{cases} x_1 & \text{if } x_1 > x_2, x_1 > x_3, \cdots, x_1 > x_n\\ x_2 & \text{if } x_2 > x_1, x_2 > x_3, \cdots, x_2 > x_n\\ \vdots & \qquad\qquad\vdots\\ x_n & \text{if } x_n > x_1, x_n > x_2, \cdots, x_n > x_{n-1} \end{cases} \end{equation}

To see why (citing abora above)

the max is locally linear with slope 1, with respect to the input that actually achieves the max,

one can compute the partial derivative of $f$ with respect to $x_1$ (without loss of generality):

\begin{equation} \frac{\partial{f}}{\partial{x_1}} = \begin{cases} 1 & \text{if } x_1 > x_2, x_1 > x_3, \cdots, x_1 > x_n\\ 0 & \text{otherwise}, \end{cases} \end{equation}

where it is now clear that the partial derivative (gradient) $\frac{\partial{f}}{\partial{x_1}}$ equals $1$ when the max-pooled value is $x_1$. Moreover, in that case $f(x_1, \cdots, x_n) = x_1$ locally, so $f$ does not depend on the other inputs there, and therefore $\frac{\partial{f}}{\partial{x_2}} = \frac{\partial{f}}{\partial{x_3}} = \cdots = \frac{\partial{f}}{\partial{x_n}} = 0$.
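For reference, here is the two-variable case mentioned at the beginning of this answer (still assuming the two values are different):

$$ f(x_1, x_2) = \max(x_1, x_2) = \begin{cases} x_1 & \text{if } x_1 > x_2\\ x_2 & \text{if } x_2 > x_1, \end{cases} \qquad\quad \frac{\partial{f}}{\partial{x_1}} = \begin{cases} 1 & \text{if } x_1 > x_2\\ 0 & \text{if } x_1 < x_2. \end{cases} $$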

neoglez