
In this question, I've seen an example of convolution where the kernel's shape is larger than the initial image's:

import numpy as np
from scipy import signal

x = np.array([[0.51, 0.9,  0.88, 0.84, 0.05],
              [0.4,  0.62, 0.22, 0.59, 0.1],
              [0.11, 0.2,  0.74, 0.33, 0.14],
              [0.47, 0.01, 0.85, 0.7,  0.09],
              [0.76, 0.19, 0.72, 0.17, 0.57]])

y = np.array([[0, 0,      0.0686,  0],
              [0, 0.0364, 0,       0],
              [0, 0.0467, 0,       0],
              [0, 0,      0,      -0.0681]])

gradient = signal.convolve2d(np.rot90(np.rot90(y)), x, 'valid')

So, we get this:

array([[ 0.044606,  0.094061],
       [ 0.011262,  0.068288]])

I understand that y is flipped by 180 degrees. But how does "valid" convolution work here? How can we get a (2x2) shape from a (4x4) convolved with a (5x5)?
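A quick sketch of the shape arithmetic (using random stand-in arrays, not the values above): in 'valid' mode, convolve2d only keeps positions where the smaller 4x4 array fits entirely inside the larger 5x5 one, so the output has shape (5 - 4 + 1, 5 - 4 + 1) = (2, 2):

```python
import numpy as np
from scipy import signal

x = np.random.rand(5, 5)   # stand-in for the 5x5 image
y = np.random.rand(4, 4)   # stand-in for the 4x4 kernel

# np.rot90(y, 2) is the same 180-degree flip as np.rot90(np.rot90(y))
out = signal.convolve2d(np.rot90(y, 2), x, mode='valid')
print(out.shape)  # (5 - 4 + 1, 5 - 4 + 1) = (2, 2)
```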

thehemen

1 Answer


Is your question how we can perform a convolution operation with a kernel bigger than the input image?

If yes, then the answer is padding. TL;DR: you increase the size of the original image with boundary pixels so that the kernel can "fit".
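A minimal sketch of what I mean (with a made-up 3x3 image and 5x5 kernel, not the arrays from the question): zero-padding the image until it is at least as large as the kernel makes a 'valid' convolution possible again:

```python
import numpy as np
from scipy import signal

image = np.arange(9.0).reshape(3, 3)   # 3x3 image, smaller than the kernel
kernel = np.ones((5, 5))               # 5x5 kernel

# 'valid' mode needs the image to be at least as large as the kernel,
# so zero-pad the image by 2 pixels on each side -> 7x7.
padded = np.pad(image, 2)

out = signal.convolve2d(padded, kernel, mode='valid')
print(out.shape)  # (7 - 5 + 1, 7 - 5 + 1) = (3, 3)
```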

Noah Weber
    Thanks for your answer. After padding, it turned out the result of the convolution also needs to be rotated by 180°. – thehemen Dec 19 '19 at 12:35