In a typical neural network, bias is usually added like this:
v = activation(w1*x1 + ... + wb*b)
However, I am not really sure how it is done in a convolutional layer. My thought is that the bias is added with each convolution operation for a neuron, i.e. once at every output position. Is that correct?
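To make the question concrete, here is a minimal NumPy sketch of what I mean, assuming one input channel, one filter, and a single scalar bias for that filter (the function name and shapes are just for illustration):

```python
import numpy as np

def conv2d_single_channel(x, w, b):
    """Naive 2D cross-correlation with 'valid' padding, one input and
    one output channel. The scalar bias b is added to every output
    position, i.e. once per sliding-window dot product."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * w) + b
    return out

x = np.arange(16.0).reshape(4, 4)   # toy 4x4 input
w = np.ones((2, 2))                 # toy 2x2 filter
y = conv2d_single_channel(x, w, b=1.0)
```

So the question is whether this is the right mental model: the same bias for a given filter, added at every spatial location of its output map.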