Maybe this is a silly way to compare networks, but I would like to compare several networks based on the number of parameters (learnable weights) each one needs. This is in the context of signal classification. Most networks take in a 2x128, 2x1024, or 2x32768 (etc.) signal input. You can think of this as a 2xN-pixel image. What I'm noticing is that networks with larger inputs tend to have more parameters. I'd like to get an idea of how 'efficiently' a given network is using its parameters.
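To make it concrete, here's roughly the kind of ratio I have in mind (a minimal PyTorch sketch; the tiny CNN and the 11-class output are just hypothetical placeholders, not my actual network):

```python
import math
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    # Total number of learnable (trainable) parameters.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

def params_per_input_element(model: nn.Module, input_shape: tuple) -> float:
    # Learnable parameters divided by the number of input elements,
    # e.g. a 2x1024 signal has 2048 input elements.
    return count_params(model) / math.prod(input_shape)

# Hypothetical small 1-D CNN for a 2x1024 signal, purely to illustrate the ratio.
model = nn.Sequential(
    nn.Conv1d(2, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 11),
)

print(count_params(model))                         # total learnable parameters
print(params_per_input_element(model, (2, 1024)))  # parameters per input element
```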
Is there an industry standard for normalizing a network's parameter count based on the size of the input image / input feature map?
Does anyone know offhand of other networks that take inputs of ~64k elements (any shape, or any type of image classifier)?
Essentially, I designed a network that uses a 64k-feature input. I'd like to know whether the 2 million learnable parameters this network needs are excessive or reasonable. Most other signal classification networks have smaller feature inputs and require fewer learnable parameters overall, so I am having difficulty comparing them.
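(By the ratio above, and assuming 64k means 2x32768 = 65,536 input elements, my network works out to roughly 2,000,000 / 65,536 ≈ 30.5 parameters per input element. I just don't know whether that number is high or low compared to other classifiers.)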
Thank you!