Is there a way to train a neural network as $f(x) = {1 \over x}$ precisely?
2 Answers
Yes: since the activation function itself can be chosen to be the reciprocal function $1/x$, a network can represent $f(x) = 1/x$ exactly.
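A minimal sketch of that idea (the names and values here are illustrative, not from the answer): if the reciprocal itself is used as the activation, a single neuron with weight 1 and bias 0 already computes $1/x$ exactly.

```python
import numpy as np

def reciprocal_activation(z):
    # The reciprocal itself used as the activation; undefined at z = 0.
    return 1.0 / z

def tiny_network(x, w=1.0, b=0.0):
    # One "neuron": an affine map followed by the reciprocal activation.
    # With w = 1 and b = 0 the network computes exactly 1/x.
    return reciprocal_activation(w * x + b)

x = np.array([0.5, 1.0, 2.0, 4.0])
print(tiny_network(x))  # [2.   1.   0.5  0.25]
print(1.0 / x)          # identical by construction
```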

Stephen Rauch
Naman Mehta
- @NagabhushanSN Why do you want to learn it with a sigmoid or ReLU? What is special about those particular non-linear functions compared to any other non-linear function in the world? Given that the only requirement for an activation function is to be non-linear, why are sigmoid or ReLU revered so much? – Vladislav Gladkikh Oct 26 '20 at 00:11
- @VladislavGladkikh Because they're the most popular ones. In any network that we build, we usually have ReLU activation by default. I want to know: if I learn that the output is inversely dependent on a feature, should I invert the feature manually, or can the network learn that by itself? I've read that a neural network (even a shallow one) can theoretically approximate any function, but I couldn't figure out how it can approximate the inverse function with ReLU activation. – Nagabhushan S N Oct 26 '20 at 04:39
- @NagabhushanSN The Kolmogorov theorem that you have read about says that any function of $n$ variables can be approximated with a sum of functions of one variable. This is how a neural network is constructed: it is a combination (a sum) of neurons (functions of one variable $z = \sum_i w_i x_i$). The theorem doesn't say which functions should be in the neurons: sigmoids, ReLU, or others. I can approximate any function with $1/x$-neurons. If your target function is similar to ReLU, you need just a few ReLU neurons to approximate it. If it is very different from ReLU, you need a lot of ReLU neurons. – Vladislav Gladkikh Oct 26 '20 at 07:34
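To make the last comment concrete, below is a hypothetical sketch of approximating $1/x$ with a small ReLU network on an interval that stays away from zero. The library (PyTorch), architecture, interval, and hyperparameters are arbitrary choices for illustration, and the result is an approximation, not an exact fit.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Training data on [0.5, 5.0], deliberately away from the singularity at x = 0.
x = torch.linspace(0.5, 5.0, 500).unsqueeze(1)
y = 1.0 / x

# A small ReLU MLP; more neurons generally give a closer piecewise-linear fit.
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(5000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final MSE on [0.5, 5.0]: {loss.item():.2e}")
```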
Division by zero is not defined, so at $x = 0$ neither $1/x$ nor its gradient exists, and hence the function cannot be exactly approximated by a NN.
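As a rough numeric illustration (a sketch, not part of the original answer): both $1/x$ and its derivative $-1/x^2$ grow without bound as $x \to 0$, so training points near zero produce arbitrarily large targets and gradients.

```python
import numpy as np

# Show how the target 1/x and its derivative -1/x^2 blow up near x = 0.
x = np.array([1e-1, 1e-2, 1e-3, 1e-4])
target = 1.0 / x
gradient = -1.0 / x**2

for xi, t, g in zip(x, target, gradient):
    print(f"x = {xi:8.4f}   1/x = {t:12.1f}   d(1/x)/dx = {g:16.1f}")
```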

user80942
- $x=0$ is not in the domain of $f(x)=1/x$, so you wouldn’t need the gradient at $x=0$, but you may need the gradient arbitrarily close to zero. – Joe Jun 07 '20 at 15:38
- …$1/x$ over an interval containing zero. I'd doubt you can get the function exactly (again using the common activators) on any interval, but I don't know how to prove that negative assertion. – Ben Reiniger Sep 08 '19 at 04:10