Similar to this question about MLPClassifier, I suspect the answer is 'no', but I will ask it anyway.
Is it possible to change the activation function of the output layer in an MLPRegressor neural network in scikit-learn?
I would like to use it for function approximation, i.e.

y = f(x)

where x is a vector of no more than 10 variables and y is a single continuous variable.
So I would like to change the output activation to linear or tanh. Right now it looks like sigmoid.
If not, I fail to see how you can use scikit-learn for anything other than classification, which would be a shame.
Yes, I realise I could use TensorFlow or PyTorch, but my application is so basic that I think scikit-learn would be a perfect fit (pardon the pun).
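For concreteness, here is a minimal sketch of the kind of fit I am after (synthetic data; the shapes and hyperparameters are just placeholders):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.RandomState(0)
    X = rng.uniform(-1.0, 1.0, size=(500, 10))  # x: up to 10 input variables
    y = np.sin(X).sum(axis=1)                   # y: a single continuous target

    nn = MLPRegressor(hidden_layer_sizes=(20,), activation='tanh',
                      max_iter=2000, random_state=0)
    nn.fit(X, y)
    print(nn.predict(X[:5]))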
Is it possible to build a more customized network with MultiLayerPerceptron or perhaps from individual layers (sknn.mlp)?
UPDATE:
In the documentation for MultiLayerPerceptron it does say:
For output layers, you can use the following layer types: Linear or Softmax.
But then further down it says:
When using the multi-layer perceptron, you should initialize a Regressor or a Classifier directly.
And there is no example of how to instantiate a MultiLayerPerceptron object.
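Based on the documentation quoted above, my best guess at instantiating it directly would be something like the following (untested sketch; the layer names and hyperparameters are taken from the sknn docs, not verified):

    from sknn.mlp import Regressor, Layer

    nn = Regressor(
        layers=[
            Layer("Tanh", units=10),  # hidden layer
            Layer("Linear"),          # linear output layer, per the docs
        ],
        learning_rate=0.02,
        n_iter=100,
    )
    # nn.fit(X, y) would then train with a linear output activation.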
Comments:

[…] 'identity'. But the MLPRegressor object has no attribute out_activation_, so I guess it isn't exposed as an attribute. Identity (linear activation) may be fine for what I need. Is there an easy way to confirm what activation it is? – Bill Apr 27 '18 at 15:59

[…] the is_classifier method (from ..base import). Is out_activation_ not being created for some reason? – Bill Apr 27 '18 at 16:21

[…] described here. Here is a related question and here is a related blog post. – n1k31t4 Apr 27 '18 at 16:38

[…] out_activation_ to confirm it? I would hate to raise an issue when I am missing something. – Bill Apr 27 '18 at 16:57

[…] out_activation_ is only created after nn.fit() is called, and it is 'identity'. Still, your answer to the question is very good! Thanks. – Bill Apr 27 '18 at 18:11