
NOTE: This question was first posted on Cross Validated Stack Exchange, but I was instructed to move it off that site as it was not a good fit.

I am new to implementing machine learning and neural networks in Python. I am trying to reproduce MATLAB's patternnet in Python, more specifically to keep the network configuration settings. Below is what I get when I call patternnet(10):

    Neural Network
          name: 'Pattern Recognition Neural Network'
      userdata: (your custom info)

dimensions:

        numInputs: 1
        numLayers: 2
       numOutputs: 1
   numInputDelays: 0
   numLayerDelays: 0
numFeedbackDelays: 0
numWeightElements: 230
       sampleTime: 1

connections:

   biasConnect: [1; 1]
  inputConnect: [1; 0]
  layerConnect: [0 0; 1 0]
 outputConnect: [0 1]

subobjects:

         input: Equivalent to inputs{1}
        output: Equivalent to outputs{2}

        inputs: {1x1 cell array of 1 input}
        layers: {2x1 cell array of 2 layers}
       outputs: {1x2 cell array of 1 output}
        biases: {2x1 cell array of 2 biases}
  inputWeights: {2x1 cell array of 1 weight}
  layerWeights: {2x2 cell array of 1 weight}

functions:

      adaptFcn: 'adaptwb'
    adaptParam: (none)
      derivFcn: 'defaultderiv'
     divideFcn: 'divideind'
   divideParam: .trainInd, .valInd, .testInd
    divideMode: 'sample'
       initFcn: 'initlay'
    performFcn: 'crossentropy'
  performParam: .regularization, .normalization
      plotFcns: {'plotperform', 'plottrainstate', 'ploterrhist',
                'plotconfusion', 'plotroc'}
    plotParams: {1x5 cell array of 5 params}
      trainFcn: 'trainscg'
    trainParam: .showWindow, .showCommandLine, .show, .epochs,
                .time, .goal, .min_grad, .max_fail, .sigma,
                .lambda

weight and bias values:

            IW: {2x1 cell} containing 1 input weight matrix
            LW: {2x2 cell} containing 1 layer weight matrix
             b: {2x1 cell} containing 2 bias vectors

My intention is not to recreate the exact net data structure; I mainly want to implement the activation functions for the different layers of my network as listed above. Let's assume I can initialize the weights (network parameters) myself.
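If I understand the patternnet defaults correctly (a tansig hidden layer followed by a softmax output layer trained against cross-entropy), the forward pass I am after would look roughly like the NumPy sketch below. The class name and the small random initialization are placeholders of my own:

    import numpy as np

    def tansig(x):
        # MATLAB's tansig is the hyperbolic tangent sigmoid
        return np.tanh(x)

    def softmax(x):
        # Numerically stable softmax over each row
        e = np.exp(x - x.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    class PatternNet:
        """Two-layer feedforward net mirroring patternnet(10):
        one tansig hidden layer and a softmax output layer."""
        def __init__(self, n_inputs, n_hidden, n_outputs, seed=None):
            rng = np.random.default_rng(seed)
            # IW{1,1} and b{1}: input weights and hidden biases
            self.W1 = 0.1 * rng.standard_normal((n_inputs, n_hidden))
            self.b1 = np.zeros(n_hidden)
            # LW{2,1} and b{2}: layer weights and output biases
            self.W2 = 0.1 * rng.standard_normal((n_hidden, n_outputs))
            self.b2 = np.zeros(n_outputs)

        def forward(self, X):
            hidden = tansig(X @ self.W1 + self.b1)
            return softmax(hidden @ self.W2 + self.b2)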

To get started, I followed the post Getting Started with Deep Learning and Python as well as Multi-layer Perceptron. From the latter, I learned that I can implement an MLP using MLPClassifier from the sklearn.neural_network library:

MLPClassifier(activation='relu', alpha=1e-05, batch_size='auto',
       beta_1=0.9, beta_2=0.999, early_stopping=False,
       epsilon=1e-08, hidden_layer_sizes=(5, 2), learning_rate='constant',
       learning_rate_init=0.001, max_iter=200, momentum=0.9,
       nesterovs_momentum=True, power_t=0.5, random_state=1, shuffle=True,
       solver='lbfgs', tol=0.0001, validation_fraction=0.1, verbose=False,
       warm_start=False)

However, its settings are not exactly the ones I am trying to reproduce, and reading the MLPClassifier documentation page did not help me further.
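For reference, the closest configuration I could put together is below: 'tanh' stands in for MATLAB's tansig, and MLPClassifier already applies a softmax output for multi-class targets, but sklearn has no equivalent of the 'trainscg' (scaled conjugate gradient) solver, so 'adam' is only a substitute:

    from sklearn.neural_network import MLPClassifier

    # Rough analogue of patternnet(10): 'tanh' plays the role of
    # tansig, and the output layer is softmax for multi-class
    # targets. No 'trainscg' exists here, so 'adam' stands in.
    clf = MLPClassifier(hidden_layer_sizes=(10,),
                        activation='tanh',
                        solver='adam',
                        max_iter=500,
                        random_state=1)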

How can I implement such a configuration in Python? I would be happy to be directed to any post or link that has already discussed this matter.

tafteh

1 Answer


Consider reusing an existing framework and adding the missing activation functions. For example, here you can see how to do that in sklearn.
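As a sketch of that approach: sklearn stores its activation functions in the ACTIVATIONS and DERIVATIVES dictionaries of the private sklearn.neural_network._base module, so in principle you can register your own. Bear in mind this touches private internals whose location may change between versions, and newer releases validate the activation argument against a fixed list, so treat it purely as an illustration:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    # Private module; its path may differ across sklearn versions.
    from sklearn.neural_network._base import ACTIVATIONS, DERIVATIVES

    def leaky_relu(X):
        # Forward pass, modifying X in place as sklearn expects
        np.maximum(X, 0.01 * X, out=X)
        return X

    def inplace_leaky_relu_derivative(Z, delta):
        # Z holds the activated output; the slope is 0.01 wherever
        # the pre-activation (and hence Z) was negative
        delta[Z < 0] *= 0.01

    ACTIVATIONS['leaky_relu'] = leaky_relu
    DERIVATIVES['leaky_relu'] = inplace_leaky_relu_derivative

    clf = MLPClassifier(activation='leaky_relu',
                        hidden_layer_sizes=(10,))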

However, note that there may be other differences in how the algorithm is implemented (consider this example): with the rise of deep learning and convolutional networks, neural networks have grown hugely in variety and complexity.

mapto