65

I want to compute the precision, recall, and F1-score for my binary KerasClassifier model, but I can't find any solution.

Here's my current code:

import time

from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import TensorBoard

# Split dataset into train and test data
X_train, X_test, Y_train, Y_test = train_test_split(normalized_X, Y, test_size=0.3, random_state=seed)

# Build the model
model = Sequential()
model.add(Dense(23, input_dim=45, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

tensorboard = TensorBoard(log_dir="logs/{}".format(time.time()))

time_callback = TimeHistory()  # TimeHistory is a custom callback (not shown here)

# Fit the model
history = model.fit(X_train, Y_train, validation_split=0.3, epochs=200, batch_size=5, verbose=1, callbacks=[tensorboard, time_callback])

Then I predict on new test data and get the confusion matrix like this:

from sklearn.metrics import confusion_matrix

# Threshold the sigmoid probabilities to get 0/1 class predictions
y_pred = model.predict(X_test)
y_pred = (y_pred > 0.5)

cm = confusion_matrix(Y_test, y_pred)
print(cm)

But is there a way to get the accuracy score, the F1-score, the precision, and the recall? (If it's not complicated, the cross-validation score as well, but that's not necessary for this answer.)

Thank you for any help!

ZelelB

5 Answers

64

These metrics were removed from Keras core in version 2.0, so you need to calculate them yourself. They are all global metrics, but Keras computes metrics batch by batch, so the batch-averaged values can be more misleading than helpful.

However, if you really need them, you can do it like this:

from keras import backend as K

def recall_m(y_true, y_pred):
    # true positives / all actual positives (predictions rounded to 0/1 first)
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall

def precision_m(y_true, y_pred):
    # true positives / all predicted positives (predictions rounded to 0/1 first)
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def f1_m(y_true, y_pred):
    # harmonic mean of precision and recall; K.epsilon() avoids division by zero
    precision = precision_m(y_true, y_pred)
    recall = recall_m(y_true, y_pred)
    return 2 * ((precision * recall) / (precision + recall + K.epsilon()))

# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc', f1_m, precision_m, recall_m])

# fit the model
history = model.fit(X_train, Y_train, validation_split=0.3, epochs=10, verbose=0)

# evaluate the model (values come back in the order of the metrics list)
loss, accuracy, f1_score, precision, recall = model.evaluate(X_test, Y_test, verbose=0)
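
Since these values are averaged over the batches seen during training and evaluation, one way to sanity-check them is to compute the same metrics once over the whole test set. A minimal sketch, assuming the X_test/Y_test split from the question and scikit-learn installed:

# import the module rather than the functions, to avoid clashing with the
# f1_score / precision / recall variables unpacked from model.evaluate() above
import sklearn.metrics as skm

# whole-test-set metrics, computed once on hard 0/1 predictions
y_prob = model.predict(X_test)
y_hat = (y_prob > 0.5).astype(int).ravel()

print('precision:', skm.precision_score(Y_test, y_hat))
print('recall:', skm.recall_score(Y_test, y_hat))
print('f1:', skm.f1_score(Y_test, y_hat))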
Tasos
  • If they can be misleading, how do you evaluate a Keras model then? – ZelelB Feb 06 '19 at 13:52
  • Since Keras calculates those metrics at the end of each batch, you can get results that differ from the "real" metrics. An alternative is to split your dataset into training and test sets, use the test part to predict the results, and then, since you know the true labels, calculate precision and recall manually. – Tasos Feb 06 '19 at 14:03
  • Any idea why this is not working on validation for me? It works fine for training. – Rodrigo Ruiz Jan 12 '20 at 07:07
  • Is there a reason why I get recall values higher than 1? – Panathinaikos Mar 29 '20 at 10:02
  • Recall and precision go higher than 1 for categorical classification. – rsd96 May 08 '20 at 12:25
  • @Panathinaikos these functions work right only for binary classification. – Zeeshan Ali Aug 27 '20 at 11:40
  • Doesn't work well for a 3-class classification problem. Precision is always 0, and the F1-score starts above 1.0 and goes down over time. – Eli Halych Nov 08 '22 at 16:09
  • It does work for multiclass problems if you one-hot-encode the output, use categorical_crossentropy as the loss, and softmax as the activation of the last layer. But it does not work if you provide the output in non-one-hot-encoded format and use sparse_categorical_crossentropy as the loss. In that case, precision, recall, and F1 are always higher than 1. – Murilo Aug 17 '23 at 14:37
30

You could use the scikit-learn classification report. To convert your labels into a numerical or binary format, take a look at the scikit-learn LabelEncoder.

import numpy as np
from sklearn.metrics import classification_report

y_pred = model.predict(X_test, batch_size=64, verbose=1)
y_pred_bool = np.argmax(y_pred, axis=1)

print(classification_report(Y_test, y_pred_bool))

which gives you (output copied from the scikit-learn example):

              precision    recall  f1-score   support

     class 0       0.50      1.00      0.67         1
     class 1       0.00      0.00      0.00         1
     class 2       1.00      0.67      0.80         3
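
Note that np.argmax is aimed at a multi-class softmax output; with the single sigmoid unit from the question it would always return 0. A minimal sketch of the binary case, assuming the X_test/Y_test variables from the question:

# threshold the sigmoid probabilities instead of taking the argmax
y_pred = model.predict(X_test, batch_size=64, verbose=1)
y_pred_bool = (y_pred > 0.5).astype(int).ravel()

print(classification_report(Y_test, y_pred_bool))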
matze
6

You can also try the approach below.

import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score, confusion_matrix

y_pred1 = model.predict(X_test)
y_pred = np.argmax(y_pred1, axis=1)

# Print precision, recall, and F1 scores
print(precision_score(Y_test, y_pred, average="macro"))
print(recall_score(Y_test, y_pred, average="macro"))
print(f1_score(Y_test, y_pred, average="macro"))
4

See the Keras metrics docs:

import tensorflow as tf

model.compile(..., metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
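
A slightly fuller sketch of the same idea, assuming TensorFlow 2.x and the model and train/test split from the question; there is no built-in F1 metric used here, so it is derived from precision and recall after evaluation:

import tensorflow as tf

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy',
                       tf.keras.metrics.Precision(name='precision'),
                       tf.keras.metrics.Recall(name='recall')])

model.fit(X_train, Y_train, epochs=10, batch_size=5, verbose=0)

# evaluate() returns the loss followed by the metrics in the order given above
loss, accuracy, precision, recall = model.evaluate(X_test, Y_test, verbose=0)
f1 = 2 * precision * recall / (precision + recall + 1e-7)
print(accuracy, precision, recall, f1)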

Justin Lange
-1

Try precision_recall_fscore_support from sklearn.metrics with Y_test and y_pred as parameters.
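
A minimal sketch, assuming the Y_test and y_pred arrays from the question and scikit-learn installed:

from sklearn.metrics import precision_recall_fscore_support

# with average='micro' a single value per metric is returned (support is None)
precision, recall, fscore, _ = precision_recall_fscore_support(Y_test, y_pred, average='micro')
print(precision, recall, fscore)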

Zephyr
  • I tried this: model.recision_recall_fscore_support(Y_test, y_pred, average='micro') and get this error on execution: AttributeError: 'Sequential' object has no attribute 'recision_recall_fscore_support' – ZelelB Feb 06 '19 at 13:51
  • You don't need to call it on the model: rather than model.recision_recall_fscore_support(), just use precision_recall_fscore_support(Y_test, y_pred, average='micro') (without "model."), and make sure you have the correct import: from sklearn.metrics import precision_recall_fscore_support – Viacheslav Komisarenko Feb 06 '19 at 13:59