So it seems that you would like to compute the specificity of your model:
$${\displaystyle \mathrm {TNR} ={\frac {\mathrm {TN} }{N}}={\frac {\mathrm {TN} }{\mathrm {TN} +\mathrm {FP} }}}$$
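As a quick sanity check of the formula, with some hypothetical labels (the arrays below are made up for illustration):

```python
import numpy as np

y_true = np.array([0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 1, 1, 1, 0])

tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives: 2
fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives: 2

specificity = tn / (tn + fp)  # 2 / (2 + 2) = 0.5
```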
So you would require a function that can take your predictions, compute the number of true negatives and false positives, then compute the specificity using the equation above. The body of this function is borrowed from here and simply modified for two classes.
import numpy as np
import keras.backend as K

def check_binary(arr):
    """Raise an error if the array contains values other than 0 or 1."""
    assert np.isin(arr, [0, 1]).all(), "values must be binary: 0 or 1"

def compute_binary_specificity(y_true, y_pred):
    """Compute the specificity for a set of predictions.

    Parameters
    ----------
    y_true : correct values for the batch of samples (must be binary: 0 or 1)
    y_pred : predicted values for the batch of samples (must be binary: 0 or 1)

    Returns
    -------
    out : the specificity, TN / (TN + FP)
    """
    check_binary(K.eval(y_true))  # must check that input values are 0 or 1
    check_binary(K.eval(y_pred))
    TN = np.logical_and(K.eval(y_true) == 0, K.eval(y_pred) == 0)
    FP = np.logical_and(K.eval(y_true) == 0, K.eval(y_pred) == 1)
    # convert the counts back to Keras tensors
    TN = K.sum(K.variable(TN))
    FP = K.sum(K.variable(FP))
    specificity = TN / (TN + FP + K.epsilon())
    return specificity
Edit: this function gives results equivalent to a numpy version of the function and is tested to work for 2d, 3d, 4d and 5d arrays.
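For reference, here is a NumPy-only sketch of the same computation (`specificity_numpy` is just an illustrative name, not part of the answer above); because the counts are taken over all elements, it works on arrays of any dimensionality:

```python
import numpy as np

def specificity_numpy(y_true, y_pred):
    """Specificity = TN / (TN + FP), counted over all array elements."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tn = np.sum(np.logical_and(y_true == 0, y_pred == 0))
    fp = np.sum(np.logical_and(y_true == 0, y_pred == 1))
    return tn / (tn + fp + 1e-7)  # small epsilon avoids division by zero
```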
As the link you added suggests, you must also create a wrapper function to use this custom function as a loss function in Keras:
def specificity_loss_wrapper():
    """A wrapper that creates and returns a function which computes the
    specificity loss, as (1 - specificity).
    """
    # Define the function for your loss
    def specificity_loss(y_true, y_pred):
        return 1.0 - compute_binary_specificity(y_true, y_pred)

    return specificity_loss  # we return this function object
Note that the specificity loss is returned from the wrapper function as $1 - specificity$. This could have been done in the first function too; it shouldn't matter, I just separated the computation of the specificity from that of the loss.
This can then be used like this:
# Create a Keras model object as usual
model = my_model()
# ... (add layers etc.)

# Create the loss function object using the wrapper function above
spec_loss = specificity_loss_wrapper()

# compile the model using the returned loss function object
model.compile(optimizer='adam', loss=spec_loss)

# ... train the model as usual
Additionally, you could try importing TensorFlow itself and using its built-in tf.confusion_matrix operation.
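In recent TensorFlow versions this operation lives under `tf.math.confusion_matrix`; a minimal sketch of extracting the specificity from it (the label arrays are made up for illustration):

```python
import tensorflow as tf

y_true = [0, 0, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0]

# Rows index the true label, columns the predicted label:
# cm = [[TN, FP],
#       [FN, TP]]
cm = tf.math.confusion_matrix(y_true, y_pred, num_classes=2)

tn = cm[0, 0]
fp = cm[0, 1]
specificity = tn / (tn + fp)  # true division returns a float tensor
```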