deel.lip.losses module
This module contains losses used in Wasserstein distance estimation. See https://arxiv.org/abs/2006.06520 for more information.
- class deel.lip.losses.CategoricalHinge(min_margin, reduction='auto', name='CategoricalHinge')
Bases:
Loss
Similar to the original categorical hinge, but with a settable margin parameter. This implementation differs slightly from the Keras one.
y_true and y_pred must be of shape (batch_size, # classes). Note that y_true should be one-hot encoded or pre-processed with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
- Parameters:
min_margin – positive float, margin parameter.
reduction – reduction of the loss, passed to original loss.
name – name of the loss
- call(y_true, y_pred)
Invokes the Loss instance.
- Parameters:
y_true – Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]
y_pred – The predicted values. shape = [batch_size, d0, .. dN]
- Returns:
Loss values with the shape [batch_size, d0, .. dN-1].
- get_config()
Returns the config dictionary for a Loss instance.
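To make the margin parameter concrete, here is a minimal NumPy sketch of what a margin-parameterized categorical hinge computes. The function name `categorical_hinge_margin` is hypothetical, and the actual deel-lip implementation may differ slightly, as noted above.

```python
import numpy as np

def categorical_hinge_margin(y_true, y_pred, min_margin=1.0):
    """Sketch of a categorical hinge with a settable margin.

    y_true: one-hot targets, shape (batch_size, # classes).
    y_pred: raw scores, same shape.
    """
    pos = np.sum(y_true * y_pred, axis=-1)          # score of the true class
    neg = np.max((1.0 - y_true) * y_pred, axis=-1)  # best score among the other classes
    return np.maximum(0.0, min_margin - (pos - neg))  # per-sample loss

y_true = np.array([[0.0, 1.0], [1.0, 0.0]])
y_pred = np.array([[0.3, 0.9], [0.2, 0.8]])
print(categorical_hinge_margin(y_true, y_pred, min_margin=1.0))
```

With min_margin=1.0 this reduces to the classic categorical hinge; larger values force a wider score gap between the true class and the runner-up.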
- class deel.lip.losses.HKR(alpha, min_margin=1.0, multi_gpu=False, reduction='auto', name='HKR')
Bases:
Loss
Wasserstein loss with a regularization parameter based on the hinge margin loss.
\[\inf_{f \in Lip_1(\Omega)} \underset{\textbf{x} \sim P_-}{\mathbb{E}} \left[f(\textbf{x} )\right] - \underset{\textbf{x} \sim P_+} {\mathbb{E}} \left[f(\textbf{x} )\right] + \alpha \underset{\textbf{x}}{\mathbb{E}} \left(\text{min_margin} -Yf(\textbf{x})\right)_+\]
Note that y_true and y_pred must be of rank 2: (batch_size, 1) or (batch_size, C) for multilabel classification (with C categories). y_true accepts label values in (0, 1), (-1, 1), or pre-processed with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
Using a multi-GPU/TPU strategy requires setting multi_gpu to True and pre-processing the labels y_true with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
- Parameters:
alpha – regularization factor
min_margin – minimal margin (see hinge_margin_loss). To keep the hinge and Kantorovich-Rubinstein terms consistent, the first label must yield the positive class while the second yields the negative class.
multi_gpu (bool) – set to True when running on multi-GPU/TPU
reduction – passed to tf.keras.Loss constructor
name – passed to tf.keras.Loss constructor
- call(y_true, y_pred)
Invokes the Loss instance.
- Parameters:
y_true – Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]
y_pred – The predicted values. shape = [batch_size, d0, .. dN]
- Returns:
Loss values with the shape [batch_size, d0, .. dN-1].
- get_config()
Returns the config dictionary for a Loss instance.
- hkr(y_true, y_pred)
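The displayed formula combines a (negated) KR estimate with a hinge penalty. The following NumPy sketch, for binary labels in {-1, 1}, shows how the two terms fit together; the function name `hkr_sketch` is illustrative and the library's reductions may differ.

```python
import numpy as np

def hkr_sketch(y_true, y_pred, alpha=10.0, min_margin=1.0):
    """Binary HKR per the displayed formula, with labels in {-1, 1}.

    Combines the (negated) KR estimate of W1 with a hinge penalty
    weighted by alpha.
    """
    y_true = y_true.ravel()
    y_pred = y_pred.ravel()
    # KR term: mean score on positives minus mean score on negatives
    kr = y_pred[y_true > 0].mean() - y_pred[y_true < 0].mean()
    # Hinge term: penalize samples inside the margin
    hinge = np.maximum(0.0, min_margin - y_true * y_pred).mean()
    return -kr + alpha * hinge  # minimized during training

y = np.array([1.0, 1.0, -1.0, -1.0])
f = np.array([2.0, 1.0, -1.0, -2.0])
print(hkr_sketch(y, f, alpha=10.0, min_margin=1.0))  # -> -3.0
```

Here every sample clears the margin, so the hinge term vanishes and the loss is just the negated KR estimate.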
- class deel.lip.losses.HingeMargin(min_margin=1.0, reduction='auto', name='HingeMargin')
Bases:
Loss
Compute the hinge margin loss.
\[\underset{\textbf{x}}{\mathbb{E}} \left(\text{min_margin} -Yf(\textbf{x})\right)_+\]
Note that y_true and y_pred must be of rank 2: (batch_size, 1) or (batch_size, C) for multilabel classification (with C categories). y_true accepts label values in (0, 1), (-1, 1), or pre-processed with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
- Parameters:
min_margin – positive float, margin to enforce.
reduction – passed to tf.keras.Loss constructor
name – passed to tf.keras.Loss constructor
- call(y_true, y_pred)
- get_config()
Returns the config dictionary for a Loss instance.
- class deel.lip.losses.KR(multi_gpu=False, reduction='auto', name='KR')
Bases:
Loss
Loss to estimate the Wasserstein-1 distance using the Kantorovich-Rubinstein duality, which is formulated as follows:
\[W_1(\mu, \nu) = \sup_{f \in Lip_1(\Omega)} \underset{\textbf{x} \sim \mu}{\mathbb{E}} \left[f(\textbf{x} )\right] - \underset{\textbf{x} \sim \nu}{\mathbb{E}} \left[f(\textbf{x} )\right]\]
where mu and nu stand for the two distributions: the distribution where the label is 1 and the rest.
Note that y_true and y_pred must be of rank 2: (batch_size, 1) or (batch_size, C) for multilabel classification (with C categories). y_true accepts label values in (0, 1), (-1, 1), or pre-processed with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
Using a multi-GPU/TPU strategy requires setting multi_gpu to True and pre-processing the labels y_true with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
- Parameters:
multi_gpu (bool) – set to True when running on multi-GPU/TPU
reduction – passed to tf.keras.Loss constructor
name – passed to tf.keras.Loss constructor
- call(y_true, y_pred)
- get_config()
Returns the config dictionary for a Loss instance.
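The duality above has a simple empirical estimator: the mean score on the label-1 samples minus the mean score on the rest. A minimal NumPy sketch (the function name `kr_estimate` is hypothetical):

```python
import numpy as np

def kr_estimate(y_true, y_pred):
    """Empirical KR estimate of W1: mean score on the label-1 samples
    minus mean score on the rest. Works for labels in (0, 1) or (-1, 1)."""
    y_pred = y_pred.ravel()
    pos = y_true.ravel() > 0.5  # label 1 marks the positive class
    return y_pred[pos].mean() - y_pred[~pos].mean()

y = np.array([1.0, 1.0, 0.0, 0.0])
f = np.array([0.8, 0.6, -0.5, -0.9])
print(kr_estimate(y, f))
```

Since f is constrained to be 1-Lipschitz during training, maximizing this quantity (i.e., minimizing its negation) yields a lower bound on the true Wasserstein-1 distance.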
- class deel.lip.losses.MultiMargin(min_margin=1.0, reduction='auto', name='MultiMargin')
Bases:
Loss
Compute the hinge margin loss for multiclass classification (equivalent to PyTorch's multi_margin_loss).
Note that y_true should be one-hot encoded or pre-processed with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
- Parameters:
min_margin – positive float, margin to enforce.
reduction – passed to tf.keras.Loss constructor
name – passed to tf.keras.Loss constructor
- call(y_true, y_pred)
- get_config()
Returns the config dictionary for a Loss instance.
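To illustrate the multi_margin_loss formulation, here is a NumPy sketch: every wrong-class score that comes within min_margin of the true-class score contributes a penalty. The function name `multi_margin_sketch` is illustrative, and the library's reduction over classes may differ.

```python
import numpy as np

def multi_margin_sketch(y_true, y_pred, min_margin=1.0):
    """Multiclass hinge in the multi_margin_loss style.

    y_true: one-hot targets, shape (batch_size, # classes).
    y_pred: raw scores, same shape.
    """
    true_score = np.sum(y_true * y_pred, axis=-1, keepdims=True)
    # Penalty for each class whose score is within min_margin of the true class
    margins = np.maximum(0.0, min_margin - true_score + y_pred)
    margins = margins * (1.0 - y_true)  # drop the true-class term
    return margins.sum(axis=-1)         # per-sample loss

y_true = np.array([[0.0, 1.0, 0.0]])
y_pred = np.array([[0.5, 1.0, -0.2]])
print(multi_margin_sketch(y_true, y_pred, min_margin=1.0))
```

In this example only the first wrong class (score 0.5, within 1.0 of the true-class score 1.0) is penalized.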
- class deel.lip.losses.MulticlassHKR(alpha=10.0, min_margin=1.0, multi_gpu=False, reduction='auto', name='MulticlassHKR')
Bases:
Loss
The multiclass version of HKR. This is done by computing the HKR term over each class and averaging the results.
Note that y_true should be one-hot encoded or pre-processed with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
Using a multi-GPU/TPU strategy requires setting multi_gpu to True and pre-processing the labels y_true with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
- Parameters:
alpha – regularization factor
min_margin – positive float, margin to enforce.
multi_gpu (bool) – set to True when running on multi-GPU/TPU
reduction – passed to tf.keras.Loss constructor
name – passed to tf.keras.Loss constructor
- call(y_true, y_pred)
Invokes the Loss instance.
- Parameters:
y_true – Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]
y_pred – The predicted values. shape = [batch_size, d0, .. dN]
- Returns:
Loss values with the shape [batch_size, d0, .. dN-1].
- get_config()
Returns the config dictionary for a Loss instance.
- hkr(y_true, y_pred)
- class deel.lip.losses.MulticlassHinge(min_margin=1.0, reduction='auto', name='MulticlassHinge')
Bases:
Loss
Loss to estimate the hinge loss in a multiclass setup. It computes the element-wise hinge term. Note that this formulation differs from the one commonly found in TensorFlow/PyTorch (which maximises the difference between the two largest logits). This formulation is consistent with the binary classification loss used in a multiclass fashion.
Note that y_true should be one-hot encoded or pre-processed with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
- Parameters:
min_margin – positive float, margin to enforce.
reduction – passed to tf.keras.Loss constructor
name – passed to tf.keras.Loss constructor
- call(y_true, y_pred)
- get_config()
Returns the config dictionary for a Loss instance.
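The element-wise formulation described above can be sketched in NumPy: the one-hot target is mapped to +/-1 signs and the binary hinge is applied per output. The function name `multiclass_hinge_sketch` is illustrative; the library's weighting and reduction may differ.

```python
import numpy as np

def multiclass_hinge_sketch(y_true, y_pred, min_margin=1.0):
    """Element-wise multiclass hinge: each output is treated as an
    independent binary problem with target sign +1 (true class) or -1.
    """
    signs = 2.0 * y_true - 1.0                        # 1 -> +1, 0 -> -1
    hinge = np.maximum(0.0, min_margin - signs * y_pred)
    return hinge.mean(axis=-1)                        # per-sample loss

y_true = np.array([[0.0, 1.0]])
y_pred = np.array([[0.5, 0.5]])
print(multiclass_hinge_sketch(y_true, y_pred, min_margin=1.0))  # -> [1.]
```

Unlike the top-two-logits formulation, every output contributes to the loss, which matches the binary hinge applied class by class.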
- class deel.lip.losses.MulticlassKR(multi_gpu=False, reduction='auto', name='MulticlassKR')
Bases:
Loss
Loss to estimate the average Wasserstein-1 distance, using the Kantorovich-Rubinstein duality, over the outputs. In this multiclass setup, the KR term is computed for each class and then averaged.
Note that y_true should be one-hot encoded or pre-processed with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
Using a multi-GPU/TPU strategy requires setting multi_gpu to True and pre-processing the labels y_true with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
- Parameters:
multi_gpu (bool) – set to True when running on multi-GPU/TPU
reduction – passed to tf.keras.Loss constructor
name – passed to tf.keras.Loss constructor
- call(y_true, y_pred)
- get_config()
Returns the config dictionary for a Loss instance.
- class deel.lip.losses.TauCategoricalCrossentropy(tau, reduction='auto', name='TauCategoricalCrossentropy')
Bases:
Loss
Similar to original categorical crossentropy, but with a settable temperature parameter.
- Parameters:
tau – temperature parameter.
reduction – reduction of the loss, passed to original loss.
name – name of the loss
- call(y_true, y_pred, *args, **kwargs)
Invokes the Loss instance.
- Parameters:
y_true – Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]
y_pred – The predicted values. shape = [batch_size, d0, .. dN]
- Returns:
Loss values with the shape [batch_size, d0, .. dN-1].
- get_config()
Returns the config dictionary for a Loss instance.
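One plausible reading of the temperature parameter is that the logits are scaled by tau before the softmax and the resulting crossentropy is rescaled by 1/tau; check the library source for the exact scaling. A NumPy sketch under that assumption (the function name is hypothetical):

```python
import numpy as np

def tau_categorical_crossentropy(y_true, y_pred, tau=1.0):
    """Temperature-scaled categorical crossentropy (sketch).

    y_true: one-hot targets; y_pred: raw logits, shape (batch, classes).
    Logits are scaled by tau before the softmax, and the loss is
    rescaled by 1/tau to stay comparable across temperatures.
    """
    z = tau * y_pred
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    log_softmax = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -(y_true * log_softmax).sum(axis=-1) / tau

y_true = np.array([[1.0, 0.0]])
logits = np.array([[0.0, 0.0]])
print(tau_categorical_crossentropy(y_true, logits, tau=1.0))  # -> [log(2)]
```

With tau=1 this is the ordinary softmax crossentropy; smaller tau softens the softmax, which is useful when the 1-Lipschitz constraint keeps the logits in a narrow range.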
- deel.lip.losses.hinge_margin(y_true, y_pred, min_margin)
Compute the element-wise binary hinge margin loss.
Note that y_true and y_pred must be of rank 2: (batch_size, 1) or (batch_size, C) for multilabel classification (with C categories). y_true accepts label values in (0, 1), (-1, 1), or pre-processed with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
- Parameters:
min_margin – positive float, margin to enforce.
- Returns:
Element-wise hinge margin loss value.
- deel.lip.losses.multiclass_hinge(y_true, y_pred, min_margin)
Compute the multi-class hinge margin loss.
y_true and y_pred must be of shape (batch_size, # classes). Note that y_true should be one-hot encoded or pre-processed with the
deel.lip.utils.process_labels_for_multi_gpu()
function.
- Parameters:
y_true – tensor of true targets of shape (batch_size, # classes)
y_pred – tensor of predicted targets of shape (batch_size, # classes)
min_margin – positive float, margin to enforce.
- Returns:
Element-wise multi-class hinge margin loss value.