deel.lip.losses module

This module contains losses used in Wasserstein distance estimation. See https://arxiv.org/abs/2006.06520 for more information.

deel.lip.losses.HKR_loss(alpha, min_margin=1)

Wasserstein loss with a regularization parameter based on the hinge loss.

\[\inf_{f \in Lip_1(\Omega)} \underset{\textbf{x} \sim P_-}{\mathbb{E}} \left[f(\textbf{x} )\right] - \underset{\textbf{x} \sim P_+} {\mathbb{E}} \left[f(\textbf{x} )\right] + \alpha \underset{\textbf{x}}{\mathbb{E}} \left(\text{min_margin} -Yf(\textbf{x})\right)_+\]
Parameters
  • alpha – regularization factor

  • min_margin – minimal margin (see hinge_margin_loss)

Note

In order to be consistent between the hinge and KR (Kantorovich-Rubinstein) terms of the loss, the first label must yield the positive class while the second yields the negative class.

Returns

a function that computes the regularized Wasserstein loss
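
A minimal usage sketch, assuming the returned callable follows the Keras (y_true, y_pred) convention and that binary labels are encoded as {1, -1} to match the Y of the hinge term; all tensor values are illustrative:

```python
import tensorflow as tf
from deel.lip.losses import HKR_loss

# Build the loss callable; alpha weights the hinge term against the KR term.
loss_fn = HKR_loss(alpha=10.0, min_margin=1)

# Hypothetical batch: labels assumed encoded as {1, -1}
# (first label = positive class, consistent with the note above).
y_true = tf.constant([[1.0], [1.0], [-1.0], [-1.0]])
y_pred = tf.constant([[0.7], [0.2], [-0.5], [-0.1]])  # f(x) for each sample

print(loss_fn(y_true, y_pred))
```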

deel.lip.losses.HKR_multiclass_loss(alpha=0.0, min_margin=1)

The multiclass version of HKR. This is done by computing the HKR term over each class and averaging the results.

Parameters
  • alpha – regularization factor

  • min_margin – minimal margin (see Hinge_multiclass_loss)

Note

y_true has to be one-hot encoded.

Returns

Callable, the function to compute HKR loss
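
A similar sketch for the multiclass variant, under the same Keras-callable assumption; y_true must be one-hot encoded, and the per-class scores are made up:

```python
import tensorflow as tf
from deel.lip.losses import HKR_multiclass_loss

loss_fn = HKR_multiclass_loss(alpha=10.0, min_margin=1)

y_true = tf.one_hot([0, 2, 1], depth=3)  # one-hot labels, as required
y_pred = tf.constant([[ 1.2, -0.3, -0.8],
                      [-0.5, -0.2,  0.9],
                      [-0.1,  0.4, -0.6]])  # per-class scores f_k(x), made up

print(loss_fn(y_true, y_pred))
```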

deel.lip.losses.Hinge_multiclass_loss(min_margin=1)

Loss to estimate the Hinge loss in a multiclass setup. It computes the elementwise hinge term. Note that this formulation differs from the one commonly found in TensorFlow/PyTorch (which maximises the difference between the two largest logits). This formulation is consistent with the binary classification loss used in a multiclass fashion.

Note

y_true should be one-hot encoded, with labels in {1, 0}.

Returns

Callable, the function to compute multiclass Hinge loss
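
Since the returned callable is a standard Keras loss, it can be passed directly to model.compile. The plain Dense model below is a placeholder assumption, not the intended 1-Lipschitz architecture:

```python
import tensorflow as tf
from deel.lip.losses import Hinge_multiclass_loss

# Placeholder model for illustration only; a real setup would use
# 1-Lipschitz layers (e.g. from deel.lip.layers).
model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
model.compile(optimizer="adam", loss=Hinge_multiclass_loss(min_margin=1))
```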

deel.lip.losses.KR_loss()

Loss to estimate the Wasserstein-1 distance using Kantorovich-Rubinstein duality. The Kantorovich-Rubinstein duality is formulated as follows:

\[W_1(\mu, \nu) = \sup_{f \in Lip_1(\Omega)} \underset{\textbf{x} \sim \mu}{\mathbb{E}} \left[f(\textbf{x} )\right] - \underset{\textbf{x} \sim \nu}{\mathbb{E}} \left[f(\textbf{x} )\right]\]

where μ and ν stand for the two distributions: the distribution where the label is 1, and the rest.

Returns

Callable, the function to compute Wasserstein loss
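
A sketch of a direct evaluation, assuming the Keras (y_true, y_pred) convention and the label convention stated above (1 marks samples from μ, 0 the rest); values are illustrative:

```python
import tensorflow as tf
from deel.lip.losses import KR_loss

loss_fn = KR_loss()

y_true = tf.constant([[1.0], [1.0], [0.0], [0.0]])    # 1 -> mu, 0 -> nu (assumed encoding)
y_pred = tf.constant([[0.9], [0.6], [-0.4], [-0.7]])  # f(x); f should be 1-Lipschitz

# Empirical KR estimate: mean of f over mu minus mean of f over nu.
print(loss_fn(y_true, y_pred))
```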

deel.lip.losses.KR_multiclass_loss()

Loss to estimate the average of the W1 distances using Kantorovich-Rubinstein duality over the outputs. In this multiclass setup, the KR term is computed for each class and then averaged.

Note

y_true has to be one-hot encoded (labels being 1s and 0s).

Returns

Callable, the function to compute Wasserstein multiclass loss.
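
One plausible use (an assumption, not prescribed here) is to monitor this KR term as a Keras metric while training with the regularized HKR loss:

```python
import tensorflow as tf
from deel.lip.losses import HKR_multiclass_loss, KR_multiclass_loss

# Placeholder model, not 1-Lipschitz; for illustration only.
model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
model.compile(
    optimizer="adam",
    loss=HKR_multiclass_loss(alpha=10.0, min_margin=1),
    metrics=[KR_multiclass_loss()],  # track the W1 estimate during training
)
```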

deel.lip.losses.MultiMarginLoss(min_margin=1)

Compute the mean hinge margin loss for multiclass classification (equivalent to PyTorch's multi_margin_loss).

Parameters
  • min_margin – the minimal margin to enforce.

Note

y_true has to be one-hot encoded (to_categorical).

Returns

Callable, the function to compute multi margin loss
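
A sketch using tf.keras.utils.to_categorical for the required one-hot labels, with illustrative values and the usual Keras callable convention assumed:

```python
import tensorflow as tf
from deel.lip.losses import MultiMarginLoss

loss_fn = MultiMarginLoss(min_margin=1)

y_true = tf.keras.utils.to_categorical([0, 2], num_classes=3)  # one-hot, as required
y_pred = tf.constant([[0.8, 0.1, -0.2],
                      [0.3, -0.4, 0.6]])  # made-up per-class scores

print(loss_fn(y_true, y_pred))
```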

deel.lip.losses.hinge_margin_loss(min_margin=1)

Compute the hinge margin loss.

\[\underset{\textbf{x}}{\mathbb{E}} \left(\text{min_margin} -Yf(\textbf{x})\right)_+\]
Parameters
  • min_margin – the minimal margin to enforce.

Returns

a function that computes the hinge loss.
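
A sketch of the binary hinge term, assuming Y is encoded in {1, -1} as in the formula above; only the second sample violates the margin and contributes to the loss:

```python
import tensorflow as tf
from deel.lip.losses import hinge_margin_loss

loss_fn = hinge_margin_loss(min_margin=1)

y_true = tf.constant([[1.0], [-1.0]])  # Y assumed in {1, -1}
y_pred = tf.constant([[1.5], [0.2]])   # (1 - 1.5)_+ = 0, (1 + 0.2)_+ = 1.2

print(loss_fn(y_true, y_pred))
```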

deel.lip.losses.neg_KR_loss()

Loss to compute the negative Wasserstein-1 distance using Kantorovich-Rubinstein duality. This allows maximisation of the term with conventional (minimizing) optimizers.

Returns

Callable, the function to compute negative Wasserstein loss
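
A sketch: since conventional optimizers minimize, compiling with neg_KR_loss effectively maximizes the KR estimate of W1 (placeholder model, assumed Keras-compatible callable):

```python
import tensorflow as tf
from deel.lip.losses import neg_KR_loss

# Placeholder model, not 1-Lipschitz; for illustration only.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss=neg_KR_loss())  # minimizing -KR maximizes KR
```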