deel.lip.metrics module

This module contains metrics applicable in provable robustness. See https://arxiv.org/abs/2006.06520 and https://arxiv.org/abs/2108.04062 for more information.

class deel.lip.metrics.BinaryProvableAvgRobustness(lip_const=1.0, negative_robustness=False, reduction='auto', name='BinaryProvableAvgRobustness')

Bases: Loss

Compute the average provable robustness radius on the dataset.

\[\mathbb{E}_{x \in D}\left[ \frac{\phi\left(\mathcal{M}_f(x)\right)}{ L_f}\right]\]

\(\mathcal{M}_f(x)\) is a term that is positive when x is correctly classified and negative otherwise. In both cases, its value gives the robustness radius around x.

In the binary classification setup we have:

\[\mathcal{M}_f(x) = f(x) \text{ if } l=1, \quad -f(x) \text{ otherwise}\]

Where \(D\) is the dataset, \(l\) is the correct label for \(x\) and \(L_f\) is the Lipschitz constant of the network.

When negative_robustness is set to True, misclassified elements count as negative robustness (\(\phi\) acts as the identity function); when set to False, misclassified elements yield a robustness radius of 0 (\(\phi(x)=\mathrm{relu}(x)\)). In both cases, misclassified elements are still included when computing the mean.

This metric works for labels both in {1,0} and {1,-1}.
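To make the formula concrete, the sketch below mirrors the computation in plain NumPy. Note that binary_avg_robustness is a hypothetical helper written for illustration only; the actual metric operates on TensorFlow tensors and is used via model.compile:

```python
import numpy as np

def binary_avg_robustness(y_true, y_pred, lip_const=1.0, negative_robustness=False):
    """Illustrative sketch of the average provable robustness radius.

    Accepts labels in {0, 1} or {-1, 1}; y_pred is the scalar output f(x)
    of a lip_const-Lipschitz network.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    sign = np.where(y_true > 0, 1.0, -1.0)  # maps both label conventions to +/-1
    margin = sign * y_pred                  # M_f(x): positive iff x is correctly classified
    # phi: identity keeps negative radii, relu clips them to 0
    phi = margin if negative_robustness else np.maximum(margin, 0.0)
    return float(np.mean(phi) / lip_const)
```

For labels [1, 0] and outputs [2.0, 1.0], the second point is misclassified: with negative_robustness=False it contributes 0 to the mean (result 1.0), with negative_robustness=True it contributes -1 (result 0.5).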

Parameters:
  • lip_const – Lipschitz constant of the network

  • reduction – the reduction method applied when training on a multi-GPU/TPU system

  • name – metric name.

call(y_true, y_pred)
get_config()

Returns the config dictionary for a Loss instance.

class deel.lip.metrics.BinaryProvableRobustAccuracy(epsilon=0.1411764705882353, lip_const=1.0, reduction='auto', name='BinaryProvableRobustAccuracy')

Bases: Loss

The accuracy that can be proved at a given epsilon.
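In other words, a point counts as robustly correct when its certified radius \(\mathcal{M}_f(x)/L_f\) exceeds epsilon. A minimal NumPy sketch (binary_robust_accuracy is a hypothetical helper, not the library implementation; the default epsilon matches the documented 0.1411764705882353, i.e. 36/255):

```python
import numpy as np

def binary_robust_accuracy(y_true, y_pred, epsilon=36 / 255, lip_const=1.0):
    """Fraction of points whose certified radius exceeds epsilon (sketch)."""
    sign = np.where(np.asarray(y_true, dtype=float) > 0, 1.0, -1.0)
    margin = sign * np.asarray(y_pred, dtype=float)  # M_f(x)
    return float(np.mean(margin / lip_const > epsilon))
```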

Parameters:
  • epsilon – the metric will return the accuracy guaranteed within the radius epsilon (default 0.1411764705882353, i.e. 36/255)

  • lip_const – Lipschitz constant of the network

  • reduction – the reduction method applied when training on a multi-GPU/TPU system

  • name – metric name.

call(y_true, y_pred)
get_config()

Returns the config dictionary for a Loss instance.

class deel.lip.metrics.CategoricalProvableAvgRobustness(lip_const=1.0, disjoint_neurons=True, negative_robustness=False, reduction='auto', name='CategoricalProvableAvgRobustness')

Bases: Loss

Compute the average provable robustness radius on the dataset.

\[\mathbb{E}_{x \in D}\left[ \frac{\phi\left(\mathcal{M}_f(x)\right)}{ L_f}\right]\]

\(\mathcal{M}_f(x)\) is a term that is positive when x is correctly classified and negative otherwise. In both cases, its value gives the robustness radius around x.

In the multiclass setup we have:

\[\mathcal{M}_f(x) =f_l(x) - \max_{i \neq l} f_i(x)\]

Where \(D\) is the dataset, \(l\) is the correct label for \(x\) and \(L_f\) is the Lipschitz constant of the network (\(L = 2 \times \text{lip_const}\) when disjoint_neurons=True, \(L = \sqrt{2} \times \text{lip_const}\) otherwise).

When negative_robustness is set to True, misclassified elements count as negative robustness (\(\phi\) acts as the identity function); when set to False, misclassified elements yield a robustness radius of 0 (\(\phi(x)=\mathrm{relu}(x)\)). In both cases, misclassified elements are still included when computing the mean.

This metric works for labels both in {1,0} and {1,-1}.
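The multiclass margin and the resulting average radius can be sketched in plain NumPy as follows. categorical_avg_robustness is a hypothetical helper for illustration; the library metric works on TensorFlow tensors:

```python
import numpy as np

def categorical_avg_robustness(y_true, y_pred, lip_const=1.0,
                               disjoint_neurons=True, negative_robustness=False):
    """Illustrative sketch of the multiclass average certified radius.

    y_true is one-hot (entries in {0, 1} or {-1, 1}); y_pred holds the logits.
    """
    mask = np.asarray(y_true, dtype=float) > 0  # one-hot labels -> boolean mask
    y_pred = np.asarray(y_pred, dtype=float)
    f_l = y_pred[mask]                          # logit of the true class l
    runner_up = np.where(mask, -np.inf, y_pred).max(axis=-1)  # max_{i != l} f_i(x)
    margin = f_l - runner_up                    # M_f(x)
    # effective Lipschitz constant of the margin
    L = (2.0 if disjoint_neurons else np.sqrt(2.0)) * lip_const
    phi = margin if negative_robustness else np.maximum(margin, 0.0)
    return float(np.mean(phi) / L)
```

For logits [[3, 1, 0], [0, 2, 1]] with true classes 0 and 1, the margins are 2 and 1; with disjoint_neurons=True the result is (2 + 1) / 2 / 2 = 0.75.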

Parameters:
  • lip_const – Lipschitz constant of the network

  • disjoint_neurons – must be set to True if your model ends with a FrobeniusDense layer with disjoint_neurons set to True; set to False otherwise

  • reduction – the reduction method applied when training on a multi-GPU/TPU system

  • name – metric name.

call(y_true, y_pred)
get_config()

Returns the config dictionary for a Loss instance.

class deel.lip.metrics.CategoricalProvableRobustAccuracy(epsilon=0.1411764705882353, lip_const=1.0, disjoint_neurons=True, reduction='auto', name='CategoricalProvableRobustAccuracy')

Bases: Loss

The accuracy that can be proved at a given epsilon.
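As in the binary case, a point is certified when its radius \(\mathcal{M}_f(x)/L\) exceeds epsilon, with \(L\) depending on disjoint_neurons as described for CategoricalProvableAvgRobustness. A NumPy sketch (categorical_robust_accuracy is a hypothetical helper, not the library implementation):

```python
import numpy as np

def categorical_robust_accuracy(y_true, y_pred, epsilon=36 / 255,
                                lip_const=1.0, disjoint_neurons=True):
    """Fraction of points certified robust at radius epsilon (sketch)."""
    mask = np.asarray(y_true, dtype=float) > 0  # one-hot labels -> boolean mask
    y_pred = np.asarray(y_pred, dtype=float)
    f_l = y_pred[mask]                          # logit of the true class l
    runner_up = np.where(mask, -np.inf, y_pred).max(axis=-1)  # max_{i != l} f_i(x)
    L = (2.0 if disjoint_neurons else np.sqrt(2.0)) * lip_const
    return float(np.mean((f_l - runner_up) / L > epsilon))
```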

Parameters:
  • epsilon – the metric will return the accuracy guaranteed within the radius epsilon (default 0.1411764705882353, i.e. 36/255)

  • lip_const – Lipschitz constant of the network

  • disjoint_neurons – must be set to True if your model ends with a FrobeniusDense layer with disjoint_neurons set to True; set to False otherwise

  • reduction – the reduction method applied when training on a multi-GPU/TPU system

  • name – metric name.

call(y_true, y_pred)
get_config()

Returns the config dictionary for a Loss instance.