deel.lip.activations module

This module contains extra activation functions which respect the Lipschitz constant. They can be added as layers, or passed as the activation parameter of other layers.
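
Both usage patterns are shown in the minimal sketch below (the choice of SpectralDense from deel.lip.layers, the layer sizes and the input shape are illustrative assumptions, not requirements):

    import tensorflow as tf
    from deel.lip.activations import GroupSort2
    from deel.lip.layers import SpectralDense

    # 1) The activation added as its own layer.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(784,)),
        SpectralDense(64),
        GroupSort2(),
        SpectralDense(10),
    ])

    # 2) The activation passed through the "activation" parameter of another layer.
    model_alt = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(784,)),
        SpectralDense(64, activation=GroupSort2()),
        SpectralDense(10),
    ])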

class deel.lip.activations.FullSort(*args, **kwargs)

Bases: deel.lip.activations.GroupSort

FullSort activation. Special case of GroupSort where the entire input is sorted.

Input shape:

Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape:

Same size as input.
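
An illustrative sketch (input values chosen for the example): FullSort sorts each input vector as a whole, in ascending order under the usual implementation:

    import tensorflow as tf
    from deel.lip.activations import FullSort

    x = tf.constant([[3.0, -1.0, 2.0, 0.0]])
    y = FullSort()(x)
    # y has the same shape as x, with each row fully sorted,
    # e.g. [[-1.0, 0.0, 2.0, 3.0]] assuming ascending order.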

class deel.lip.activations.GroupSort(*args, **kwargs)

Bases: tensorflow.python.keras.engine.base_layer.Layer, deel.lip.layers.LipschitzLayer

GroupSort activation.

Parameters
  • n – group size used when sorting. When None, the group size is set to the input size (FullSort behavior).

  • data_format – either channels_first or channels_last.

  • k_coef_lip – the Lipschitz coefficient to be enforced.

  • *args – positional params passed to the Layer base class.

  • **kwargs – named params passed to the Layer base class.

Input shape:

Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape:

Same size as input.
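
A minimal sketch of the group-wise behaviour (input values and group size chosen for the example): with n=2, channels are sorted two by two, independently of the other groups:

    import tensorflow as tf
    from deel.lip.activations import GroupSort

    act = GroupSort(n=2)                      # sort channels by groups of 2
    x = tf.constant([[4.0, 1.0, -2.0, 3.0]])  # one sample, 4 channels
    y = act(x)
    # each pair is sorted independently,
    # e.g. [[1.0, 4.0, -2.0, 3.0]] assuming ascending order within groups.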

build(input_shape)

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Parameters

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x, **kwargs)

This is where the layer’s logic lives.

Parameters
  • x – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments.

Returns

A tensor or list/tuple of tensors.

compute_output_shape(input_shape)

Computes the output shape of the layer.

If the layer has not been built, this method will call build on the layer. This assumes that the layer will later be used with inputs that match the input shape provided here.

Parameters

input_shape – Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns

An output shape tuple.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

class deel.lip.activations.GroupSort2(*args, **kwargs)

Bases: deel.lip.activations.GroupSort

GroupSort2 activation. Special case of GroupSort with group of size 2.

Input shape:

Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape:

Same size as input.
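
GroupSort2() is equivalent to GroupSort(n=2); a minimal sketch (values chosen for the example):

    import tensorflow as tf
    from deel.lip.activations import GroupSort2

    y = GroupSort2()(tf.constant([[2.0, -1.0, 0.0, 5.0]]))
    # pairs (2.0, -1.0) and (0.0, 5.0) are each sorted,
    # e.g. [[-1.0, 2.0, 0.0, 5.0]] assuming ascending order within pairs.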

class deel.lip.activations.MaxMin(*args, **kwargs)

Bases: tensorflow.python.keras.engine.base_layer.Layer, deel.lip.layers.LipschitzLayer

MaxMin activation: concatenates [ReLU(x), ReLU(-x)] along the channel axis.

Parameters
  • data_format – either channels_first or channels_last.

  • k_coef_lip – the Lipschitz coefficient to be enforced.

  • *args – positional params passed to the Layer base class.

  • **kwargs – named params passed to the Layer base class.

Input shape:

Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape:

Twice the channel size of the input.

References

M. Blot, M. Cord, and N. Thome, "Max-min convolutional neural networks for image classification", in 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 2016, pp. 3678-3682.
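
A minimal sketch (values chosen for the example) illustrating the doubling of the channel dimension:

    import tensorflow as tf
    from deel.lip.activations import MaxMin

    x = tf.constant([[1.0, -2.0]])  # shape (1, 2)
    y = MaxMin()(x)                 # shape (1, 4)
    # concatenation of ReLU(x) and ReLU(-x) along the channel axis,
    # expected here: [[1.0, 0.0, 0.0, 2.0]]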

build(input_shape)

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Parameters

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x, **kwargs)

This is where the layer’s logic lives.

Parameters
  • x – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments.

Returns

A tensor or list/tuple of tensors.

compute_output_shape(input_shape)

Computes the output shape of the layer.

If the layer has not been built, this method will call build on the layer. This assumes that the layer will later be used with inputs that match the input shape provided here.

Parameters

input_shape – Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns

An output shape tuple.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

deel.lip.activations.PReLUlip(k_coef_lip=1.0)

PReLU activation, with Lipschitz constraint.

Parameters

k_coef_lip – the Lipschitz coefficient to be enforced.
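
A minimal usage sketch (assuming, as the callable signature suggests, that the function returns a ready-to-use Keras activation layer):

    import tensorflow as tf
    from deel.lip.activations import PReLUlip

    act = PReLUlip(k_coef_lip=1.0)
    y = act(tf.constant([[-3.0, 2.0]]))
    # behaves like a PReLU whose learned negative slope is constrained
    # so that the activation stays k_coef_lip-Lipschitz.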