deel.lip.callbacks module

This module contains callbacks that can be added to the Keras training process.

class deel.lip.callbacks.CondenseCallback(on_epoch: bool = True, on_batch: bool = False)

Bases: Callback

Automatically condense layers of a model on batches/epochs. Condensing a layer consists of overwriting the kernel with the constrained weights, which prevents the values inside the original kernel from exploding or vanishing.

Warning

Overwriting the kernel may disturb the optimizer, especially if it has a non-zero momentum.

Parameters:
  • on_epoch – if True, apply the constraint at the end of each epoch

  • on_batch – if True, apply the constraint at the end of each batch
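Condensing can be pictured with a small numpy sketch (not the library's actual implementation; here the constraint is taken to be spectral normalization, i.e. rescaling the kernel so its largest singular value is at most 1):

```python
import numpy as np

def largest_singular_value(w: np.ndarray) -> float:
    # Exact computation via SVD; in practice power iteration is cheaper.
    return float(np.linalg.svd(w, compute_uv=False)[0])

def condense(kernel: np.ndarray) -> np.ndarray:
    # "Condensing": overwrite the stored kernel with its constrained
    # version, here rescaled so its largest singular value is <= 1.
    return kernel / max(largest_singular_value(kernel), 1.0)

rng = np.random.default_rng(0)
kernel = rng.normal(size=(16, 8)) * 10.0  # large unconstrained weights
kernel = condense(kernel)                 # values no longer explode

assert largest_singular_value(kernel) <= 1.0 + 1e-6
```

After condensing, the stored kernel and the constrained kernel coincide, which is why a momentum-based optimizer may be perturbed: its state still refers to the pre-condensation weights.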

get_config()
on_epoch_end(epoch: int, logs: Optional[Dict[str, float]] = None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model’s metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

on_train_batch_end(batch: int, logs: Optional[Dict[str, float]] = None)

Called at the end of a training batch in fit methods.

Subclasses should override for any actions to run.

Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches.

Parameters:
  • batch – Integer, index of batch within the current epoch.

  • logs – Dict. Aggregated metric results up until this batch.

class deel.lip.callbacks.LossParamLog(param_name, rate=1)

Bases: Callback

Logger to print values of a loss parameter at each epoch.

Parameters:
  • param_name (str) – name of the parameter of the loss to log.

  • rate (int) – logging rate (in epochs)
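A rough plain-Python sketch of the behaviour (no Keras dependency; DummyLoss and min_margin are made-up names): the logger reads a named attribute from the loss object every rate epochs.

```python
class DummyLoss:
    # Hypothetical loss object exposing a tunable parameter.
    min_margin = 0.5

def log_param(loss, param_name: str, epoch: int, rate: int = 1) -> None:
    # Print the parameter value only every `rate` epochs
    # (Keras epoch indices are 0-based).
    if epoch % rate == 0:
        print(f"epoch {epoch}: {param_name} = {getattr(loss, param_name)}")

loss = DummyLoss()
for epoch in range(4):
    log_param(loss, "min_margin", epoch, rate=2)
# prints at epochs 0 and 2 only
```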

get_config()
on_epoch_end(epoch: int, logs=None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model’s metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

class deel.lip.callbacks.LossParamScheduler(param_name, fp, xp, step=0)

Bases: Callback

Scheduler to modify a loss parameter during training. It uses linear interpolation (defined by xp and fp) based on the optimization step.

Parameters:
  • param_name (str) – name of the parameter of the loss to tune. Must be a tf.Variable.

  • fp (list) – values of the loss parameter at the steps given by xp.

  • xp (list) – steps at which the loss parameter takes the corresponding fp values.

  • step – step value, used for serialization/deserialization purposes.
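The interpolation works like numpy's np.interp: the parameter ramps linearly between the given steps and is clamped at the endpoints. A sketch with made-up values:

```python
import numpy as np

xp = [0, 1000]   # optimization steps
fp = [0.1, 1.0]  # desired parameter values at those steps

def param_at(step: int) -> float:
    # Linear interpolation, clamped outside [xp[0], xp[-1]].
    return float(np.interp(step, xp, fp))

assert abs(param_at(0) - 0.1) < 1e-9
assert abs(param_at(500) - 0.55) < 1e-9   # halfway up the ramp
assert abs(param_at(5000) - 1.0) < 1e-9   # clamped past the last step
```

Since the scheduled parameter must be a tf.Variable, the callback can assign the interpolated value in place at each training step without recompiling the loss.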

get_config()
on_train_batch_begin(batch: int, logs=None)

Called at the beginning of a training batch in fit methods.

Subclasses should override for any actions to run.

Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches.

Parameters:
  • batch – Integer, index of batch within the current epoch.

  • logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

class deel.lip.callbacks.MonitorCallback(monitored_layers: Iterable[str], logdir: str, target: str = 'kernel', what: str = 'max', on_epoch: bool = True, on_batch: bool = False)

Bases: Callback

Monitor the singular values of specified layers during training. The analysis is performed on the original kernel (before reparametrization). Two modes are available: “max” plots the largest singular value over training, while “all” plots the distribution of the singular values over training (a series of distributions).

Parameters:
  • monitored_layers – list of layer names to monitor.

  • logdir – path to the logging directory.

  • target – what to monitor: either “kernel” or “wbar”. “kernel” monitors the values of the unconstrained weights, while “wbar” monitors the values of the constrained weights (useful to check that the parameters actually satisfy the Lipschitz constraint).

  • what – either “max”, which displays the largest singular value over the training process, or “all”, which plots the distribution of all singular values.

  • on_epoch – if True, monitor the values at the end of each epoch.

  • on_batch – if True, monitor the values at the end of each batch.
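What gets logged in each mode can be sketched with numpy (the shapes and the use of a full SVD here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
kernel = rng.normal(size=(8, 4))  # an 8x4 dense kernel

# Singular values, returned in descending order by np.linalg.svd.
svs = np.linalg.svd(kernel, compute_uv=False)

sigma_max = float(svs[0])  # what="max": one scalar per log point
spectrum = svs.tolist()    # what="all": the full distribution

assert len(spectrum) == 4            # min(8, 4) singular values
assert sigma_max == max(spectrum)    # svd returns sorted values
assert all(s >= 0 for s in spectrum) # singular values are non-negative
```

For a 1-Lipschitz layer, monitoring “wbar” with what=“max” should show the largest singular value staying at (or below) 1 throughout training.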

get_config()
on_epoch_end(epoch, logs=None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model’s metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

on_train_batch_end(batch, logs=None)

Called at the end of a training batch in fit methods.

Subclasses should override for any actions to run.

Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches.

Parameters:
  • batch – Integer, index of batch within the current epoch.

  • logs – Dict. Aggregated metric results up until this batch.