UniformStrategy#

class deeplay.activelearning.strategies.uniform.UniformStrategy(*args, **kwargs)#

Bases: Strategy

Methods Summary

forward(x)

Same as torch.nn.Module.forward().

query_strategy(pool, n)

Implement the query strategy here.

training_step(batch, batch_idx)

Here you compute and return the training loss and some additional metrics, e.g. for the progress bar or logger.

Methods Documentation

forward(x)#

Same as torch.nn.Module.forward().

Args:

  • *args: Whatever you decide to pass into the forward method.

  • **kwargs: Keyword arguments are also possible.

Return:

Your model’s output
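As with any torch.nn.Module, the strategy is normally invoked by calling the instance, which dispatches to forward(). A minimal usage sketch, assuming strategy is an already-constructed UniformStrategy whose underlying model accepts image batches (the input shape is purely illustrative):

import torch

# `strategy` is assumed to be an already-constructed UniformStrategy;
# calling the instance is equivalent to calling strategy.forward(x).
x = torch.randn(8, 1, 28, 28)  # illustrative input batch
predictions = strategy(x)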

query_strategy(pool, n)#

Implement the query strategy here.
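The line above is inherited from the Strategy base class; for a uniform strategy, the query typically amounts to drawing n pool indices uniformly at random. A rough sketch of that pattern, assuming the pool supports len() and that a tensor of indices is an acceptable return value (an illustration of the idea, not the verbatim deeplay implementation):

import torch

def query_strategy(self, pool, n):
    # Uniform sampling: every sample in the unlabeled pool is equally
    # likely to be queried, so simply pick n indices at random.
    return torch.randperm(len(pool))[:n]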

training_step(batch, batch_idx)#

Here you compute and return the training loss and some additional metrics, e.g. for the progress bar or logger.

Args:

  • batch: The output of your data iterable, normally a DataLoader.

  • batch_idx: The index of this batch.

  • dataloader_idx: The index of the dataloader that produced this batch (only if multiple dataloaders are used).

Return:
  • Tensor - The loss tensor

  • dict - A dictionary which can include any keys, but must include the key 'loss' in the case of automatic optimization.

  • None - In automatic optimization, this will skip to the next batch (but is not supported for multi-GPU, TPU, or DeepSpeed). For manual optimization, this has no special meaning, as returning the loss is not required.

In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model-specific.

Example:

def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss

To use multiple optimizers, you can switch to ‘manual optimization’ and control their stepping:

def __init__(self):
    super().__init__()
    self.automatic_optimization = False


# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx):
    opt1, opt2 = self.optimizers()

    # do training_step with encoder
    ...
    opt1.step()
    # do training_step with decoder
    ...
    opt2.step()
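
The elided sections above usually also contain an explicit gradient reset and a manual backward pass; a hedged sketch of the full pattern, where loss_fn_1 and loss_fn_2 stand in for whatever objectives your model actually uses:

def training_step(self, batch, batch_idx):
    opt1, opt2 = self.optimizers()

    # Encoder update: reset gradients, backpropagate manually, then step.
    loss1 = self.loss_fn_1(batch)  # placeholder objective
    opt1.zero_grad()
    self.manual_backward(loss1)
    opt1.step()

    # Decoder update, analogous to the encoder update above.
    loss2 = self.loss_fn_2(batch)  # placeholder objective
    opt2.zero_grad()
    self.manual_backward(loss2)
    opt2.step()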

Note:

When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.
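
For instance, assuming a Lightning Trainer configured as below, a loss returned from training_step is divided by 4 before backpropagation, so the accumulated gradient matches a single optimizer step over the combined batch (the import path is an assumption; older releases expose the Trainer under pytorch_lightning):

from lightning.pytorch import Trainer

# Accumulate gradients over 4 batches; the returned loss is scaled by 1/4
# internally so the effective update matches one step on the combined batch.
trainer = Trainer(accumulate_grad_batches=4)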