VariationalAutoEncoder#
- class deeplay.applications.autoencoders.vae.VariationalAutoEncoder(*args, **kwargs)#
Bases: Application
Methods Summary
- compute_loss(y_hat, y, mu, log_var)
- decode(z)
- encode(x)
- forward(x): Same as torch.nn.Module.forward().
- reparameterize(mu, log_var)
- training_step(batch, batch_idx): Here you compute and return the training loss and some additional metrics, e.g. for the progress bar or logger.
Methods Documentation
- compute_loss(y_hat, y, mu, log_var)#
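No docstring is provided for this method. The sketch below shows a typical VAE loss under the assumption that y_hat is the reconstruction of y and that mu and log_var parameterize a diagonal Gaussian posterior: a reconstruction term plus the closed-form KL divergence against a standard normal. The mean-squared-error reconstruction and the sum reduction are assumptions, not taken from the deeplay implementation.

import torch
import torch.nn.functional as F

def compute_loss(self, y_hat, y, mu, log_var):
    # Reconstruction term: penalize mismatch between the decoder output and
    # the target (MSE is an assumption; other reconstruction losses are common).
    rec_loss = F.mse_loss(y_hat, y, reduction="sum")
    # Closed-form KL divergence between N(mu, exp(log_var)) and N(0, I).
    kl_loss = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return rec_loss + kl_loss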
- decode(z)#
- encode(x)#
- forward(x)#
Same as torch.nn.Module.forward().
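This is inherited boilerplate, so no VAE-specific detail is documented here. The sketch below assumes the conventional pipeline in which encode(x) returns the posterior mean and log-variance, a latent code is drawn via reparameterize, and decode(z) produces the reconstruction; the actual return values of the deeplay methods may differ.

def forward(self, x):
    # Encode the input into the parameters of the approximate posterior
    # (assumes encode returns a (mu, log_var) pair).
    mu, log_var = self.encode(x)
    # Draw a latent sample with the reparameterization trick.
    z = self.reparameterize(mu, log_var)
    # Decode the latent sample back into input space.
    y_hat = self.decode(z)
    return y_hat, mu, log_var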
- reparameterize(mu, log_var)#
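No docstring is provided. The standard reparameterization trick expresses a sample from N(mu, exp(log_var)) as a deterministic, differentiable function of mu and log_var plus independent noise, which lets gradients flow through the sampling step. A minimal sketch of that standard form (not necessarily the exact deeplay code):

import torch

def reparameterize(self, mu, log_var):
    # Standard deviation recovered from the log-variance.
    std = torch.exp(0.5 * log_var)
    # Noise drawn from a standard normal with the same shape as std.
    eps = torch.randn_like(std)
    # Differentiable sample from N(mu, std ** 2).
    return mu + eps * std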
- training_step(batch, batch_idx)#
Here you compute and return the training loss and some additional metrics, e.g. for the progress bar or logger.
- Args:
  batch: The output of your data iterable, normally a DataLoader.
  batch_idx: The index of this batch.
  dataloader_idx: The index of the dataloader that produced this batch (only if multiple dataloaders are used).
- Return:
  Tensor - The loss tensor.
  dict - A dictionary which can include any keys, but must include the key 'loss' in the case of automatic optimization.
  None - In automatic optimization, this will skip to the next batch (but is not supported for multi-GPU, TPU, or DeepSpeed). For manual optimization, this has no special meaning, as returning the loss is not required.
In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.
Example:
def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss
To use multiple optimizers, you can switch to ‘manual optimization’ and control their stepping:
def __init__(self):
    super().__init__()
    self.automatic_optimization = False

# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx):
    opt1, opt2 = self.optimizers()

    # do training_step with encoder
    ...
    opt1.step()

    # do training_step with decoder
    ...
    opt2.step()
- Note:
When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.