-柚子皮-
nn.L1Loss
Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and target y.
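A minimal sketch of its behavior (the tensor values here are arbitrary illustrations):

```python
import torch
import torch.nn as nn

loss_fn = nn.L1Loss()  # default reduction='mean'
input = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.5, 2.0, 2.0])
# mean(|1-1.5|, |2-2|, |3-2|) = (0.5 + 0 + 1) / 3 = 0.5
loss = loss_fn(input, target)
```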
nn.MSELoss
Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input x and target y.
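A quick hand-checkable example (values chosen arbitrarily):

```python
import torch
import torch.nn as nn

loss_fn = nn.MSELoss()  # default reduction='mean'
input = torch.tensor([0.0, 2.0])
target = torch.tensor([1.0, 0.0])
# mean((0-1)^2, (2-0)^2) = (1 + 4) / 2 = 2.5
loss = loss_fn(input, target)
```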
nn.CrossEntropyLoss
This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
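Because the criterion already applies log-softmax internally, it expects raw logits, not probabilities. The equivalence with the explicit two-step computation can be checked directly (logits and targets below are arbitrary):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 1.5, 0.3]])  # raw scores, no softmax applied
targets = torch.tensor([0, 1])            # class indices

ce = nn.CrossEntropyLoss()(logits, targets)
# equivalent two-step computation:
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)
```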
nn.CTCLoss
The Connectionist Temporal Classification loss.
nn.NLLLoss
The negative log likelihood loss.
nn.PoissonNLLLoss
Negative log likelihood loss with Poisson distribution of target.
nn.KLDivLoss
The Kullback-Leibler divergence loss.
nn.BCELoss
Creates a criterion that measures the Binary Cross Entropy between the target and the output:
nn.BCEWithLogitsLoss
This loss combines a Sigmoid layer and the BCELoss in one single class.
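The combined version is preferred because it is more numerically stable than applying a sigmoid followed by BCELoss; the two computations agree on well-behaved inputs (logits and targets below are arbitrary):

```python
import torch
import torch.nn as nn

logits = torch.tensor([0.8, -1.2, 0.3])   # raw scores
targets = torch.tensor([1.0, 0.0, 1.0])   # binary labels as floats

loss_with_logits = nn.BCEWithLogitsLoss()(logits, targets)
# equivalent but less numerically stable two-step version:
loss_manual = nn.BCELoss()(torch.sigmoid(logits), targets)
```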
nn.MarginRankingLoss
Creates a criterion that measures the loss given inputs x1, x2, two 1D mini-batch Tensors, and a label 1D mini-batch tensor y (containing 1 or -1).
nn.HingeEmbeddingLoss
Measures the loss given an input tensor x and a labels tensor y (containing 1 or -1).
nn.MultiLabelMarginLoss
Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 2D Tensor of target class indices).
nn.SmoothL1Loss
Creates a criterion that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise.
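The piecewise behavior can be seen by disabling reduction (beta and the tensor values below are arbitrary; the `beta` keyword is assumed to be available, as in recent PyTorch versions):

```python
import torch
import torch.nn as nn

loss_fn = nn.SmoothL1Loss(reduction='none', beta=1.0)
input = torch.tensor([0.0, 0.0])
target = torch.tensor([0.5, 2.0])
# |error| = 0.5 < beta: squared term 0.5 * 0.5**2 / beta = 0.125
# |error| = 2.0 >= beta: L1 term 2.0 - 0.5 * beta = 1.5
loss = loss_fn(input, target)
```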
nn.SoftMarginLoss
Creates a criterion that optimizes a two-class classification logistic loss between input tensor x and target tensor y (containing 1 or -1).
nn.MultiLabelSoftMarginLoss
Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input x and target y of size (N, C).
nn.CosineEmbeddingLoss
Creates a criterion that measures the loss given input tensors x1, x2 and a Tensor label y with values 1 or -1.
nn.MultiMarginLoss
Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 1D tensor of target class indices, 0 ≤ y ≤ x.size(1) − 1):
nn.TripletMarginLoss
Creates a criterion that measures the triplet loss given input tensors x1, x2, x3 and a margin with a value greater than 0.
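A minimal sketch with random embeddings (shapes, margin, and the way the positive is constructed are arbitrary illustrations):

```python
import torch
import torch.nn as nn

loss_fn = nn.TripletMarginLoss(margin=1.0, p=2)
anchor = torch.randn(4, 8)                     # batch of anchor embeddings
positive = anchor + 0.05 * torch.randn(4, 8)   # slightly perturbed: close to anchor
negative = torch.randn(4, 8)                   # unrelated embeddings
# loss pushes d(anchor, positive) + margin below d(anchor, negative)
loss = loss_fn(anchor, positive, negative)
```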
nn.TripletMarginWithDistanceLoss
Creates a criterion that measures the triplet loss given input tensors aa , pp , and nn (representing anchor, positive, and negative examples, respectively), and a nonnegative, real-valued function (“distance function”) used to compute the relationship between the anchor and positive example (“positive distance”) and the anchor and negative example (“negative distance”).
Parameters
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
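The three reduction modes can be compared on the same inputs (L1Loss and the tensor values below are arbitrary choices for illustration):

```python
import torch
import torch.nn as nn

input = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([2.0, 2.0, 2.0])

per_elem = nn.L1Loss(reduction='none')(input, target)   # one value per element
mean_loss = nn.L1Loss(reduction='mean')(input, target)  # their mean
sum_loss = nn.L1Loss(reduction='sum')(input, target)    # their sum
```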
[CROSSENTROPYLOSS]
[Summary of PyTorch loss functions]
Example:
import torch.nn.functional as F
labels = dataloader["label"]                  # batch labels from the data loader
predictions = outputs.squeeze().contiguous()  # model outputs, assumed already in [0, 1]
loss = F.binary_cross_entropy(predictions, labels, reduction='mean')
from: -柚子皮-
ref: [https://pytorch.org/docs/stable/nn.html#loss-functions]