Multi-task loss balancing aims to automatically set each task’s weight so that no single loss dominates training.
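One common balancing scheme (in the spirit of uncertainty weighting, Kendall et al.) gives each task a learnable log-variance and combines losses as `exp(-s_i) * L_i + s_i`. A minimal sketch, with the function name and example values chosen for illustration:

```python
import numpy as np

def balanced_loss(task_losses, log_vars):
    """Combine per-task losses using learnable log-variances s_i."""
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    # exp(-s_i) down-weights a task with a large or noisy loss; the +s_i
    # term stops the optimizer from driving every weight to zero.
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))

# A task with a large raw loss but a large log-variance contributes little:
print(balanced_loss([10.0, 0.5], [2.0, 0.0]))  # ≈ 3.85, not 10.5
```

In practice the `log_vars` would be trainable parameters updated by the same optimizer as the model weights, so the balance adjusts itself during training.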
0-1 loss directly measures classification error but is discontinuous and non-convex, making optimization computationally hard.
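The 0-1 loss is trivial to state but shows why it is hard to optimize: it is a step function of the model's parameters, so its gradient is zero almost everywhere. A minimal sketch:

```python
def zero_one_loss(y_true, y_pred):
    # Fraction of misclassified examples; piecewise-constant in the model
    # parameters, hence uninformative gradients for gradient-based training.
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

print(zero_one_loss([1, 0, 1, 1], [1, 1, 1, 0]))  # 0.5 (2 of 4 wrong)
```

This is why training typically uses a smooth surrogate (cross-entropy, hinge) and reserves 0-1 loss for evaluation as error rate.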
Knowledge distillation loss blends standard hard-label cross-entropy with a soft distribution match from a teacher using a temperature parameter.
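A minimal sketch of the blend, assuming logit inputs and a hard label index; the `T**2` rescaling follows the common convention of matching gradient magnitudes between the soft and hard terms:

```python
import numpy as np

def softmax(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_idx, T=2.0, alpha=0.5):
    # Hard term: standard cross-entropy against the ground-truth label.
    hard = -np.log(softmax(student_logits)[true_idx])
    # Soft term: cross-entropy between temperature-softened teacher and
    # student distributions; higher T exposes the teacher's "dark knowledge"
    # about relative class similarities.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft = -np.sum(p_teacher * np.log(p_student)) * T ** 2
    return float(alpha * hard + (1.0 - alpha) * soft)
```

With `alpha=1.0` this reduces to plain cross-entropy on the hard labels; `alpha` and `T` are tuned per task.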
Perceptual loss compares images in a deep network’s feature space rather than raw pixels, which aligns better with human judgment of similarity.
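A minimal sketch of the idea. In practice `features` would be an activation map from a pretrained network (a VGG layer is a common choice); here a toy horizontal-gradient filter stands in as a placeholder so the example stays self-contained:

```python
import numpy as np

def features(img):
    # Placeholder "feature extractor": horizontal gradients. A real
    # perceptual loss would use pretrained-network activations instead.
    img = np.asarray(img, dtype=float)
    return img[:, 1:] - img[:, :-1]

def perceptual_loss(img_a, img_b):
    # Mean squared distance in feature space rather than pixel space.
    return float(np.mean((features(img_a) - features(img_b)) ** 2))

# A uniform brightness shift changes every pixel but no structure:
a = [[0.0, 0.0, 1.0, 1.0]]
b = [[0.5, 0.5, 1.5, 1.5]]  # same image, +0.5 brightness
# Pixel MSE is 0.25, yet the perceptual loss is 0 — the edges match.
print(perceptual_loss(a, b))  # 0.0
```

This illustrates why feature-space comparison tracks perceived similarity better than pixel-wise error: small shifts or global changes that leave structure intact incur little penalty.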
Triplet loss and contrastive loss are metric-learning objectives that teach a model to map similar items close together and dissimilar items far apart in an embedding space.
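Minimal sketches of both objectives, using squared Euclidean distance for the triplet term and plain Euclidean distance for the contrastive term (margins and distance choices vary by implementation):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    d_pos = float(np.sum((np.asarray(anchor, dtype=float) - positive) ** 2))
    d_neg = float(np.sum((np.asarray(anchor, dtype=float) - negative) ** 2))
    # Zero once the negative is at least `margin` farther than the positive.
    return max(0.0, d_pos - d_neg + margin)

def contrastive_loss(x1, x2, same, margin=1.0):
    d = float(np.linalg.norm(np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)))
    if same:
        return d ** 2                   # pull similar pairs together
    return max(0.0, margin - d) ** 2    # push dissimilar pairs past the margin

# Negative already far away -> no gradient signal from this triplet:
print(triplet_loss([0, 0], [0.1, 0], [2, 0]))  # 0.0
```

Triplets that already satisfy the margin contribute nothing, which is why triplet-based training usually relies on hard-negative mining to keep informative examples in each batch.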
Focal loss reshapes cross-entropy with a modulating factor (1 − p_t)^γ so that hard, misclassified examples dominate the loss while easy, well-classified ones are down-weighted.
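A minimal binary-case sketch (the full formulation also includes a class-balancing weight α, omitted here for brevity):

```python
import math

def focal_loss(p, y, gamma=2.0):
    # p: predicted probability of the positive class; y in {0, 1}.
    pt = p if y == 1 else 1.0 - p   # probability assigned to the true class
    # (1 - pt)**gamma shrinks the loss for easy examples (pt near 1) and
    # leaves hard examples (pt near 0) at almost full cross-entropy.
    return -((1.0 - pt) ** gamma) * math.log(pt)

print(focal_loss(0.9, 1))  # easy example: tiny loss (~0.001 vs CE ~0.105)
print(focal_loss(0.1, 1))  # hard example: large loss (~1.87 vs CE ~2.30)
```

Setting `gamma=0` recovers plain cross-entropy; larger γ pushes training focus further toward the hard examples, which is the behavior that makes it useful under extreme class imbalance.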
Huber loss behaves like mean squared error (quadratic) for small residuals and like mean absolute error (linear) for large residuals, making it both stable and robust.
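The piecewise definition is short enough to show directly; `delta` is the threshold where the loss switches from quadratic to linear:

```python
def huber_loss(residual, delta=1.0):
    a = abs(residual)
    if a <= delta:
        return 0.5 * a ** 2               # quadratic (MSE-like) near zero
    return delta * (a - 0.5 * delta)      # linear (MAE-like) in the tails

print(huber_loss(0.5))   # small residual: 0.125, same as 0.5 * MSE
print(huber_loss(3.0))   # large residual: 2.5, grows only linearly
```

The two branches agree in value and slope at `|residual| = delta`, so the loss stays smooth: stable gradients near the optimum, bounded influence for outliers.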
Cross-entropy loss measures how well predicted probabilities match the true labels by penalizing confident wrong predictions heavily.
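A minimal sketch for a single example, assuming the input is already a probability distribution over classes:

```python
import math

def cross_entropy(probs, true_idx):
    # -log of the probability assigned to the true class: near 0 when the
    # model is confident and right, very large when confident and wrong.
    return -math.log(probs[true_idx])

print(cross_entropy([0.70, 0.20, 0.10], 0))  # ≈ 0.36, mostly right
print(cross_entropy([0.01, 0.20, 0.79], 0))  # ≈ 4.61, confident and wrong
```

The logarithm is what produces the heavy penalty: halving the probability on the true class adds a constant to the loss, so driving it toward zero sends the loss toward infinity.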