Groups
0-1 loss directly measures classification error but is discontinuous and non-convex, making optimization computationally hard.
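A minimal sketch of the contrast: 0-1 loss is piecewise constant (zero gradient almost everywhere), which is why practitioners minimize a convex surrogate such as the hinge loss instead. The function names and example data here are illustrative, not from the original.

```python
import numpy as np

def zero_one_loss(y_true, y_pred):
    # Fraction of misclassified examples; discontinuous and flat,
    # so it provides no useful gradient signal for optimization.
    return np.mean(y_true != y_pred)

def hinge_loss(y_true, scores):
    # Convex surrogate: penalizes any margin y * score below 1,
    # yielding subgradients an optimizer can follow.
    return np.mean(np.maximum(0.0, 1.0 - y_true * scores))

y = np.array([1, -1, 1, -1])          # labels in {-1, +1}
scores = np.array([0.8, -0.5, -0.2, 0.3])  # raw classifier scores
preds = np.sign(scores)

print(zero_one_loss(y, preds))  # 0.5 (two of four predictions wrong)
print(hinge_loss(y, scores))    # 0.8 (penalizes small/negative margins)
```

Note that the hinge loss still penalizes the correctly classified examples whose margins fall short of 1, which is what makes it a smooth-enough stand-in for the raw error count.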
Triplet loss and contrastive loss are metric-learning objectives that teach a model to map similar items close together and dissimilar items far apart in an embedding space.
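A sketch of the triplet loss on a single (anchor, positive, negative) triple, using squared Euclidean distance and a margin; the variable names and toy embeddings are assumptions for illustration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Encourages the anchor-positive distance to be smaller than the
    # anchor-negative distance by at least `margin`; zero loss once
    # the triple is sufficiently well separated.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])  # anchor embedding
p = np.array([0.1, 0.0])  # similar item, near the anchor
n = np.array([2.0, 0.0])  # dissimilar item, far from the anchor

print(triplet_loss(a, p, n))  # 0.0: negative is already margin-far beyond positive
```

Contrastive loss differs in that it operates on pairs rather than triples, pulling positive pairs together and pushing negative pairs apart beyond a margin, but the embedding-space geometry it induces is the same in spirit.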