Mixed precision training stores activations and computes forward/backward passes in low precision (FP16/BF16) for speed and memory savings, while keeping a master copy of the weights in FP32 so that small gradient updates are not lost to rounding.
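A minimal numpy sketch of why the FP32 master copy matters: near a weight value of 1.0, FP16 has a resolution of about 5e-4, so an update of 1e-4 rounds away entirely if applied in FP16, but survives when applied to the FP32 master weight. (The variable names here are illustrative, not from any particular framework.)

```python
import numpy as np

# FP32 master weight; the compute copy would be its FP16 cast.
master_w = np.float32(1.0)
lr = np.float32(1e-4)
grad = np.float32(1.0)

# Naive all-FP16 update: the increment lr*grad = 1e-4 is below FP16
# resolution near 1.0, so the weight does not move at all.
w16 = np.float16(master_w)
w16_updated = np.float16(w16 - np.float16(lr * grad))

# Mixed precision: apply the same update to the FP32 master copy,
# where the increment is comfortably representable.
master_w = master_w - lr * grad
```

After this runs, `w16_updated` is still exactly 1.0 in FP16, while `master_w` has correctly decreased, which is why optimizers keep the FP32 copy and only cast down for compute.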
Data parallelism splits each training batch across workers that each hold a replica of the model, compute gradients on their shard in parallel, and then average (all-reduce) the gradients before applying the same update everywhere.
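A small numpy sketch of the key identity behind data parallelism: with equal-size shards and identical weights, the average of the per-worker gradients equals the gradient on the full batch, so the parallel update matches single-worker training. The linear-regression setup and helper `grad` are illustrative assumptions.

```python
import numpy as np

def grad(w, X, y):
    # Gradient of the mean squared error 0.5 * mean((X @ w - y)**2) w.r.t. w.
    return X.T @ (X @ w - y) / len(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)

# Shard the batch across 2 "workers"; each computes a local gradient
# on its shard using the same (replicated) weights.
shards = np.split(np.arange(8), 2)
local_grads = [grad(w, X[idx], y[idx]) for idx in shards]

# All-reduce step: average the local gradients.
avg_grad = np.mean(local_grads, axis=0)

# The averaged gradient matches the full-batch gradient exactly.
full_grad = grad(w, X, y)
```

In a real system the averaging is an all-reduce over the network, but the arithmetic is the same: each worker ends up applying `avg_grad`, keeping all replicas in sync.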