How I Study AI - Learn AI Papers & Lectures the Easy Way

🎬 AI Lectures (15)

πŸ“šAllπŸ“LLM🎯PromptsπŸ”RAG🀝Agents🧠Deep LearningπŸ’¬NLPπŸ€–MLπŸ“–Basics
Difficulty:
AllBeginnerIntermediateAdvanced
Stanford CS336 Language Modeling from Scratch | Spring 2025 | Lecture 13: Data 1
LLM

Intermediate
Stanford Online

This class explains why data is the most important part of building language models. You learn where text data comes from (books, the web, and human feedback) and the strengths and weaknesses of each source. The instructor stresses that in real projects most of your time goes into finding, collecting, cleaning, and filtering data, not into model code. A small filtering sketch follows the tags below.

#common crawl #c4 #mc4
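To make the cleaning-and-filtering point concrete, here is a minimal sketch of heuristic line filtering, loosely in the spirit of C4-style cleaning of web-crawled text. The thresholds, phrase list, and function names are illustrative assumptions, not the lecture's exact pipeline.

```python
from typing import Optional

# Illustrative thresholds and phrases (assumptions, not the lecture's rules).
MIN_WORDS_PER_LINE = 3
BOILERPLATE_PHRASES = {"lorem ipsum", "javascript must be enabled", "click here"}

def keep_line(line: str) -> bool:
    """Keep a line only if it looks like natural sentence text."""
    if len(line.split()) < MIN_WORDS_PER_LINE:
        return False
    if not line.rstrip().endswith((".", "!", "?", '"')):
        return False  # drop lines without terminal punctuation (menus, buttons)
    lowered = line.lower()
    return not any(phrase in lowered for phrase in BOILERPLATE_PHRASES)

def clean_document(text: str, min_lines: int = 2) -> Optional[str]:
    """Filter a raw crawled document; return None if too little text survives."""
    kept = [ln for ln in text.splitlines() if keep_line(ln)]
    return "\n".join(kept) if len(kept) >= min_lines else None

raw = ("Click here to log in\n"
       "The quick brown fox jumps over the lazy dog.\n"
       "Filtering removes navigation text but keeps real sentences.\n")
print(clean_document(raw))  # the login line is dropped, the sentences survive
```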
Stanford CS336 Language Modeling from Scratch | Spring 2025 | Lecture 8: Parallelism 2
Deep Learning

Intermediate
Stanford Online

This session explains how to speed up and scale training when one GPU or a simple setup is not enough. It reviews data parallelism (splitting the data across devices) and pipeline parallelism (splitting the model's layers across devices), then dives into practical fixes for their main bottlenecks: gradient accumulation, virtual batch sizes, and interleaved pipeline stages. You'll learn the trade-offs between memory use, communication overhead, and idle time. A gradient-accumulation sketch follows the tags below.

#data parallelism #pipeline parallelism #model parallelism
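As a concrete illustration of one of those tools, here is a minimal gradient-accumulation sketch in PyTorch; the model, batch sizes, and data are made up for the example. Several small micro-batches are backpropagated before a single optimizer step, so the effective "virtual" batch is larger than what fits in memory at once.

```python
import torch
from torch import nn

# Toy model and optimizer (illustrative sizes, not the lecture's setup).
model = nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

ACCUM_STEPS = 4  # virtual batch = 4 micro-batches of 8 examples each
micro_batches = [(torch.randn(8, 128), torch.randint(0, 10, (8,)))
                 for _ in range(ACCUM_STEPS)]

optimizer.zero_grad()
for x, y in micro_batches:
    loss = loss_fn(model(x), y) / ACCUM_STEPS  # scale so gradients average
    loss.backward()                            # gradients accumulate in .grad
optimizer.step()                               # one update per virtual batch
```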
Stanford CS336 Language Modeling from Scratch | Spring 2025 | Lecture 14: Data 2
LLM

Intermediate
Stanford Online

The lecture explains why rare words are a core challenge in language modeling. Most corpora follow Zipf's law: a few words appear very often, while a vast number appear only rarely. Rare words make probability estimates unreliable and inflate the vocabulary, which increases memory use and slows both training and inference. A small counting sketch follows the tags below.

#zipf's law #rare words #unknown token
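A tiny sketch of the rare-word problem the lecture describes; the toy corpus and the frequency cutoff are illustrative assumptions. Counting word frequencies shows the Zipf-like head/tail split, and words below the cutoff are mapped to an unknown token so the vocabulary stays bounded.

```python
from collections import Counter

# Toy corpus: a few words repeat often, most appear once (illustrative only).
corpus = "the cat sat on the mat the cat saw a rare aardvark".split()
counts = Counter(corpus)

MIN_COUNT = 2  # illustrative cutoff for keeping a word in the vocabulary
vocab = {word for word, count in counts.items() if count >= MIN_COUNT}

# Words below the cutoff collapse into a single <unk> token.
tokens = [word if word in vocab else "<unk>" for word in corpus]

print(sorted(counts.items(), key=lambda kv: -kv[1]))  # frequent head, long rare tail
print(tokens)
```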