How I Study AI - Learn AI Papers & Lectures the Easy Way

🎬 AI Lectures (40)

Stanford CS336 Language Modeling from Scratch | Spring 2025 | Lecture 2: PyTorch, Resource Accounting (Deep Learning)

Beginner
Stanford Online

This session teaches two essentials for building language models: PyTorch basics and resource accounting. PyTorch is a library for working with tensors (multi‑dimensional arrays) and can run on CPU or GPU. You learn how to create tensors, perform math (including matrix multiplies), reshape, index/slice, and use automatic differentiation to compute gradients for training.

#pytorch #tensor #autograd
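The basics described above can be sketched in a few lines of PyTorch; the tensor values here are illustrative, not from the lecture:

```python
import torch

# Create tensors and do a matrix multiply
a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.ones(2, 2)
c = a @ b  # matrix multiplication

# Reshape and index/slice
flat = c.reshape(4)   # view the 2x2 result as a length-4 vector
first_row = c[0, :]   # slicing picks out the first row

# Automatic differentiation: the gradient of sum(x**2) is 2*x
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
loss = (x ** 2).sum()
loss.backward()
print(x.grad)  # tensor([2., 4., 6.])
```

The same `backward()` call is what a training loop uses to get gradients for every parameter at once.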
Stanford CS336 Language Modeling from Scratch | Spring 2025 | Lecture 3: Architectures, Hyperparameters (LLM)

Beginner
Stanford Online

Language modeling means predicting the next token (a token is a small piece of text like a word or subword) given all tokens before it. If you can estimate this next-token probability well, you can generate text by sampling one token at a time and appending it to the history. This step-by-step sampling turns probabilities into full sentences or paragraphs. Good models assign high probability to likely tokens and low probability to unlikely ones.

#language modeling #next-token prediction #embedding
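The sampling loop above can be sketched with a toy model; the five-token vocabulary and the probability table are invented for illustration, not from any trained network:

```python
import random

# Toy next-token distribution conditioned on the previous token.
# These probabilities are made up purely to demonstrate the loop.
probs = {
    "the": {"cat": 0.6, "dog": 0.3, "the": 0.1},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"<eos>": 1.0},
    "ran": {"<eos>": 1.0},
}

def sample_next(history):
    """Sample one token given the history (here only the last token matters)."""
    dist = probs[history[-1]]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

# Generate by sampling one token at a time and appending it
random.seed(0)
tokens = ["the"]
while tokens[-1] != "<eos>":
    tokens.append(sample_next(tokens))
print(" ".join(tokens))
```

A real language model replaces the lookup table with a neural network, but the generation loop is the same.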
Stanford CS336 Language Modeling from Scratch | Spring 2025 | Lecture 1: Overview and Tokenization (NLP)

Beginner
Stanford Online

This session introduces a brand-new course on building language models from scratch. You learn what language modeling is, where it’s used (speech recognition, translation, text generation, classification), and how different modeling families work. The class emphasizes implementing models yourself in Python and PyTorch, plus how to train and evaluate them.

#language modeling #tokenization #n-gram
Stanford CS336 Language Modeling from Scratch | Spring 2025 | Lecture 5: GPUs (Deep Learning)

Beginner
Stanford Online

GPUs (Graphics Processing Units) are critical for deep learning because they run thousands of simple math operations at the same time. Language models like Transformers rely on huge numbers of matrix multiplications, which are perfect for parallel processing. CPUs have a few strong cores for complex, step-by-step tasks, while GPUs have many simpler cores for doing lots of math in parallel. Using GPUs correctly can make training and inference dramatically faster.

#gpu #cuda #pytorch
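A minimal sketch of the CPU/GPU point, assuming PyTorch is installed; it times one large matrix multiply on whichever device is available:

```python
import time

import torch

# Pick the GPU if one is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# A large matrix multiply: exactly the kind of operation GPUs parallelize well
n = 512  # kept small enough to run anywhere
a = torch.randn(n, n, device=device)
b = torch.randn(n, n, device=device)

start = time.perf_counter()
c = a @ b
if device == "cuda":
    torch.cuda.synchronize()  # GPU kernels launch asynchronously, so wait before timing
elapsed = time.perf_counter() - start
print(f"{n}x{n} matmul on {device}: {elapsed:.4f}s")
```

Growing `n` makes the GPU's advantage obvious: the work grows roughly as n³, and the GPU spreads it over thousands of cores.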
Chapter 1: Vectors, what even are they? | Essence of Linear Algebra (Basics)

Beginner
3Blue1Brown Korean

This lesson explains what a vector really is from three connected views: a directed arrow in space, a coordinate like (4, 3), and a list of numbers like [4, 3]. Thinking of vectors as arrows makes direction and length feel natural, while coordinates make calculation easy. All three are the same object described in different ways. You can move an arrow anywhere without changing the vector, as long as its direction and length stay the same.

#vector #arrow representation #components
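The three views can be checked in a few lines; the specific tail and head points are chosen here just to show that moving the arrow does not change the vector:

```python
import math

# The vector from the lesson as a list of components
v = [4, 3]

# Its length (magnitude) follows the Pythagorean theorem
length = math.hypot(v[0], v[1])
print(length)  # 5.0

# Moving the arrow does not change the vector: an arrow from (1, 1)
# to (5, 4) has the same components, so it is the same vector.
tail, head = (1, 1), (5, 4)
moved = [head[0] - tail[0], head[1] - tail[1]]
print(moved == v)  # True
```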
Chapter 12: A geometric interpretation of Cramer's rule | Essence of Linear Algebra (Basics)

Beginner
3Blue1Brown Korean

This lesson explains Cramer's rule using geometry. A linear system like 2x + 5y = 7 and βˆ’3x + 4y = βˆ’1 can be written as Ax = v, where A's columns are two vectors a1 and a2. The solution (x, y) says how much of a1 and how much of a2 to combine so the result lands exactly on v.

#cramer's rule #determinant #parallelogram area
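Cramer's rule can be applied directly to the system from the summary; the determinant ratios below are a sketch of the rule, cross-checked against the direct solve:

```python
import numpy as np

# The system from the text: 2x + 5y = 7 and -3x + 4y = -1
A = np.array([[2.0, 5.0], [-3.0, 4.0]])
v = np.array([7.0, -1.0])

det_A = np.linalg.det(A)  # 2*4 - 5*(-3) = 23

# Cramer's rule: replace one column of A by v, then take the determinant ratio
A_x = A.copy(); A_x[:, 0] = v   # v replaces the first column
A_y = A.copy(); A_y[:, 1] = v   # v replaces the second column
x = np.linalg.det(A_x) / det_A  # 33/23
y = np.linalg.det(A_y) / det_A  # 19/23

# Cross-check: (x, y) really lands on v
assert np.allclose(A @ np.array([x, y]), v)
print(x, y)
```

Geometrically, each ratio compares the area of a parallelogram built with v against the area built with the original columns.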
Chapter 6: The determinant | Essence of Linear Algebra (Basics)

Beginner
3Blue1Brown Korean

The determinant is a single number attached to every square matrix that tells how a linear transformation scales area in 2D or volume in 3D. Its absolute value is the scale factor, and its sign tells whether the transformation keeps orientation or flips it. Think of dropping a 1-by-1 square (or 1-by-1-by-1 cube) into the transformation and measuring what size it becomes.

#determinant #linear transformation #area scaling
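The "drop in a unit square" picture can be sketched with NumPy; the matrix below is chosen for illustration, a stretch plus a shear:

```python
import numpy as np

# A linear map that stretches x by 3, y by 2, and adds a shear
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Corners of the 1-by-1 square as column vectors
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]], dtype=float)
image = A @ square  # the square becomes a parallelogram

# det(A) = 3*2 - 1*0 = 6: every area is scaled by a factor of 6
print(np.linalg.det(A))
```

A negative determinant would mean the same area scaling plus an orientation flip.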
Chapter 5: Three-dimensional linear transformations | Essence of Linear Algebra (Basics)

Beginner
3Blue1Brown Korean

Three-dimensional linear transformations change the whole 3D space while keeping the origin fixed and all grid lines straight and parallel. This is just like in 2D, but now there are three axes: x, y, and z. These transformations stretch, rotate, shear (slant), or reflect the space without bending or curving it.

#3d linear transformation #basis vectors #i-hat j-hat k-hat
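One concrete 3D transformation, sketched in NumPy: a rotation about the z-axis, where each column of the matrix records where a basis vector lands:

```python
import numpy as np

# A 90-degree rotation about the z-axis: the columns are where
# i-hat, j-hat, and k-hat end up after the transformation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

v = np.array([1.0, 0.0, 0.0])  # i-hat
print(R @ v)  # i-hat rotates onto j-hat

# A rotation preserves volume, so its determinant is 1
print(np.linalg.det(R))
```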
Chapter 9: Dot products and duality | Essence of Linear Algebra (Basics)

Beginner
3Blue1Brown Korean

Dot product (also called inner product) takes two vectors and gives one number by multiplying matching coordinates and adding them. For example, with V=(3,2) and W=(4,-1), the dot product is 3*4 + 2*(-1) = 10. This single number is not just arithmetic; it measures how much V goes in the direction of W. If V points in the same general direction as W, the number is positive; if against it, negative; if the two are perpendicular, exactly zero.

#dot product #inner product #projection
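The worked example from the summary, plus the perpendicular case; the vector `perp` is chosen here just to make the zero-dot-product case concrete:

```python
import numpy as np

V = np.array([3.0, 2.0])
W = np.array([4.0, -1.0])

# Multiply matching coordinates and add: 3*4 + 2*(-1) = 10
print(np.dot(V, W))  # 10.0

# Perpendicular vectors dot to exactly zero
perp = np.array([1.0, 4.0])  # perpendicular to W, since 4*1 + (-1)*4 = 0
print(np.dot(W, perp))  # 0.0
```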
Chapter 8: Nonsquare matrices as transformations between dimensions | Essence of Linear Algebra (Basics)

Beginner
3Blue1Brown Korean

A matrix with different numbers of rows and columns models a transformation between spaces of different sizes. For example, a 3-by-2 matrix takes 2D vectors from the flat plane and turns them into 3D vectors in space. The columns of the matrix tell you exactly where the basic 2D directions (i-hat and j-hat) end up in 3D. Using this rule, any 2D input can be mapped by combining those columns.

#non-square matrix #rectangular matrix #linear transformation
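The 3-by-2 case from the summary, sketched with NumPy; the column values are chosen for illustration:

```python
import numpy as np

# A 3-by-2 matrix: its two columns are where i-hat and j-hat land in 3D
M = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 3.0]])  # example values, not from the lecture

v2d = np.array([4.0, 3.0])  # a 2D input vector
v3d = M @ v2d               # its image in 3D
# = 4 * (first column) + 3 * (second column) = [4, 3, 4*2 + 3*3] = [4, 3, 17]
print(v3d)
```

Every 2D input lands on the plane in 3D spanned by the two columns.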
Chapter 7: Inverse matrices, column space, and null space | Essence of Linear Algebra (Basics)

Beginner
3Blue1Brown Korean

Matrices represent linear transformations, which are rules that stretch, rotate, shear, or squash space while keeping straight lines straight and the origin fixed. When you multiply matrices, you are chaining these transformations: first do one change to space, then do the next. Some transformations lose information by collapsing dimensions, like flattening a whole plane onto a line, and those cannot be undone.

#inverse matrix #identity matrix #determinant
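The undoable-versus-collapsing contrast can be shown with two small matrices; both are chosen here just for illustration:

```python
import numpy as np

# An invertible transformation: nonzero determinant, so it can be undone
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = np.linalg.inv(A)
print(A_inv @ A)  # composing with the inverse recovers the identity

# A collapsing transformation: both columns lie on one line, so the
# whole plane is flattened onto that line and information is lost
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(B))  # zero determinant: no inverse exists
```

Calling `np.linalg.inv(B)` would raise a `LinAlgError`, which is the algebraic face of the geometric collapse.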
Why visual understanding of linear algebra matters first (Basics)

Beginner
3Blue1Brown Korean

This lesson builds an intuitive, picture-first understanding of eigenvalues and eigenvectors. Instead of starting with heavy equations, it treats a matrix as a machine that reshapes the whole 2D plane and then looks for special directions that do not turn. These special directions are eigenvectors, and the stretch or shrink amount along them is the eigenvalue. You will see why some vectors change both length and direction, while a few special ones only change length.

#eigenvalue #eigenvector #linear transformation
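The "special directions that only change length" idea can be verified numerically; the matrix below is an illustrative choice, not from the video:

```python
import numpy as np

# A transformation with two special stretch directions
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)  # the eigenvalues are 3 and 2 (the diagonal of a triangular matrix)

# Each eigenvector only changes length, never direction:
# A @ v equals eigenvalue * v
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```

Any vector off these two directions gets both stretched and turned, which is exactly what makes eigenvectors special.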