How I Study AI - Learn AI Papers & Lectures the Easy Way

🎬 AI Lectures (28)

Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 5 - LLM tuning
ML

Beginner
Stanford

Regularization is a method to prevent overfitting by adding a penalty for model complexity. Overfitting happens when a model memorizes training data, including noise, and performs poorly on new data. By discouraging overly complex patterns, regularization helps the model generalize better.
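The penalty described above can be sketched in a few lines of NumPy. This is a minimal illustration of a ridge (L2) penalty added to a mean-squared-error loss, with made-up toy data and a hypothetical `lam` strength parameter, not the lecture's own code:

```python
import numpy as np

def ridge_loss(w, X, y, lam=0.1):
    """Mean squared error plus an L2 (ridge) penalty on the weights.

    lam is the regularization strength: larger values penalize
    large weights (i.e. overly complex models) more heavily.
    """
    mse = np.mean((X @ w - y) ** 2)
    penalty = lam * np.sum(w ** 2)  # grows with weight magnitude
    return mse + penalty

# Toy data: the same fit error costs more when the weights are large.
X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([1.0, 2.0])
small_w = np.array([0.1, 0.1])
large_w = np.array([10.0, 10.0])
assert ridge_loss(small_w, X, y) < ridge_loss(large_w, X, y)
```

The only difference from plain least squares is the `penalty` term, which is what discourages the model from memorizing noise.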

#regularization #ridge-regression #lasso
Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 4 - LLM Training
Deep Learning

Beginner
Stanford

This lecture explains how we train neural networks by minimizing a loss function with optimization methods. It starts with gradient descent and stochastic gradient descent (SGD), showing how parameters are updated by stepping opposite to the gradient. Mini-batches make training faster and add helpful noise that can help the optimizer escape poor regions of the loss landscape, such as local minima.
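The update rule above can be shown on a toy problem. This is a minimal mini-batch SGD sketch for one-parameter linear regression, with an invented learning rate and batch size for illustration; it is not the lecture's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: learn w in y = w * x, with the true w being 3.
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0]

w = 0.0            # parameter to learn, initialized at zero
lr = 0.1           # learning rate (step size)
batch_size = 10    # mini-batch size

for epoch in range(50):
    idx = rng.permutation(len(X))              # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch, 0], y[batch]
        grad = np.mean(2 * (w * xb - yb) * xb)  # d/dw of MSE on the batch
        w -= lr * grad                          # step opposite the gradient
# w is now very close to the true value 3.0
```

Each mini-batch gives a slightly different (noisy) gradient estimate, which is the "helpful noise" the summary refers to.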

#gradient-descent #stochastic-gradient-descent #mini-batch
Stanford CS230 | Autumn 2025 | Lecture 8: Agents, Prompts, and RAG
Basics

Beginner
Stanford

This session sets up course logistics and introduces core machine learning ideas. You learn when and how class meets, where to find materials, how grading works, and why MATLAB is used. It also sets expectations: the course is challenging, homeworks are crucial, and live attendance is encouraged.

#machine-learning #supervised-learning #unsupervised-learning
Stanford CS230 | Autumn 2025 | Lecture 3: Full Cycle of a DL project
ML

Beginner
Stanford

This lecture introduces supervised learning for regression, where the goal is to predict a real number (like house price) from input features (like square footage, bedrooms, and location). You represent each example as a d-dimensional vector x with a target y. Linear regression models this relationship with a straight-line formula: f(x) = w^T x + b. The focus is on learning weights w and bias b that best map inputs to outputs.
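The formula f(x) = w^T x + b maps directly to code. This is a minimal sketch using a hypothetical 3-feature house example; the weight values are made up for illustration, not learned or taken from the lecture:

```python
import numpy as np

def predict(x, w, b):
    """Linear model f(x) = w^T x + b for a d-dimensional feature vector x."""
    return np.dot(w, x) + b

# Hypothetical features: [square footage, bedrooms, location score].
x = np.array([1500.0, 3.0, 8.0])
w = np.array([0.2, 10.0, 5.0])   # illustrative weights (not learned here)
b = 50.0                          # bias term
price = predict(x, w, b)          # 0.2*1500 + 10*3 + 5*8 + 50
```

Training is then the separate problem of choosing w and b to minimize the error between `predict(x, w, b)` and the true targets y.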

#linear-regression #supervised-learning #mean-squared-error
Stanford CS230 | Autumn 2025 | Lecture 4: Adversarial Robustness and Generative Models
Basics

Beginner
Stanford

The lecture explains why we use machine learning instead of writing step-by-step rules. Many real problems, like finding cats in photos, are too messy for hand-written rules because there are too many exceptions. With machine learning, we give the computer lots of examples and it discovers patterns on its own. This approach lets computers handle tasks we can’t easily explain in code.

#machine-learning #supervised-learning #unsupervised-learning
Stanford CS230 | Autumn 2025 | Lecture 2: Supervised, Self-Supervised, & Weakly Supervised Learning
ML

Beginner
Stanford

Decision trees are models that make predictions by asking a series of yes/no questions about features, like a flowchart. You start at a root question, follow branches based on answers, and end at a leaf that gives the prediction. This simple structure makes them easy to read and explain to anyone.
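The root-to-leaf flow described above can be written as plain conditionals. This is a tiny hand-built sketch with an invented animal-classification example, just to make the flowchart structure concrete:

```python
def classify(animal):
    """A tiny hand-written decision tree: each `if` is a yes/no
    question, and each `return` is a leaf holding the prediction."""
    if animal["has_feathers"]:       # root question
        return "bird"
    if animal["lives_in_water"]:     # next branch
        return "fish"
    return "mammal"                  # leaf when both answers are no

assert classify({"has_feathers": True, "lives_in_water": False}) == "bird"
assert classify({"has_feathers": False, "lives_in_water": True}) == "fish"
```

A learned tree has the same shape; the difference is that the questions are chosen automatically from data rather than written by hand.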

#decision-tree #root-node #leaf-node
Stanford CS230 | Autumn 2025 | Lecture 5: Deep Reinforcement Learning
ML

Beginner
Stanford

Logistic regression is a simple method for binary classification that outputs a probability between 0 and 1 for class 1. It takes a weighted sum of input features (w^T x + b) and passes it through the sigmoid function. The sigmoid is an S-shaped curve that squashes any real number into the [0,1] range.
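The two steps above (weighted sum, then sigmoid) are a few lines of NumPy. This is a minimal sketch with illustrative weights, not a trained model:

```python
import numpy as np

def sigmoid(z):
    """S-shaped curve that squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(x, w, b):
    """Probability of class 1: sigmoid of the weighted sum w^T x + b."""
    return sigmoid(np.dot(w, x) + b)

x = np.array([2.0, -1.0])
w = np.array([0.5, 1.0])   # illustrative weights (not learned here)
b = 0.0
p = predict_proba(x, w, b)  # w^T x + b = 0, and sigmoid(0) = 0.5
```

A score of exactly 0 lands on the decision boundary, which is why the probability comes out at 0.5; positive scores push the probability toward 1 and negative scores toward 0.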

#logistic-regression #binary-classification #sigmoid
Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 6 - LLM Reasoning
ML

Beginner
Stanford

Machine learning lets computers learn patterns from data instead of following hand-made rules. Instead of writing instructions like “pointy ears = cat,” we feed many labeled examples and let the computer discover what features matter. This makes ML flexible and powerful for messy, real-world problems where rules are hard to write. Arthur Samuel’s classic definition captures this: computers learn without being explicitly programmed.

#machine-learning #supervised-learning #unsupervised-learning
Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 7 - Agentic LLMs
ML

Beginner
Stanford

This lecture explains L1 regularization, also called LASSO, as a way to prevent overfitting by adding a penalty to the loss that depends on the absolute values of model weights. Overfitting means a model memorizes the training data but fails on new data. By penalizing large weights, L1 helps the model focus on the strongest, most useful features.
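The absolute-value penalty can be sketched directly. This is a minimal illustration of the L1 (LASSO) term with a made-up `lam` strength and toy weights, not code from the lecture:

```python
import numpy as np

def l1_penalty(w, lam=0.1):
    """LASSO penalty: lam times the sum of absolute weight values."""
    return lam * np.sum(np.abs(w))

def lasso_loss(w, X, y, lam=0.1):
    """Mean squared error plus the L1 penalty on the weights."""
    mse = np.mean((X @ w - y) ** 2)
    return mse + l1_penalty(w, lam)

# Zero weights contribute nothing to the penalty, so sparse
# solutions (few active features) are cheap under L1.
w = np.array([0.0, 2.0, -3.0])
assert np.isclose(l1_penalty(w), 0.1 * (2.0 + 3.0))
```

Compared with the squared (ridge) penalty, the absolute value tends to push weak weights exactly to zero, which is how L1 focuses the model on the strongest features.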

#l1-regularization #lasso #ridge-regression
Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 1 - Transformer
ML

Beginner
Stanford

Decision trees are simple, powerful models that make predictions by asking a sequence of yes/no questions about input features. They work for both classification (like spam vs. not spam) and regression (like house price prediction). The tree’s structure—what to split on and when—comes directly from the data, which is why we call them non-parametric.
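The tags mention Gini impurity, the usual criterion for choosing what to split on. This is a minimal sketch of the standard formula (1 minus the sum of squared class proportions) on invented spam/ham labels:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a node: the chance that two random draws
    from the node disagree. 0 means pure; higher means more mixed."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# A pure leaf has impurity 0; a 50/50 split has impurity 0.5.
assert gini(["spam", "spam", "spam"]) == 0.0
assert gini(["spam", "ham", "spam", "ham"]) == 0.5
```

Tree construction greedily picks, at each node, the question whose branches have the lowest (weighted) impurity; entropy plays the same role with a logarithmic formula.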

#decision-tree #gini-impurity #entropy
Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 9 - Recap & Current Trends
Deep Learning

Beginner
Stanford

The lecture explains what deep learning is and why it changed how we build intelligent systems. In the past, engineers wrote step-by-step rules (like detecting corners and lines) to identify objects in images. These hand-built rules often broke when lighting, angle, or season changed. Deep learning replaces these hand-crafted rules with models that learn directly from data.

#deep-learning #neural-networks #machine-learning
Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 3 - Transformers & Large Language Models
Basics

Beginner
Stanford

Artificial Intelligence (AI) is the science of making machines do tasks that would need intelligence if a person did them. Today’s AI mostly focuses on specific tasks like recognizing faces or recommending products, which is called narrow AI. A future goal is general AI, which would do any thinking task a human can, but it does not exist yet.

#artificial-intelligence #narrow-ai #general-ai