How I Study AI - Learn AI Papers & Lectures the Easy Way

🎬 AI Lectures (15)

Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 5 - LLM tuning

Beginner
Stanford

Regularization is a method to prevent overfitting by adding a penalty for model complexity. Overfitting happens when a model memorizes training data, including noise, and performs poorly on new data. By discouraging overly complex patterns, regularization helps the model generalize better.

#regularization · #ridge regression · #lasso
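The penalty idea described above can be sketched in a few lines of NumPy. The data, weights, and regularization strength below are made-up toy values (not from the lecture), shown only to make the loss-plus-penalty structure concrete:

```python
import numpy as np

# Hypothetical toy data: 3 examples, 2 features.
X = np.array([[1.0, 2.0], [2.0, 0.5], [3.0, 1.0]])
y = np.array([2.0, 1.5, 2.5])
w = np.array([0.5, 0.3])   # candidate weights
b = 0.1
lam = 0.01                 # regularization strength (a chosen hyperparameter)

pred = X @ w + b
mse = np.mean((pred - y) ** 2)        # data-fit term
ridge_penalty = lam * np.sum(w ** 2)  # L2 (ridge) penalty discourages large weights
loss = mse + ridge_penalty            # total training objective
```

Larger `lam` pushes the optimizer toward smaller weights, trading a little training accuracy for better generalization.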

Stanford CS230 | Autumn 2025 | Lecture 3: Full Cycle of a DL project

Beginner
Stanford

This lecture introduces supervised learning for regression, where the goal is to predict a real number (like house price) from input features (like square footage, bedrooms, and location). You represent each example as a d-dimensional vector x with a target y. Linear regression models this relationship with a straight-line formula: f(x) = w^T x + b. The focus is on learning weights w and bias b that best map inputs to outputs.

#linear regression · #supervised learning · #mean squared error
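A minimal sketch of the model described above, using hypothetical (not fitted) weights to show how f(x) = w^T x + b turns feature vectors into predictions and a mean-squared-error score:

```python
import numpy as np

# Each row is a house: [square footage in thousands, bedrooms]; target is price.
X = np.array([[1.0, 2], [1.5, 3], [2.0, 3]])
y = np.array([200.0, 270.0, 320.0])  # hypothetical prices (thousands)

w = np.array([120.0, 10.0])  # assumed weights, not learned here
b = 50.0                     # bias

def f(x, w, b):
    """Linear model f(x) = w^T x + b."""
    return x @ w + b

preds = f(X, w, b)
mse = np.mean((preds - y) ** 2)  # the quantity training would minimize
```

Training would adjust `w` and `b` to drive `mse` down over the training set.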

Stanford CS230 | Autumn 2025 | Lecture 2: Supervised, Self-Supervised, & Weakly Supervised Learning

Beginner
Stanford

Decision trees are models that make predictions by asking a series of yes/no questions about features, like a flowchart. You start at a root question, follow branches based on answers, and end at a leaf that gives the prediction. This simple structure makes them easy to read and explain to anyone.

#decision tree · #root node · #leaf node
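One way to picture the flowchart idea is as nested yes/no questions in code. The weather example below is invented for illustration; each `return` is a leaf:

```python
def predict_activity(raining: bool, temp_c: float) -> str:
    # Root question: is it raining?
    if raining:
        return "stay in"          # leaf
    # Branch for "no rain": ask about temperature.
    if temp_c >= 15:
        return "play outside"     # leaf
    return "wear a jacket"        # leaf
```

A learned tree has the same shape; the difference is that the questions and thresholds are chosen automatically from data.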

Stanford CS230 | Autumn 2025 | Lecture 5: Deep Reinforcement Learning

Beginner
Stanford

Logistic regression is a simple method for binary classification that outputs a probability between 0 and 1 for class 1. It takes a weighted sum of input features (w^T x + b) and passes it through the sigmoid function. The sigmoid is an S-shaped curve that squashes any real number into the [0,1] range.

#logistic regression · #binary classification · #sigmoid
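A minimal sketch of the two steps described above (weighted sum, then sigmoid); the feature and weight values used below are arbitrary:

```python
import math

def sigmoid(z: float) -> float:
    """S-shaped curve squashing any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(x, w, b):
    """P(class 1) = sigmoid(w^T x + b)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

p = predict_proba([1.0, 2.0], [0.3, -0.1], 0.0)  # z = 0.1 -> p slightly above 0.5
```

A score of `z = 0` maps to probability 0.5, the natural decision boundary between the two classes.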

Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 6 - LLM Reasoning

Beginner
Stanford

Machine learning lets computers learn patterns from data rather than follow hand-written rules. Instead of coding instructions like “pointy ears = cat,” we feed in many labeled examples and let the computer discover which features matter. This makes ML flexible and powerful for messy, real-world problems where rules are hard to write. Arthur Samuel’s classic definition captures this: computers learn without being explicitly programmed.

#machine learning · #supervised learning · #unsupervised learning

Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 7 - Agentic LLMs

Beginner
Stanford

This lecture explains L1 regularization, also called LASSO, as a way to prevent overfitting by adding a penalty to the loss that depends on the absolute values of model weights. Overfitting means a model memorizes the training data but fails on new data. By penalizing large weights, L1 helps the model focus on the strongest, most useful features.

#l1 regularization · #lasso · #ridge regression
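A toy sketch of the L1 penalty with made-up weights. The soft-thresholding step shown afterwards is one standard way L1's sparsity effect is realized in optimizers (an assumption for illustration, not something this summary derives):

```python
import numpy as np

w = np.array([2.0, -0.5, 0.0, 0.1])
lam = 0.1  # regularization strength (hypothetical)

# LASSO penalty: sum of absolute weights.
l1_penalty = lam * np.sum(np.abs(w))

# Soft-thresholding (the proximal step for L1): shrink every weight
# toward zero by lam, and set weights smaller than lam exactly to zero.
shrunk = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)
```

Unlike the squared ridge penalty, L1 keeps constant pressure on small weights, which is why the weakest features get zeroed out entirely.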

Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 1 - Transformer

Beginner
Stanford

Decision trees are simple, powerful models that make predictions by asking a sequence of yes/no questions about input features. They work for both classification (like spam vs. not spam) and regression (like house price prediction). The tree’s structure—what to split on and when—comes directly from the data, which is why we call them non-parametric.

#decision tree · #gini impurity · #entropy
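The Gini impurity named in the tags, which split selection tries to reduce, can be sketched with a generic helper (this is a standard formulation, not code from the lecture):

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum(p_k^2). 0 means the node is pure."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

pure = gini(["spam", "spam", "spam", "spam"])      # 0.0: all one class
mixed = gini(["spam", "ham", "spam", "ham"])       # 0.5: maximally mixed (2 classes)
```

A split is chosen because the weighted impurity of its children is lower than that of the parent node.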

Stanford CS230 | Autumn 2025 | Lecture 10: What’s Going On Inside My Model?

Beginner
Stanford

This lecture teaches linear regression as a simple but powerful way to predict a number (output) from input features. The model assumes a straight-line relationship between inputs and output, written as y = W^T x (a bias term can be absorbed into W by appending a constant 1 feature to x). Each weight in W shows how much a feature pushes the prediction up or down.

#linear regression · #normal equation · #residual sum of squares
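The normal equation mentioned in the tags, W = (XᵀX)⁻¹Xᵀy, minimizes the residual sum of squares in closed form. A sketch on a hypothetical noise-free dataset, so the true weights are recovered exactly:

```python
import numpy as np

# Toy data generated from y = 2*x1 + 3*x2 (no noise, bias omitted).
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = X @ np.array([2.0, 3.0])

# Normal equation: solve (X^T X) W = X^T y rather than forming the inverse.
W = np.linalg.solve(X.T @ X, X.T @ y)
rss = np.sum((X @ W - y) ** 2)  # residual sum of squares at the optimum
```

Using `solve` instead of an explicit matrix inverse is the numerically preferred way to evaluate the same formula.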

Stanford CS230 | Autumn 2025 | Lecture 9: Career Advice in AI

Beginner
Stanford

This lecture explains how to choose and fairly evaluate machine learning models so they work well on new, unseen data. The main goal is generalization, which means the model should not only do well on the training set but also on future examples. Overfitting (too complex) and underfitting (too simple) are the two big mistakes to avoid.

#model selection · #generalization · #overfitting
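The idea of judging models on held-out data rather than training data can be sketched with a toy polynomial-fitting experiment (data, split, and degrees are all chosen here for illustration):

```python
import numpy as np

# Hypothetical data: a noisy straight line y = 2x.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(0, 0.1, size=10)

x_tr, y_tr = x[:8], y[:8]   # training split
x_va, y_va = x[8:], y[8:]   # held-out split for fair evaluation

def mse(coefs, xs, ys):
    return np.mean((np.polyval(coefs, xs) - ys) ** 2)

simple = np.polyfit(x_tr, y_tr, 1)    # matches the true relationship
complex_ = np.polyfit(x_tr, y_tr, 7)  # enough capacity to memorize the noise

# The degree-7 fit wins on training error (it interpolates the noise),
# but only the held-out error estimates performance on future examples.
```

The overfit model's near-zero training error is exactly the trap the lecture warns about; the held-out split exposes it.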

Stanford CS329H: Machine Learning from Human Preferences | Autumn 2024 | Preference Models

Beginner
Stanford

Decision trees are models that turn feature values into decisions like Yes/No. You follow a path from the root question down through branches until you reach a leaf decision. They are great for problems where the target is discrete, like classifying outcomes. They also tolerate noisy data and can work even when some feature values are missing.

#decision tree · #entropy · #information gain
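The entropy and information-gain measures in the tags can be sketched generically (a standard formulation, not code from the course). Information gain is how much a question reduces uncertainty about the label:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy in bits; 0 for a pure set."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, left, right):
    """Entropy drop from splitting parent into left and right children."""
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

# A perfect split of a 50/50 node removes all uncertainty: gain = 1 bit.
gain = information_gain(["yes", "yes", "no", "no"], ["yes", "yes"], ["no", "no"])
```

Tree learners evaluate candidate questions and keep the one with the highest gain at each node.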

Stanford CS230 | Autumn 2025 | Lecture 6: AI Project Strategy

Beginner
Stanford

This lecture explains decision trees as simple, rule-based models for classification and regression. A decision tree splits data by asking yes/no questions about features until the remaining data in a group is mostly one label (pure). Each leaf makes a final prediction by majority vote for classification or by average value for regression. Trees are powerful, easy to understand, and highly interpretable.

#decision tree · #random forest · #gini impurity
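The two leaf rules described above (majority vote for classification, average for regression) can be sketched directly; the example inputs are made up:

```python
from collections import Counter

def leaf_predict_classification(labels):
    """Majority vote among the training labels that reached this leaf."""
    return Counter(labels).most_common(1)[0][0]

def leaf_predict_regression(values):
    """Average of the training targets that reached this leaf."""
    return sum(values) / len(values)

vote = leaf_predict_classification(["cat", "cat", "dog"])  # majority class
avg = leaf_predict_regression([200.0, 300.0, 250.0])       # mean target
```

Splitting stops when a group is pure enough (or too small), and these rules turn whatever data remains in the leaf into a prediction.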

Stanford CS329H: Machine Learning from Human Preferences | Autumn 2024 | Human-centered Design

Beginner
Stanford

Machine learning means building computer systems that learn patterns from data to make predictions or decisions. Instead of hand-coding rules like “if you see whiskers and triangle ears, it’s a cat,” we feed many labeled examples and let algorithms learn the rules. The goal is to generalize from past data to new, unseen cases in the future.

#machine learning · #learning from data · #prediction