How I Study AI - Learn AI Papers & Lectures the Easy Way

๐ŸŽฌAI Lectures (43)

๐Ÿ“šAll๐Ÿ“LLM๐ŸŽฏPrompts๐Ÿ”RAG๐ŸคAgents๐Ÿง Deep Learning๐Ÿ’ฌNLP๐Ÿค–ML๐Ÿ“–Basics
Difficulty:
AllBeginnerIntermediateAdvanced
Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 3 - Transformers & Large Language Models
Basics

Beginner
Stanford

Artificial Intelligence (AI) is the science of making machines do tasks that would need intelligence if a person did them. Todayโ€™s AI mostly focuses on specific tasks like recognizing faces or recommending products, which is called narrow AI. A future goal is general AI, which would do any thinking task a human can, but it does not exist yet.

#artificial intelligence, #narrow ai, #general ai
Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 2 - Transformer-Based Models & Tricks
ML

Intermediate
Stanford

Principal Component Analysis (PCA) is a method to turn high-dimensional data into a smaller set of numbers while keeping as much useful information as possible. The lecture explains three equivalent views of PCA: best low-dimensional representation, directions of maximum variance, and best reconstruction after projection. These views all lead to the same solution using eigenvectors and eigenvalues of a certain matrix built from the data.

#principal component analysis, #pca, #dimensionality reduction
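The maximum-variance and best-reconstruction views of PCA mentioned above can be sketched in a few lines of NumPy; the toy data, random seed, and choice of k below are illustrative, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 200 points in 3D with most variance along one axis (illustrative)
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.3])
Xc = X - X.mean(axis=0)                  # center the data

# Eigen-decomposition of the sample covariance matrix
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]        # sort descending by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2
Z = Xc @ eigvecs[:, :k]                  # low-dimensional representation
X_hat = Z @ eigvecs[:, :k].T             # reconstruction after projection

# Mean squared reconstruction error is governed by the discarded eigenvalue
err = np.mean(np.sum((Xc - X_hat) ** 2, axis=1))
```

All three views lead here to the same answer: the top-k eigenvectors of the covariance matrix, with the reconstruction error determined by the eigenvalues that were dropped.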
Stanford CS230 | Autumn 2025 | Lecture 1: Introduction to Deep Learning
Deep Learning

Beginner
Stanford Online

This lecture kicks off Stanford CS230 and explains what deep learning is: a kind of machine learning that uses multi-layer neural networks to learn complex patterns. Andrew Ng highlights its strength in understanding images, language, and speech by learning layered features like edges, textures, and objects. The message is that deep learningโ€™s power comes from big data, flexible architectures, and non-linear functions that let models represent complex relationships.

#deep learning, #neural network, #perceptron
Stanford CS230 | Autumn 2025 | Lecture 10: What’s Going On Inside My Model?
ML

Beginner
Stanford

This lecture teaches linear regression as a simple but powerful way to predict a number (output) from input features. The model assumes a straight-line relationship between inputs and output, written as y = W^T x. Each weight in W shows how much a feature pushes the prediction up or down.

#linear regression, #normal equation, #residual sum of squares
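The model y = W^T x and its closed-form fit via the normal equation can be sketched as follows; the synthetic data, noise level, and true weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data: y is a noisy linear function of two features (made up)
X = rng.normal(size=(100, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w + 0.01 * rng.normal(size=100)

# Normal equation: W = (X^T X)^{-1} X^T y minimizes the residual sum of squares
W = np.linalg.solve(X.T @ X, X.T @ y)

y_hat = X @ W                        # prediction y = W^T x for each row x
rss = np.sum((y - y_hat) ** 2)       # residual sum of squares
```

Each entry of the recovered W shows how much its feature pushes the prediction up or down, matching the reading of the weights described above.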
Stanford CS230 | Autumn 2025 | Lecture 9: Career Advice in AI
ML

Beginner
Stanford

This lecture explains how to choose and fairly evaluate machine learning models so they work well on new, unseen data. The main goal is generalization, which means the model should not only do well on the training set but also on future examples. Overfitting (too complex) and underfitting (too simple) are the two big mistakes to avoid.

#model selection, #generalization, #overfitting
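One common way to check generalization is a hold-out split: fit on a training portion and score on unseen validation points. The sketch below is illustrative, not the lecture's exact setup; the synthetic data and polynomial degrees stand in for "too simple" vs. "too complex" models:

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative 1-D data: a nonlinear signal plus noise
x = rng.uniform(-1, 1, size=40)
y = np.sin(2 * x) + 0.1 * rng.normal(size=40)

# Hold out part of the data: fit on the training split,
# score on validation points the model never saw
x_tr, y_tr = x[:30], y[:30]
x_va, y_va = x[30:], y[30:]

def val_error(degree):
    """Mean squared error on the validation split for a polynomial fit."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    return np.mean((np.polyval(coeffs, x_va) - y_va) ** 2)

# Compare an underfitting-prone, a moderate, and an overfitting-prone model
errors = {d: val_error(d) for d in (1, 3, 9)}
```

Choosing the degree with the lowest validation error, rather than the lowest training error, is the model-selection step the lecture describes.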
Stanford CS329H: Machine Learning from Human Preferences | Autumn 2024 | Preference Models
ML

Beginner
Stanford

Decision trees are models that turn feature values into decisions like Yes/No. You follow a path from the root question down through branches until you reach a leaf decision. They are great for problems where the target is discrete, like classifying outcomes. They also tolerate noisy data and can work even when some feature values are missing.

#decision tree, #entropy, #information gain
Stanford CS230 | Autumn 2025 | Lecture 6: AI Project Strategy
ML

Beginner
Stanford

This lecture explains decision trees as simple, rule-based models for classification and regression. A decision tree splits data by asking yes/no questions about features until the remaining data in a group is mostly one label (pure). Each leaf makes a final prediction by majority vote for classification or by average value for regression. Trees are powerful, easy to understand, and highly interpretable.

#decision tree, #random forest, #gini impurity
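The purity-driven splitting idea can be sketched as a single-feature split chosen by Gini impurity; the toy feature values and labels below are made up for illustration:

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(xs, ys):
    """Pick the threshold on one feature that minimizes the
    weighted Gini impurity of the two resulting groups."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Toy data where splitting at x <= 2 leaves both groups pure
xs = [1, 2, 3, 4]
ys = ["no", "no", "yes", "yes"]
threshold, impurity = best_split(xs, ys)
```

A full tree repeats this search recursively on each group until the leaves are (mostly) pure, then predicts by majority vote, as the summary describes.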
Stanford CS329H: Machine Learning from Human Preferences | Autumn 2024 | Ethics
Basics

Beginner
Stanford

The lecture explains regularization, a method to reduce overfitting by adding a penalty to the cost (loss) function that discourages overly complex models. Overfitting is when a model memorizes noise in the training data and fails to generalize. Regularization keeps model parameters (weights) from growing too large, which helps models generalize better to new data.

#regularization, #l1, #l2
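One concrete form of this penalty is L2 (ridge) regularization. The sketch below, with made-up data in a few-samples/many-features regime, shows how the added penalty term keeps the weights from growing large:

```python
import numpy as np

rng = np.random.default_rng(2)
# Few samples, many features: unpenalized least squares is overfitting-prone here
X = rng.normal(size=(10, 8))
y = X[:, 0] + 0.1 * rng.normal(size=10)

def ridge(X, y, lam):
    """L2-regularized least squares: the penalty lam * ||w||^2
    adds lam * I to the normal equations and shrinks the weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_plain = ridge(X, y, lam=0.0)    # no penalty
w_reg = ridge(X, y, lam=10.0)     # strong penalty keeps weights small
```

An L1 penalty (on the absolute values of the weights) has no such closed form but additionally pushes some weights exactly to zero.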
Stanford CS329H: Machine Learning from Human Preferences | Autumn 2024 | Human-centered Design
ML

Beginner
Stanford

Machine learning means building computer systems that learn patterns from data to make predictions or decisions. Instead of hand-coding rules like โ€œif you see whiskers and triangle ears, itโ€™s a cat,โ€ we feed many labeled examples and let algorithms learn the rules. The goal is to generalize from past data to new, unseen cases in the future.

#machine learning, #learning from data, #prediction
Stanford CS329H: Machine Learning from Human Preferences | Autumn 2024 | Voting
Basics

Beginner
Stanford

This lecture kicks off an Introduction to Machine Learning course by explaining how the class will run and what you will learn. The instructor, Byron Wallace, introduces the TAs (Max and Zohair), office hours, and where to find all materials (Piazza/Canvas). The course has weekly graded homework, a midterm, and a group final project with a proposal, report, and presentation. Lectures are mostly theory and recorded; hands-on coding happens in homework.

#supervised learning, #classification, #regression
Stanford CS329H: Machine Learning from Human Preferences | Autumn 2024 | Introduction
ML

Beginner
Stanford

This lecture introduces decision trees, a classic machine learning method that makes choices by asking a series of yes/no questions (or questions with a small set of possible answers). Each internal node is a test on a feature (like 'Is it Friday or Saturday?'), each branch is an outcome of that test, and each leaf is a predicted label. You classify by starting at the root, following the test outcomes, and stopping at a leaf.

#decision trees, #entropy, #information gain
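Choosing which question to ask at a node is typically done by information gain. This sketch reuses the 'Is it Friday or Saturday?' question with made-up labels; the data is illustrative, not from the lecture:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(ys, groups):
    """Entropy of the parent minus the weighted entropy of the child groups."""
    n = len(ys)
    return entropy(ys) - sum(len(g) / n * entropy(g) for g in groups)

# Evaluate 'Is it Friday or Saturday?' as a candidate root question
ys = ["go", "go", "stay", "stay"]   # made-up labels for four days
fri_sat = ["go", "go"]              # days where the answer is yes
other = ["stay", "stay"]            # days where the answer is no
gain = information_gain(ys, [fri_sat, other])
```

A question that splits the labels perfectly, as here, achieves the maximum possible gain (the parent's full entropy), which is why it would be chosen for the root.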
Stanford CS329H: Machine Learning from Human Preferences | Guest Lecture: Joseph Jay Williams
ML

Beginner
Stanford

Machine learning is about computers learning patterns from data instead of being told each rule by a programmer. Rather than hardcoding how to spot a cat in a photo, you show many labeled cat and non-cat images, and the computer learns what features matter. This approach shines when problems are too complex to describe with step-by-step rules.

#machine learning, #supervised learning, #unsupervised learning