How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (3)

Filtered by: #parameter-efficient fine-tuning

Effective Reasoning Chains Reduce Intrinsic Dimensionality

Beginner
Archiki Prasad, Mandar Joshi et al. · Feb 9 · arXiv

The paper asks a simple question: which kind of step-by-step reasoning helps small language models learn best, and why?

#intrinsic dimensionality · #chain-of-thought · #LoRA

Learning Rate Matters: Vanilla LoRA May Suffice for LLM Fine-tuning

Beginner
Yu-Ang Lee, Ching-Yun Ko et al. · Feb 4 · arXiv

When the learning rate is tuned carefully, plain LoRA fine-tuning performs about as well as the newer, more elaborate variants.

#LoRA · #parameter-efficient fine-tuning · #learning rate tuning
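The core idea behind LoRA, which this paper revisits, is to freeze the pretrained weight matrix and learn only a small low-rank update. A minimal sketch, assuming generic layer sizes and the standard alpha/r scaling (this is illustrative, not code from the paper):

```python
import numpy as np

# Minimal LoRA sketch: the base weight W is frozen; only the low-rank
# factors A and B are trained, giving r*(d_in + d_out) trainable
# parameters instead of d_in*d_out. Shapes here are hypothetical.
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4           # rank r is much smaller than d_in, d_out
alpha = 8                            # scaling hyperparameter

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection, small random init
B = np.zeros((d_out, r))             # trainable up-projection, zero init

def lora_forward(x):
    # Base path plus the scaled low-rank update (alpha / r is the usual scaling).
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Because B starts at zero, the adapted layer initially matches the base layer,
# so fine-tuning begins from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)
```

The zero initialization of B is what makes the adapter a no-op at the start of training, which is why the learning rate on A and B becomes the main knob the paper's comparison turns on.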

Evaluating Parameter Efficient Methods for RLVR

Intermediate
Qingyu Yin, Yulun Wu et al. · Dec 29 · arXiv

The paper asks which small, add-on training methods (PEFT) work best when language models are trained with verifiable yes/no rewards (RLVR).

#RLVR · #parameter-efficient fine-tuning · #LoRA