How I Study AI - Learn AI Papers & Lectures the Easy Way


"Pareto frontier" · 13 results · Keyword

NanoQuant: Efficient Sub-1-Bit Quantization of Large Language Models

Intermediate
Hyochan Chong, Dongkyu Kim et al. · Feb 6 · arXiv

NanoQuant is a new way to shrink large language models down to 1 bit, and even below 1 bit, per weight without retraining on huge datasets.

#post-training quantization · #sub-1-bit quantization · #binary LLMs
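NanoQuant's actual sub-1-bit codebook is not described in this blurb; as a point of reference, the classic 1-bit baseline such methods build on keeps only a sign per weight plus one scaling factor, as in this minimal sketch:

```python
import numpy as np

def binarize_weights(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Quantize a weight matrix to {-1, +1} with one per-tensor scale.

    The scale is the mean absolute value; the sign carries the 1-bit code.
    """
    scale = float(np.abs(w).mean())   # single scaling factor for the tensor
    codes = np.sign(w)                # 1 bit per weight: -1 or +1
    codes[codes == 0] = 1.0           # map exact zeros to +1
    return codes, scale

def dequantize(codes: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct an approximation of the original weights."""
    return codes * scale

w = np.array([[0.3, -0.7], [0.1, -0.2]])
codes, scale = binarize_weights(w)
w_hat = dequantize(codes, scale)
```

Sub-1-bit schemes push further by sharing codes across groups of weights, so the average storage drops below one bit per weight.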

Balancing Understanding and Generation in Discrete Diffusion Models

Intermediate
Yue Liu, Yuzhong Zhao et al. · Feb 1 · arXiv

This paper introduces XDLM, a single model that blends two popular diffusion styles (masked and uniform) so it both understands and generates text and images well.

#XDLM · #discrete diffusion · #stationary noise kernel

GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning

Intermediate
Lakshya A Agrawal, Shangyin Tan et al. · Jul 25 · arXiv

GEPA is a new way to improve AI prompts by letting the AI read its own work, reflect in plain language on what went wrong, and then rewrite its instructions.

#GEPA · #reflective prompt evolution · #Pareto frontier

ECO: Quantized Training without Full-Precision Master Weights

Intermediate
Mahdi Nikdan, Amir Zandieh et al. · Jan 29 · arXiv

Training big AI models uses lots of memory because most methods keep a separate full-precision copy of the weights, called master weights; ECO trains directly on quantized weights, using error feedback in place of that copy.

#quantized training · #master weights · #error feedback
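ECO's exact recipe isn't spelled out in this summary, but the error-feedback idea named in the tags can be sketched generically: carry the rounding residual from one step into the next, so only low-precision weights and a small error buffer persist. The grid quantizer and step sizes below are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

def quantize(w, step=0.1):
    """Round weights to a uniform grid with spacing `step`."""
    return np.round(w / step) * step

def ef_step(w_q, grad, err, lr=1.0, step=0.1):
    """One error-feedback update.

    Apply the gradient plus the residual carried over from the last
    quantization, re-quantize, and store the new residual. No
    full-precision master copy of the weights is ever kept.
    """
    target = w_q - lr * grad + err      # desired full-precision update
    w_q_new = quantize(target, step)    # snap back to the low-precision grid
    err_new = target - w_q_new          # remember what rounding threw away
    return w_q_new, err_new

# Two small steps: the first rounds to zero, but the buffered error
# lets the second step finally move the quantized weight.
w1, e1 = ef_step(0.0, 0.03, 0.0)
w2, e2 = ef_step(w1, 0.03, e1)
```

Without the error buffer, every update smaller than half a grid step would be silently discarded and the weight would never move.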

ThreadWeaver: Adaptive Threading for Efficient Parallel Reasoning in Language Models

Intermediate
Long Lian, Sida Wang et al. · Nov 24 · arXiv

ThreadWeaver teaches a language model to split big problems into smaller parts it can solve at the same time, like teammates working in parallel.

#adaptive parallel reasoning · #fork–join · #threaded inference
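The fork–join pattern named in the tags is standard in concurrent programming: fork independent sub-problems to workers, then join their partial answers. The `solve_subproblem` stand-in below is hypothetical; in ThreadWeaver's setting it would be a model call on one sub-problem:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_subproblem(part: str) -> str:
    """Hypothetical stand-in for a model call on one independent sub-problem."""
    return part.upper()

def fork_join(problem_parts):
    """Fork: launch each sub-problem in its own worker.
    Join: gather the partial results, in order, and merge them.
    """
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(solve_subproblem, problem_parts))
    return " + ".join(partials)       # merge step

result = fork_join(["step one", "step two", "step three"])
```

`ThreadPoolExecutor.map` preserves input order, so the join step can merge results deterministically even though the forks finish in any order.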

TreeGRPO: Tree-Advantage GRPO for Online RL Post-Training of Diffusion Models

Intermediate
Zheng Ding, Weirui Ye · Dec 9 · arXiv

TreeGRPO teaches image generators using a smart branching tree so each training run produces many useful learning signals instead of just one.

#TreeGRPO · #reinforcement learning · #diffusion models

PRISM: Pushing the Frontier of Deep Think via Process Reward Model-Guided Inference

Intermediate
Rituraj Sharma, Weiyuan Chen et al. · Mar 3 · arXiv

PRISM is a new way to help AI think through hard problems by checking each step, not just the final answer.

#DEEPTHINK · #Process Reward Model · #step-level verification

Scaling Embeddings Outperforms Scaling Experts in Language Models

Intermediate
Hong Liu, Jiaqi Zhang et al. · Jan 29 · arXiv

The paper shows that growing the embedding part of a language model (especially with n-grams) can beat adding more MoE experts once you pass a certain sparsity 'sweet spot.'

#N-gram Embedding · #Mixture-of-Experts (MoE) · #Embedding Scaling

Toward Efficient Agents: Memory, Tool learning, and Planning

Intermediate
Xiaofang Yang, Lijun Li et al. · Jan 20 · arXiv

This survey explains how to make AI agents not just smart, but also efficient with their time, memory, and tool use.

#agent efficiency · #memory compression · #tool learning

Quant VideoGen: Auto-Regressive Long Video Generation via 2-Bit KV-Cache Quantization

Intermediate
Haocheng Xi, Shuo Yang et al. · Feb 3 · arXiv

Auto-regressive video models generate videos one chunk at a time but run out of GPU memory because the KV-cache grows with history; Quant VideoGen compresses that cache to 2 bits so long generations fit in memory.

#Quant VideoGen (QVG) · #KV-cache quantization · #2-bit quantization
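The paper's per-channel details aren't given in this card; as a minimal illustration of what "2-bit KV-cache quantization" means, here is a per-tensor uniform quantizer with four levels (the toy KV block and layout are assumptions):

```python
import numpy as np

def quantize_2bit(x: np.ndarray):
    """Uniform 2-bit quantization: 4 levels spanning [min, max]."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 3.0 if hi > lo else 1.0   # 4 levels -> 3 steps
    codes = np.clip(np.round((x - lo) / scale), 0, 3).astype(np.uint8)
    return codes, lo, scale

def dequantize_2bit(codes, lo, scale):
    """Reconstruct approximate float values from the 2-bit codes."""
    return codes.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
kv = rng.standard_normal((4, 8)).astype(np.float32)   # toy KV block
codes, lo, scale = quantize_2bit(kv)
kv_hat = dequantize_2bit(codes, lo, scale)
```

Storing 2-bit codes plus a scale and offset cuts cache memory roughly 8x versus float16, at the cost of a reconstruction error of at most half a quantization step per entry.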

Generative Visual Code Mobile World Models

Intermediate
Woosung Koh, Sungjun Han et al. · Feb 2 · arXiv

This paper shows a new way to predict what a phone screen will look like after you tap or scroll: generate web code (like HTML/CSS/SVG) and then render it to pixels.

#mobile GUI · #world model · #vision-language model

Parallel-Probe: Towards Efficient Parallel Thinking via 2D Probing

Intermediate
Tong Zheng, Chengsong Huang et al. · Feb 3 · arXiv

Parallel-Probe is a simple add-on that lets many AI “thought paths” think at once but stop early when they already agree.

#parallel thinking · #2D probing · #consensus-based early stopping
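The consensus-based early stopping named in the tags can be sketched simply: tally the paths' answers as they arrive and stop once enough of them agree. The threshold and minimum-path values below are illustrative assumptions, not the paper's settings:

```python
from collections import Counter

def run_paths_with_early_stop(answers, threshold=0.75, min_paths=3):
    """Consume answers from parallel reasoning paths one at a time and
    stop as soon as a fraction `threshold` of the paths seen so far agree.

    `answers` is an iterable of final answers; in a real system each
    would come from one sampled reasoning path.
    """
    votes = Counter()
    seen = 0
    for answer in answers:
        votes[answer] += 1
        seen += 1
        top, count = votes.most_common(1)[0]
        if seen >= min_paths and count / seen >= threshold:
            return top, seen          # consensus reached: stop early
    top, _ = votes.most_common(1)[0]
    return top, seen                  # no early consensus: plain majority

answer, paths_used = run_paths_with_early_stop(["42", "42", "42", "41", "42"])
```

Here the first three paths already agree, so the remaining paths never need to finish; that saved compute is where the efficiency gain comes from.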

Accelerating Masked Image Generation by Learning Latent Controlled Dynamics

Intermediate
Kaiwen Zhu, Quansheng Zeng et al. · Feb 27 · arXiv

Masked Image Generation Models (MIGMs) make pictures by filling in many blank spots step by step, but each step is slow and repeats a lot of work; this paper learns a shortcut over the latent dynamics so generation needs fewer steps.

#masked image generation · #MIGM-Shortcut · #latent controlled dynamics