How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (791)


mHC: Manifold-Constrained Hyper-Connections

Intermediate
Zhenda Xie, Yixuan Wei et al. · Dec 31 · arXiv

The paper fixes a stability problem in Hyper-Connections (HC) by gently steering the network’s mixing matrix onto a safe shape (a manifold) where signals don’t blow up or vanish.

#Residual Connections #Hyper-Connections #Manifold Projection
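
As a loose illustration of the summary above (not mHC's actual construction), the sketch below keeps a hyper-connection-style mixing matrix on a "safe" set by projecting it to row-stochastic form, so mixing the residual streams cannot inflate or shrink their overall scale. The stream count, the softmax projection, and the shapes are assumptions for illustration.

```python
import numpy as np

def project_to_row_stochastic(M: np.ndarray) -> np.ndarray:
    """Illustrative 'manifold projection': map an unconstrained mixing matrix
    onto the set of row-stochastic matrices (rows sum to 1), so mixing the
    residual streams cannot blow up their scale.
    NOTE: the actual manifold used by mHC may differ."""
    expM = np.exp(M - M.max(axis=1, keepdims=True))  # softmax per row
    return expM / expM.sum(axis=1, keepdims=True)

def mix_streams(streams: np.ndarray, M_raw: np.ndarray) -> np.ndarray:
    """streams: (n_streams, d) hidden states; M_raw: learned (n_streams, n_streams)."""
    M = project_to_row_stochastic(M_raw)
    return M @ streams  # each output stream is a convex combination of inputs

rng = np.random.default_rng(0)
streams = rng.normal(size=(4, 8))       # 4 residual streams of width 8
M_raw = rng.normal(size=(4, 4)) * 5.0   # unconstrained mixing logits
mixed = mix_streams(streams, M_raw)
print(np.linalg.norm(streams), np.linalg.norm(mixed))  # scales stay comparable
```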

Let It Flow: Agentic Crafting on Rock and Roll, Building the ROME Model within an Open Agentic Learning Ecosystem

Intermediate
Weixun Wang, XiaoXiao Xu et al. · Dec 31 · arXiv

This paper builds an open, end-to-end ecosystem (ALE) that lets AI agents plan, act, and fix their own mistakes across many steps in real computer environments.

#agentic LLMs #reinforcement learning #IPA

Dream2Flow: Bridging Video Generation and Open-World Manipulation with 3D Object Flow

Intermediate
Karthik Dharmarajan, Wenlong Huang et al. · Dec 31 · arXiv

Dream2Flow lets a robot watch a short, AI-generated video of a task and then do that task in real life by following object motion in 3D.

#3D object flow #video generation for robotics #open-world manipulation

FlowBlending: Stage-Aware Multi-Model Sampling for Fast and High-Fidelity Video Generation

Intermediate
Jibin Song, Mingi Kwon et al. · Dec 31 · arXiv

FlowBlending is a simple way to speed up video diffusion models by smartly choosing when to use a big model and when a small one is enough.

#FlowBlending #stage-aware sampling #video diffusion
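
To make "big model early, small model late" concrete, here is a minimal stage-aware sampling loop. The 60% switch point, the Euler-style update, and the placeholder denoisers are assumptions, not FlowBlending's actual schedule or solver.

```python
import numpy as np

# Stand-ins for a large and a small video diffusion/flow model.
# In practice these would be an expensive and a cheap denoiser network.
def big_model(x, t):   return -x * (1.0 - t)   # placeholder velocity field
def small_model(x, t): return -x * (1.0 - t)   # placeholder velocity field

def stage_aware_sample(x, n_steps=50, switch_frac=0.6):
    """Illustrative stage-aware sampling: the large model handles the early,
    structure-defining fraction of steps; the cheap model handles the
    remaining refinement steps. The switch point is a guess."""
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        model = big_model if t < switch_frac else small_model
        x = x + dt * model(x, t)  # simple Euler step along the flow
    return x

x0 = np.random.default_rng(0).normal(size=(16, 16, 3))  # toy frame latent
sample = stage_aware_sample(x0)
```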

Nested Learning: The Illusion of Deep Learning Architectures

Intermediate
Ali Behrouz, Meisam Razaviyayn et al. · Dec 31 · arXiv

The paper introduces Nested Learning, a new way to build AI that learns in layers (like Russian dolls), so each part can update at its own speed and remember different things.

#Nested Learning #Associative Memory #In-Context Learning
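
A toy picture of "each part updates at its own speed": two parameter groups trained on the same data stream, one updated every step and one every k steps. The two-level split, the interval, and the learning rates are assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
fast_w = rng.normal(size=4)   # inner level: updated every step (fast memory)
slow_w = rng.normal(size=4)   # outer level: updated every k steps (slow memory)

def loss_grad(w_fast, w_slow, x, y):
    """Squared-error gradient for a linear model with additive fast/slow parts."""
    err = x @ (w_fast + w_slow) - y
    return err * x, err * x   # same gradient w.r.t. both additive parts

k, lr_fast, lr_slow = 8, 0.1, 0.01
for step in range(100):
    x, y = rng.normal(size=4), rng.normal()
    g_fast, g_slow = loss_grad(fast_w, slow_w, x, y)
    fast_w -= lr_fast * g_fast        # fast level: every step
    if step % k == 0:
        slow_w -= lr_slow * g_slow    # slow level: every k steps
```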

Youtu-LLM: Unlocking the Native Agentic Potential for Lightweight Large Language Models

Intermediate
Junru Lu, Jiarui Qin et al. · Dec 31 · arXiv

Youtu-LLM is a small (1.96B) language model that was trained from scratch to think, plan, and act like an agent instead of just copying bigger models.

#lightweight LLM #agentic mid-training #trajectory data

Dynamic Large Concept Models: Latent Reasoning in an Adaptive Semantic Space

Intermediate
Xingwei Qu, Shaowen Wang et al. · Dec 31 · arXiv

Language is lumpy: easy stretches and tricky jumps are mixed together, yet standard models spend the same compute on every token; Dynamic Large Concept Models instead reason in an adaptive semantic space, splitting text at semantic boundaries so effort can follow difficulty.

#Dynamic Large Concept Models #semantic boundaries #latent reasoning

Youtu-Agent: Scaling Agent Productivity with Automated Generation and Hybrid Policy Optimization

Intermediate
Yuchen Shi, Yuzheng Cai et al. · Dec 31 · arXiv

Youtu-Agent is a build-and-grow factory for AI agents that cuts manual setup and keeps agents improving over time.

#LLM agents #automated agent generation #modular architecture

PhyGDPO: Physics-Aware Groupwise Direct Preference Optimization for Physically Consistent Text-to-Video Generation

Intermediate
Yuanhao Cai, Kunpeng Li et al. · Dec 31 · arXiv

This paper teaches text-to-video models to follow real-world physics, so people, balls, water, glass, and fire act the way they should.

#text-to-video generation #physical consistency #direct preference optimization
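
A guess at the shape of the objective, based only on the title and summary: within a group of candidate videos for one prompt, every pair ordered by a physics-consistency score feeds a standard DPO-style preference term. The grouping rule, the physics score, and beta are assumptions, not PhyGDPO's published loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def groupwise_dpo_loss(logp_policy, logp_ref, physics_scores, beta=0.1):
    """Illustrative groupwise DPO: for candidates i, j of the same prompt,
    if i has a higher physics-consistency score than j, add the usual DPO
    term on the policy-vs-reference log-probability margins."""
    loss, n = 0.0, 0
    for i in range(len(physics_scores)):
        for j in range(len(physics_scores)):
            if physics_scores[i] > physics_scores[j]:   # i preferred over j
                margin = beta * ((logp_policy[i] - logp_ref[i])
                                 - (logp_policy[j] - logp_ref[j]))
                loss += -np.log(sigmoid(margin))
                n += 1
    return loss / max(n, 1)

# Toy group of 4 candidate videos for one prompt (hypothetical numbers).
lp_pol = np.array([-1.2, -0.9, -1.5, -1.1])
lp_ref = np.array([-1.0, -1.0, -1.0, -1.0])
phys = np.array([0.9, 0.4, 0.7, 0.2])   # hypothetical physics-consistency scores
print(groupwise_dpo_loss(lp_pol, lp_ref, phys))
```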

SenseNova-MARS: Empowering Multimodal Agentic Reasoning and Search via Reinforcement Learning

Intermediate
Yong Xien Chng, Tao Hu et al. · Dec 30 · arXiv

SenseNova-MARS is a vision-language model that can think step-by-step and use three tools—text search, image search, and image cropping—during its reasoning.

#multimodal agent #vision-language model #reinforcement learning
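
A skeletal agent loop using the three tools named above; the action format, tool signatures, and stub implementations are assumptions for illustration rather than SenseNova-MARS's actual interface.

```python
# Stub tools standing in for real text search, image search, and cropping.
def text_search(query):      return f"[text results for: {query}]"
def image_search(query):     return f"[image results for: {query}]"
def crop_image(image, box):  return f"[crop of {image} at {box}]"

TOOLS = {"text_search": text_search, "image_search": image_search, "crop": crop_image}

def run_agent(model_step, question, max_turns=5):
    """model_step(history) -> (tool_name, args) or ("answer", final_text).
    The model interleaves reasoning with tool calls until it answers."""
    history = [("question", question)]
    for _ in range(max_turns):
        action, args = model_step(history)
        if action == "answer":
            return args
        observation = TOOLS[action](*args)      # call the chosen tool
        history.append((action, observation))   # feed the result back in
    return "no answer within budget"

# Toy policy: search once, then answer.
def toy_policy(history):
    if len(history) == 1:
        return "text_search", ("landmark in the photo",)
    return "answer", "It is likely the Eiffel Tower."

print(run_agent(toy_policy, "Which landmark is shown in this photo?"))
```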

Figure It Out: Improve the Frontier of Reasoning with Executable Visual States

Intermediate
Meiqi Chen, Fandong Meng et al. · Dec 30 · arXiv

FIGR is a new way for AI to ‘think by drawing,’ using code to build clean, editable diagrams while it reasons.

#executable visual states #diagrammatic reasoning #reinforcement learning for reasoning

Taming Hallucinations: Boosting MLLMs' Video Understanding via Counterfactual Video Generation

Intermediate
Zhe Huang, Hao Wen et al. · Dec 30 · arXiv

Multimodal Large Language Models (MLLMs) often hallucinate on videos by trusting words and common sense more than what the frames really show; this paper tackles that with counterfactual video generation.

#multimodal large language model #video understanding #visual hallucination