How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (129)


STEP3-VL-10B Technical Report

Beginner
Ailin Huang, Chengyuan Yao et al. · Jan 14 · arXiv

STEP3-VL-10B is a small (10 billion parameters) open multimodal model that sees images and reads text, yet scores like much larger models.

#multimodal foundation model · #unified pre-training · #perception encoder

ProFit: Leveraging High-Value Signals in SFT via Probability-Guided Token Selection

Beginner
Tao Liu, Taiqiang Wu et al. · Jan 14 · arXiv

Traditional supervised fine-tuning (SFT) pushes a model to copy a single reference answer exactly, which can cause it to overfit to the wording rather than the underlying idea.

#ProFit · #Supervised Fine-Tuning · #Token Probability

Entropy Sentinel: Continuous LLM Accuracy Monitoring from Decoding Entropy Traces in STEM

Beginner
Pedro Memoli Buffa, Luciano Del Corro · Jan 13 · arXiv

The paper introduces Entropy Sentinel, a simple way to watch how accurate an AI is by reading its “uncertainty heartbeat” during generation.

#LLM monitoring · #entropy profile · #top-k probabilities

Ministral 3

Beginner
Alexander H. Liu, Kartik Khandelwal et al. · Jan 13 · arXiv

Ministral 3 is a new family of small-but-mighty AI language models (3B, 8B, 14B) that learn from a larger model using a step-by-step tutoring method called Cascade Distillation.

#Cascade Distillation · #Model pruning · #Logit distillation

Enhancing Sentiment Classification and Irony Detection in Large Language Models through Advanced Prompt Engineering Techniques

Beginner
Marvin Schmitt, Anne Schwerk et al. · Jan 13 · arXiv

Giving large language models a few good examples and step-by-step instructions can make them much better at spotting feelings in text.

#prompt engineering · #few-shot learning · #chain-of-thought

The Agent's First Day: Benchmarking Learning, Exploration, and Scheduling in the Workplace Scenarios

Beginner
Daocheng Fu, Jianbiao Mei et al. · Jan 13 · arXiv

The paper introduces Trainee-Bench, a new way to test AI agents that feels like a real first day at work, with tasks arriving over time, hidden clues, and changing priorities.

#Trainee-Bench · #dynamic task scheduling · #active exploration

The Confidence Dichotomy: Analyzing and Mitigating Miscalibration in Tool-Use Agents

Beginner
Weihao Xuan, Qingcheng Zeng et al. · Jan 12 · arXiv

This paper studies how tool-using AI agents express their confidence and finds a split: some tools make them overconfident, while others help them stay honest.

#LLM agents · #calibration · #overconfidence

RealMem: Benchmarking LLMs in Real-World Memory-Driven Interaction

Beginner
Haonan Bian, Zhiyuan Yao et al. · Jan 11 · arXiv

RealMem is a new benchmark that tests how well AI assistants remember and manage long, ongoing projects across many conversations.

#RealMem · #long-term memory · #project-oriented interactions

MemGovern: Enhancing Code Agents through Learning from Governed Human Experiences

Beginner
Qihao Wang, Ziming Cheng et al. · Jan 11 · arXiv

MemGovern teaches code agents to learn from past human fixes on GitHub by distilling messy issue discussions into clean, reusable "experience cards."

#MemGovern · #experience governance · #agentic search

ArenaRL: Scaling RL for Open-Ended Agents via Tournament-based Relative Ranking

Beginner
Qiang Zhang, Boli Chen et al. · Jan 10 · arXiv

ArenaRL teaches AI agents by comparing their answers against each other, like a sports tournament, instead of giving each answer a single noisy score.

#ArenaRL · #reinforcement learning · #relative ranking

Illusions of Confidence? Diagnosing LLM Truthfulness via Neighborhood Consistency

Beginner
Haoming Xu, Ningyuan Zhao et al. · Jan 9 · arXiv

LLMs can look confident but still change their answers when the surrounding text nudges them, showing that confidence alone isn’t real truthfulness.

#Neighbor-Consistency Belief · #belief robustness · #self-consistency

Goal Force: Teaching Video Models To Accomplish Physics-Conditioned Goals

Beginner
Nate Gillman, Yinghua Zhou et al. · Jan 9 · arXiv

With Goal Force, video models can be told what physical result you want (like "make this ball move left with a strong push"), instead of relying on vague text or a final picture.

#goal force · #force vector control · #visual planning