How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (6)


Sci-CoE: Co-evolving Scientific Reasoning LLMs via Geometric Consensus with Sparse Supervision

Intermediate
Xiaohan He, Shiyang Feng et al. · Feb 12 · arXiv

Sci-CoE is a two-stage training method that helps one language model learn to both solve science problems and check those solutions with very little labeled data.

#scientific reasoning · #co-evolution · #solver-verifier

NativeTok: Native Visual Tokenization for Improved Image Generation

Intermediate
Bin Wu, Mengqi Huang et al. · Jan 30 · arXiv

This paper fixes a hidden mismatch in image generation: tokenizers make tokens without order, but generators need an order to predict the next token well.

#visual tokenization · #autoregressive image generation · #causal dependencies

When Personalization Misleads: Understanding and Mitigating Hallucinations in Personalized LLMs

Intermediate
Zhongxiang Sun, Yi Zhan et al. · Jan 16 · arXiv

Personalized AI helpers can accidentally echo a user's past opinions instead of stating objective facts, a failure the authors call personalization-induced hallucination.

#personalized large language models · #hallucination · #factuality

A unified framework for detecting point and collective anomalies in operating system logs via collaborative transformers

Intermediate
Mohammad Nasirzadeh, Jafar Tahmoresnezhad et al. · Dec 29 · arXiv

CoLog is a new AI system that reads computer logs like a story and spots both single strange events (point anomalies) and strange patterns over time (collective anomalies).

#log anomaly detection · #multimodal learning · #collaborative transformer
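The point-vs-collective distinction in the CoLog blurb is easy to see in code. The sketch below is a generic z-score illustration of the two anomaly types on a series of log event counts — not CoLog's collaborative-transformer method; the thresholds and window size are arbitrary toy choices:

```python
import statistics

def point_anomalies(counts, z=2.0):
    """Flag single values far from the mean (point anomalies)."""
    mu = statistics.mean(counts)
    sd = statistics.stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - mu) > z * sd]

def collective_anomalies(counts, window=4, z=2.0):
    """Flag windows whose *average* drifts from the mean, even when
    no single value inside the window is extreme on its own."""
    mu = statistics.mean(counts)
    sd = statistics.stdev(counts)
    hits = []
    for i in range(len(counts) - window + 1):
        w = counts[i:i + window]
        # The mean of `window` values has standard error sd / sqrt(window).
        if abs(statistics.mean(w) - mu) > z * sd / (window ** 0.5):
            hits.append((i, i + window))
    return hits
```

A single spike like `[10, 11, 9, 10, 50, ...]` trips `point_anomalies`, while a sustained mild shift like `..., 13, 13, 13, 13, ...` trips only `collective_anomalies` — which is why a detector needs both views.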

Towards Scalable Pre-training of Visual Tokenizers for Generation

Intermediate
Jingfeng Yao, Yuda Song et al. · Dec 15 · arXiv

The paper tackles a paradox: visual tokenizers that get great pixel reconstructions often make worse images when used for generation.

#visual tokenizer · #latent space · #Vision Transformer

World Models That Know When They Don't Know: Controllable Video Generation with Calibrated Uncertainty

Intermediate
Zhiting Mei, Tenny Yin et al. · Dec 5 · arXiv

This paper teaches video-making AI models to say how sure they are about each tiny part of every frame they create.

#controllable video generation · #uncertainty quantification · #calibration
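"Calibrated" in this blurb means the model's stated confidence should match how often it is actually right. A standard generic way to measure that is expected calibration error (ECE) — this toy sketch is an illustration of the metric, not the paper's per-pixel procedure, and the bin count is an arbitrary choice:

```python
def expected_calibration_error(confidences, correct, n_bins=5):
    """Bin predictions by confidence, then compare each bin's average
    confidence to its observed accuracy; ECE is the weighted gap."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    ece, n = 0.0, len(confidences)
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece
```

A model that says "90% sure" and is right 9 times out of 10 scores an ECE near zero; one that says "90% sure" but is right only half the time scores about 0.4 — the kind of gap a calibrated world model is trained to close.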