๐ŸŽ“How I Study AIHISA
๐Ÿ“–Read
๐Ÿ“„Papers๐Ÿ“ฐBlogs๐ŸŽฌCourses
๐Ÿ’กLearn
๐Ÿ›ค๏ธPaths๐Ÿ“šTopics๐Ÿ’กConcepts๐ŸŽดShorts
๐ŸŽฏPractice
๐Ÿ“Daily Log๐ŸŽฏPrompts๐Ÿง Review
SearchSettings
How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (4)


Spectral Attention Steering for Prompt Highlighting

Beginner
Weixian Waylon Li, Yuchen Niu et al. · Mar 1 · arXiv

This paper introduces a new way to make a language model pay extra attention to the exact words you highlight in a prompt.

#attention steering #prompt highlighting #key embeddings
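One simple way to picture attention steering: add a bias to the attention logits of the highlighted token positions before the softmax, so the model weights those keys more heavily. This is a toy sketch only, not the paper's spectral method; all names and the bias mechanism here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def steered_attention(q, k, v, highlight_idx, bias=2.0):
    """Toy single-head attention that boosts highlighted key positions.

    Adds a constant `bias` to the attention logits of the highlighted
    tokens before the softmax, so every query attends to them more.
    (Illustration only; the paper steers attention spectrally.)
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)      # (n_queries, n_keys)
    scores[:, highlight_idx] += bias   # steer toward highlighted keys
    weights = softmax(scores, axis=-1)
    return weights @ v, weights
```

Because softmax is monotone in its logits, any positive bias strictly raises the attention weight on the highlighted positions for every query.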

CoPE: Clipped RoPE as A Scalable Free Lunch for Long Context LLMs

Beginner
Haoran Li, Sucheng Ren et al. · Feb 5 · arXiv

The paper introduces CoPE, a simple change to how models track word positions that makes long documents much easier for them to understand.

#CoPE #RoPE #Rotary Positional Embedding
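For intuition on what "clipping" RoPE could mean: standard RoPE rotates pairs of features by an angle that grows with position, and one clipped variant simply caps that angle so very distant positions stop rotating further. The cap-the-angle interpretation below is an assumption for illustration, not CoPE's exact rule.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0, max_angle=None):
    """Rotary position angles, optionally clipped.

    Standard RoPE: angle[p, i] = p / base**(2i/dim).
    Here `max_angle` caps each angle, so far-away positions saturate
    instead of rotating without bound (an assumed reading of CoPE's
    clipping, for illustration only).
    """
    inv_freq = 1.0 / base ** (np.arange(0, dim, 2) / dim)  # (dim/2,)
    angles = np.outer(positions, inv_freq)                 # (n, dim/2)
    if max_angle is not None:
        angles = np.clip(angles, -max_angle, max_angle)
    return angles

def apply_rope(x, angles):
    """Rotate consecutive (even, odd) feature pairs of x by `angles`."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

Like plain RoPE, the clipped rotation preserves each vector's norm; only the encoded relative position changes.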

OmniSIFT: Modality-Asymmetric Token Compression for Efficient Omni-modal Large Language Models

Intermediate
Yue Ding, Yiyan Ji et al. · Feb 4 · arXiv

OmniSIFT is a new way to shrink (compress) audio and video tokens so omni-modal language models can think faster without forgetting important details.

#Omni-LLM #token compression #modality-asymmetric
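The "modality-asymmetric" idea can be sketched as: score each modality's tokens for importance, then keep a different fraction per modality (e.g. fewer video tokens than audio tokens). The L2-norm scoring and the ratios below are stand-in assumptions, not OmniSIFT's learned selection.

```python
import numpy as np

def compress_tokens(tokens, scores, keep_ratio):
    """Keep the top `keep_ratio` fraction of tokens by importance score."""
    k = max(1, int(round(keep_ratio * len(tokens))))
    keep = np.sort(np.argsort(scores)[-k:])  # top-k, in original order
    return tokens[keep]

def asymmetric_compress(audio, video, audio_ratio=0.5, video_ratio=0.25):
    """Compress each modality with its own ratio (modality-asymmetric).

    Importance here is just each token's L2 norm — a toy stand-in for
    the learned scoring an omni-modal model would use.
    """
    a = compress_tokens(audio, np.linalg.norm(audio, axis=-1), audio_ratio)
    v = compress_tokens(video, np.linalg.norm(video, axis=-1), video_ratio)
    return a, v
```

Keeping the surviving tokens in their original order matters: positional structure is what lets the model still "read" the compressed sequence.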

Enhancing Multi-Image Understanding through Delimiter Token Scaling

Intermediate
Minyoung Lee, Yeji Park et al. · Feb 2 · arXiv

Large Vision-Language Models (LVLMs) are great with one picture but get confused when you give them several, often mixing details from different images; scaling the delimiter tokens between images helps keep them apart.

#Large Vision-Language Models #Multi-image understanding #Delimiter tokens
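A minimal sketch of delimiter token scaling: multiply the hidden states of the delimiter tokens that sit between images by a constant factor, giving the model a stronger boundary signal. The constant-factor rule and names below are assumptions for illustration; the paper's exact scaling may differ.

```python
import numpy as np

def scale_delimiters(hidden, token_ids, delim_id, scale=1.5):
    """Scale hidden states of image-delimiter tokens by a constant.

    Amplifying the delimiters that separate images is meant to give the
    model a clearer boundary between them, so it mixes up per-image
    details less (toy version of delimiter token scaling).
    """
    out = hidden.copy()          # leave the input untouched
    mask = token_ids == delim_id
    out[mask] *= scale
    return out
```

Only the boundary tokens change; the image tokens themselves pass through unmodified.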