How I Study AI - Learn AI Papers & Lectures the Easy Way

Papers (8)


SkyReels-V4: Multi-modal Video-Audio Generation, Inpainting and Editing model

Intermediate
Guibin Chen, Dixuan Lin et al. · Feb 25 · arXiv

SkyReels-V4 is a single, unified model that makes videos and matching sounds together, while also letting you fix or change parts of a video.

#multimodal diffusion transformer #video-audio generation #inpainting

DreamID-Omni: Unified Framework for Controllable Human-Centric Audio-Video Generation

Intermediate
Xu Guo, Fulong Ye et al. · Feb 12 · arXiv

DreamID-Omni is one model that can create, edit, and animate human-centered videos with matching voices, all in sync.

#audio-video generation #diffusion transformer #identity preservation

Ex-Omni: Enabling 3D Facial Animation Generation for Omni-modal Large Language Models

Intermediate
Haoyu Zhang, Zhipeng Li et al. · Feb 6 · arXiv

Ex-Omni is an open-source AI system that understands text or speech and then talks back while animating a 3D face in sync with its voice.

#omni-modal LLM #3D facial animation #lip-sync

Apollo: Unified Multi-Task Audio-Video Joint Generation

Intermediate
Jun Wang, Chunyu Qiang et al. · Jan 7 · arXiv

Apollo is a single, unified model that can generate video and audio together or separately, keeping the two tightly in sync.

#audio-video generation #multimodal diffusion #single-tower transformer

LTX-2: Efficient Joint Audio-Visual Foundation Model

Intermediate
Yoav HaCohen, Benny Brazowski et al. · Jan 6 · arXiv

LTX-2 is an open-source model that generates video and sound together from a text prompt, so the picture and audio match in both timing and meaning.

#text-to-video #text-to-audio #audiovisual generation

LiveTalk: Real-Time Multimodal Interactive Video Diffusion via Improved On-Policy Distillation

Beginner
Ethan Chern, Zhulin Hu et al. · Dec 29 · arXiv

LiveTalk distills slow, many-step video diffusion into a fast, 4-step, real-time system for talking avatars that listen, think, and respond with synchronized video.

#real-time video diffusion #on-policy distillation #multimodal conditioning

Seedance 1.5 pro: A Native Audio-Visual Joint Generation Foundation Model

Intermediate
Team Seedance, Heyi Chen et al. · Dec 15 · arXiv

Seedance 1.5 pro is a single model that generates video and sound jointly, so lips, music, and actions line up naturally.

#audio-visual generation #diffusion transformer #cross-modal synchronization

VABench: A Comprehensive Benchmark for Audio-Video Generation

Beginner
Daili Hua, Xizhi Wang et al. · Dec 10 · arXiv

VABench is an all-in-one benchmark that measures how well AI models generate videos with matching sound and pictures.

#audio-video benchmark #synchronization #lip-sync