LatentSync

by jiafuzeng · 0 stars

We present LatentSync, an end-to-end lip-sync method built on audio-conditioned latent diffusion models, without any intermediate motion representation. This diverges from previous diffusion-based lip-sync methods, which rely on pixel-space diffusion or two-stage generation. Our framework leverages the powerful capabilities of Stable Diffusion to directly model complex audio-visual correlations.
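The abstract describes conditioning a latent diffusion model directly on audio features. A common mechanism for this kind of conditioning (used by Stable Diffusion for text, and plausibly adaptable to audio) is cross-attention, where latent video tokens attend to audio embeddings. The sketch below is purely illustrative, not the paper's implementation: all shapes, weights, and names are hypothetical, and it shows only the core attention arithmetic in NumPy.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def audio_cross_attention(latent_tokens, audio_tokens, d=16, seed=0):
    """Toy cross-attention: video latent tokens (queries) attend to
    audio embeddings (keys/values). Weights are random placeholders;
    a real model would learn them."""
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((latent_tokens.shape[-1], d))
    Wk = rng.standard_normal((audio_tokens.shape[-1], d))
    Wv = rng.standard_normal((audio_tokens.shape[-1], d))
    Q = latent_tokens @ Wq          # (num_latents, d)
    K = audio_tokens @ Wk           # (num_audio, d)
    V = audio_tokens @ Wv           # (num_audio, d)
    attn = softmax(Q @ K.T / np.sqrt(d))  # each latent token's weights over audio frames
    return attn @ V                 # audio-conditioned latent features

# Hypothetical shapes: 64 spatial latent tokens, 10 audio feature frames.
latents = np.random.default_rng(1).standard_normal((64, 32))
audio = np.random.default_rng(2).standard_normal((10, 24))
out = audio_cross_attention(latents, audio)
print(out.shape)  # (64, 16)
```

In a latent diffusion UNet this block would sit inside the denoiser, letting each denoising step see the audio, which is how the model can couple lip motion to speech without an explicit motion representation.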

View on GitHub

Nodes (0)

No node definitions found for this pack.

LatentSync | TealPug Node Explorer