lip-sync models at a glance

sync. offers a suite of lip-sync models engineered to animate your videos with naturalistic lip movement, matching audio to video with minimal effort.

capabilities of our models

  • universal fit: our models work out of the box with any spoken audio and any face; no per-speaker training is required.
  • precision: they edit only the mouth region to match the audio, leaving the rest of the video untouched.

model selection guide

sync-1.5.1-beta

top-tier sync: our most advanced model, delivering the highest sync accuracy and the most lifelike skin tones. when to switch: if you notice issues with teeth appearance or mouth color, fall back to sync-1.5.0.

sync-1.5.0

stable and tested: this version is our backbone, proven across a wide variety of videos. why choose this: if the beta model doesn’t meet your needs or you’re upgrading from an older model, this is your best bet.

sync-1

legacy support: now surpassed by newer models. it remains available via the API, but requests are automatically upgraded to sync-1.5.0 for better results.
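If you call the API directly, the model is selected by name in the request payload. The snippet below is a minimal sketch under assumed details: the endpoint URL, the x-api-key header, and the payload field names are illustrative placeholders, so check the API reference for the exact request schema.

```python
# minimal sketch of selecting a model in a direct API call.
# the endpoint, auth header, and payload fields below are assumptions for
# illustration; consult the API reference for the exact request schema.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

payload = {
    "model": "sync-1.5.0",  # or "sync-1.5.1-beta" for the most advanced model
    "input": [
        {"type": "video", "url": "https://example.com/talking-head.mp4"},
        {"type": "audio", "url": "https://example.com/voiceover.wav"},
    ],
}

response = requests.post(
    "https://api.sync.so/v2/generate",  # assumed endpoint
    headers={"x-api-key": API_KEY},     # assumed auth header
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())  # typically a job you can poll until the synced video is ready
```

Requests that still specify sync-1 are upgraded to sync-1.5.0 automatically, so new integrations should name one of the 1.5.x models explicitly.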

understanding model limitations

To get the most out of our models, here’s what you should keep in mind:

  • face visibility: if the face is hidden or turned away, the model won’t sync that section.
  • single face focus: our models currently sync one face at a time. In videos with multiple faces, they may not select the correct one.
  • forward-facing works best: steep angles or side profiles can be tricky for our models.
  • resolution limit: best results are seen with faces up to 512 pixels in size.

Keeping these limitations in mind helps you align your video inputs with our models’ strengths for the best lip-sync results; the sketch below shows one way to pre-check a video against them.
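The following is a lightweight pre-flight check, not part of the sync. API: it uses OpenCV’s bundled frontal-face Haar cascade to sample frames and flag sections with no visible face, more than one face, or a face larger than 512 pixels. The file name and sampling interval are placeholders, and any face detector of your choice would work just as well.

```python
# pre-flight check against the limitations above, using OpenCV's bundled
# Haar cascade face detector (not part of the sync. API). samples frames
# and warns when a frame has no visible face, more than one face, or a
# face larger than 512 px.
import cv2

MAX_FACE_SIZE = 512  # best results are seen with faces up to 512 px


def check_video(path: str, sample_every: int = 30) -> None:
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) == 0:
                print(f"frame {frame_idx}: no visible face -- this section won't be synced")
            elif len(faces) > 1:
                print(f"frame {frame_idx}: {len(faces)} faces -- the model may not pick the right one")
            else:
                _, _, w, h = faces[0]
                if max(w, h) > MAX_FACE_SIZE:
                    print(f"frame {frame_idx}: face is {max(w, h)} px -- above the 512 px sweet spot")
        frame_idx += 1
    cap.release()


check_video("talking-head.mp4")  # placeholder path
```

Because the cascade only detects frontal faces, frames with steep side profiles will also surface as “no visible face”, which doubles as a rough check for the forward-facing guideline above.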