a note from sync.

we’re grateful to have you here.

we’re early - but we’re shipping, and we’re on a mission.

we’re breaking the language barrier 🌍.

we’re young, hungry, and uniquely experienced – shipping across product + engineering + research is fun. beyond lipsync, there’s a world of models to build / understanding to create.

we’ve resolved to share as we go – blessed to have you along for the ride 🚀

overview

how does it work?

at the core of our product, we’re building audio-visual models – generating video conditioned on a given audio track.

all you need to do is:

  1. submit an audio + video pair
  2. we sync the video to the audio – no training required
  3. get back a lipsynced video (see the sketch below) → [future] stream the video back as it processes
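
to make that flow concrete, here’s a minimal sketch of the round trip in python. everything named here is a placeholder – the base url, the `/v1/generate` and `/v1/jobs/{id}` endpoints, the request / response fields, and the `SYNC_API_KEY` env var are illustrative assumptions, not our actual api surface.

```python
import os
import time

import requests

API_BASE = "https://api.example.com"  # hypothetical base url
HEADERS = {"Authorization": f"Bearer {os.environ['SYNC_API_KEY']}"}  # hypothetical auth

# 1. submit an audio + video pair (urls are illustrative)
job = requests.post(
    f"{API_BASE}/v1/generate",  # hypothetical endpoint
    headers=HEADERS,
    json={
        "audio_url": "https://example.com/speech.wav",
        "video_url": "https://example.com/talking_head.mp4",
    },
    timeout=30,
)
job.raise_for_status()
job_id = job.json()["id"]  # hypothetical response field

# 2–3. poll until the lipsynced video is ready
while True:
    status = requests.get(
        f"{API_BASE}/v1/jobs/{job_id}", headers=HEADERS, timeout=30
    ).json()
    if status["state"] == "done":  # hypothetical status schema
        print("lipsynced video:", status["output_url"])
        break
    if status["state"] == "failed":
        raise RuntimeError(status.get("error", "generation failed"))
    time.sleep(5)  # generation takes a while; poll politely
```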

fundamentally, we split our platform into two main parts:

  • the playground: create / experiment w/ a simple, intuitive UX
  • the api: plug into your apps / services / businesses + scale w/ demand

the playground

our playground is designed to be a simple way to try different models, play w/ parameters, and build an intuition around which models work best for which use case.

we’re optimizing for a simple UX + advanced configurability - you’ll be able to generate any piece of content w/ a suite of state-of-the-art models without ever having to see a single line of code (e.g. no more google colabs).

the api

our api is designed to be a simple way to integrate our models into your apps / platforms / services + scale as your users / use cases do.
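
as a sketch of what scaling with demand can look like from the client side, here’s one way to fan out a batch of jobs concurrently. it reuses the hypothetical endpoints and schema from the sketch above – a plain thread pool, not a prescription for how to integrate.

```python
import os
from concurrent.futures import ThreadPoolExecutor

import requests

API_BASE = "https://api.example.com"  # hypothetical base url, as above
HEADERS = {"Authorization": f"Bearer {os.environ['SYNC_API_KEY']}"}

def submit(pair: dict) -> str:
    """submit one audio/video pair; return its job id (hypothetical schema)."""
    resp = requests.post(
        f"{API_BASE}/v1/generate", headers=HEADERS, json=pair, timeout=30
    )
    resp.raise_for_status()
    return resp.json()["id"]

pairs = [
    {"audio_url": "https://example.com/a1.wav", "video_url": "https://example.com/v1.mp4"},
    {"audio_url": "https://example.com/a2.wav", "video_url": "https://example.com/v2.mp4"},
]

# fan out submissions; each returned id can then be polled as in the earlier sketch
with ThreadPoolExecutor(max_workers=8) as pool:
    job_ids = list(pool.map(submit, pairs))
print("submitted jobs:", job_ids)
```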