AI video has a moment with Sora app release

Your guide to getting the most out of generative AI tools

Two top AI companies are taking on TikTok with a feed of AI-generated videos. One launch felt like a flop, but the other is reportedly breaking the internet.

First, Meta rolled out Vibes, a new feed in the Meta AI app (and on meta.ai) built exclusively for AI-generated short videos. Users can create from scratch, remix existing clips by changing visuals or music, and post directly or cross-share to Instagram and Facebook Stories/Reels. The feed will personalize over time and aims to showcase creative AI content from artists and communities.

But the real splash came when OpenAI released Sora, an invite-only iOS app where users can generate and share 10-second AI videos with friends. Videos are surfaced in a TikTok-style feed and users can remix content made by others. The app includes a “cameo” feature letting people insert themselves or others into these AI scenes, with control over who can use your likeness.

Even OpenAI CEO Sam Altman got in on the fun, letting people use his likeness in their remixed videos. (continues below)

Sign up for Oct. 14 ChatGPT Workshop

Our next workshop will help you Become a ChatGPT Power User. We'll explore the full range of ChatGPT's features, from Deep Research to Custom GPTs to image and video creation to integrations with tools like Google Drive and Gmail. Attend in person at Drake University or virtually.

What: Become a ChatGPT Power User
When: Oct. 14, 9 a.m. to 2:30 p.m.
Price: Early bird rate of $125 through Oct. 7 (Use code EARLY to save an additional $25 if you sign up by 5 p.m. today)
Where: Drake University or attend virtually
Sign up now!

As I type this, Sora by OpenAI is the No. 1 app in the Apple App Store, ahead of Gemini and ChatGPT. To get access, users must download the app and then wait for an invite code (or have a friend share one).

The Meta AI app, meanwhile, is nowhere to be seen in the App Store rankings.

So what makes Sora a success? First, it uses a new AI model, Sora 2. It’s more physically accurate, more realistic, and more controllable than previous versions. And it supports synchronized dialogue and sound effects, not just visuals.

It can generate complex, dynamic scenes (gymnastics, backflips, physical interactions) in a way that respects physics: objects rebound instead of teleporting. It also maintains consistency across shots, following instructions over multi-shot sequences while preserving world state.

And then there is the "cameo" feature: people can insert themselves or friends into generated scenes via a short recording, making this content something users actually want to share on other social networks.

We’re eager to try out Sora 2 for ourselves (anyone want to share an invite code?). We’ll share more on our Instagram when we have access.