Meet OpenAI’s New Text-To-Video AI, Sora

Alexandra Lustig

Feb 15, 2024

Sora AI Video Still

OpenAI just unveiled a new text-to-video AI called Sora. Why companies are still releasing products with bizarre feminine-sounding names is beyond me, but I digress...


According to their site, “Sora is an AI model that can create realistic and imaginative scenes from text instructions,” and can generate videos up to a minute long.


This beautiful video of a handsome Dalmatian pup walking across colorful windowsills was generated from the following text prompt:


“Prompt: The camera directly faces colorful buildings in burano italy. An adorable dalmation looks through a window on a building on the ground floor. Many people are walking and cycling along the canal streets in front of the buildings.”


It’s apparently able to create complex scenes with multiple characters, specific types of motion, and details of the subject and background. They claim that, unlike other AI text-to-video models that can generate wonky and uncanny-valley-type content, Sora understands the context behind the user’s prompt and how those things exist in the real world.


This first-person view of a gorgeous art gallery was generated from the following text prompt:

“Prompt: Tour of an art gallery with many beautiful works of art in different styles.”


It’s not yet available to the public, but OpenAI is granting access to a select few lucky visual artists, designers, and filmmakers to gather feedback. It seems this new product is mostly geared toward creatives and creative professionals.

Don’t worry, there’s a whole lot on Safety, too.

OpenAI claims they’re taking “several important safety steps” before they make it available to the public. They’re working with “red teamers” who are experts in misinformation, hateful content, and bias, to adversarially test the model.

OpenAI is also apparently building tools to help detect misleading content, such as a classifier that can detect when a video was created by Sora, as well as embedding C2PA metadata in future iterations of the model.
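
OpenAI hasn’t shared details of that detection classifier, and real C2PA verification means checking cryptographic signatures with dedicated tooling, but as a rough illustration of what “embedded provenance metadata” means in practice, here is a minimal, naive Python sketch (the file name is hypothetical) that only checks whether a media file contains the “c2pa” manifest-store label at all:

```python
# Naive heuristic sketch: C2PA-compliant tools embed the manifest store in a
# JUMBF box labeled "c2pa". Scanning for that label only hints that provenance
# metadata is present; it does NOT verify signatures the way real C2PA tooling
# (e.g. the open-source c2patool) does, and it can false-positive.
from pathlib import Path


def has_c2pa_marker(path: str, chunk_size: int = 1 << 20) -> bool:
    """Return True if the raw bytes of the file contain a 'c2pa' label."""
    needle = b"c2pa"
    tail = b""
    with Path(path).open("rb") as f:
        while chunk := f.read(chunk_size):
            # Carry the last few bytes forward so a label split across
            # chunk boundaries is still found.
            if needle in tail + chunk:
                return True
            tail = chunk[-(len(needle) - 1):]
    return False


if __name__ == "__main__":
    print(has_c2pa_marker("sora_clip.mp4"))  # hypothetical file name
```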

They also claim that their text classifier will check and reject text input prompts that “are in violation of our usage policies, like those that request extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others.”
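
Sora’s own prompt filter isn’t public either, so purely as an illustration of the “check the prompt, reject if flagged” pattern, here is a minimal sketch that runs a prompt through OpenAI’s general-purpose Moderation API; that’s a different, publicly documented classifier, so treat it as an analogy rather than how Sora actually does it:

```python
# Illustrative only: OpenAI's public Moderation endpoint, not Sora's own
# prompt classifier. Requires `pip install openai` and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()


def prompt_allowed(prompt: str) -> bool:
    """Return False if the moderation classifier flags the prompt."""
    result = client.moderations.create(input=prompt).results[0]
    if result.flagged:
        # result.categories shows which policy areas were triggered.
        print("Rejected prompt, categories:", result.categories)
        return False
    return True


if prompt_allowed("Tour of an art gallery with many beautiful works of art."):
    print("Prompt passes; it would be handed along to the video model.")
```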

This no doubt comes after the huge outcry from the creative community about stolen artwork, visual content, and other creative assets. It also follows pushback from celebrities like Tom Hanks, who posted a video on his Instagram page last year warning his followers about a company using his likeness to promote a dental plan, and the recent, extremely worrying sexually explicit AI-generated images of pop star Taylor Swift.

OpenAI also claims they will be engaging with policymakers, educators, and artists around the world to “understand their concerns and to identify positive use cases for this new technology.” If you’d like to dig into the nerdy details of how exactly Sora works, check out their page here and look forward to a possible deep-dive from us soon.



This article was originally published on a Squarespace domain on 2/15/24. Comments from that domain have been lost.

