Pika Labs, another generative AI video startup, released a demo reel for their new 1.0 model and it's been blowing up X.
Some of what was demoed in the video:
A variety of text-to-video outputs in different styles, from 3D animation to slow-mo food shots to realistic-looking animals
Also demos image-to-video, with an open text box for controlling the camera movement ("dolly out")
In a cinematic history deep cut, they also demo video-to-video by transforming Eadweard Muybridge's Horse in Motion into different styles
For existing videos, there was a quick demo of a video version of generative fill to expand the image size
Potentially most practical use demoed: being able to highlight an area of a video and use a text prompt to change it, such as changing a woman's top or adding glasses to an animal.
It's a fast-paced, quick-cut video obviously showing off the best outputs, but still impressive.
Forbes has a good write-up about Pika's origin.
CEO and co-founder Demi Guo entered an AI filmmaking contest but was frustrated with the existing tools. She and her team didn't place in the competition.
So she did the obvious thing: launched her own AI video generator company.
She was already a Stanford computer science Ph.D. student, so this wasnât that far-fetched. But she dropped out in April this year with fellow student Chenlin Meng to launch Pika.
Now theyâve raised $55 million from rounds led by former GitHub CEO Nat Friedman and Lightspeed Venture Partners, valuing Pika Labs between $200 million and $300 million.
Unlike other AI video tools, Pika isn't targeting the film industry and isn't looking to replace film production.
"We're not trying to build a product for film production. What we're trying to do is something more for everyday consumers."