
Behind the Scenes: Tools to Create an AI Animated Short

Our team recently completed an animated short film for Amazon's first Culver Cup AI filmmaking competition using a mix of traditional filmmaking and AI tools.

We obviously cover AI tools a lot on the newsletter and YouTube channel, but this time we wanted to put them to work and get real-world experience with what they can (and can't) do. We also wanted to push traditional filmmaking techniques, blending real cameras and real actors with AI tools.

This article breaks down the tools and workflows we used. But before diving in, check out Skillet & The Jetport Diner on Escape AI.

Behind the Scenes

We used a hybrid approach, filming real actors on green screens and transforming them into CG characters.

The Process:

  • Recording performances on iPhone against a green screen, using Skyglass to bring in an image of the environment and composite it in real time

  • Using Wonder Dynamics to transform human actors into consistent CG characters

  • Leveraging Runway's video-to-video generation for style transfer

  • Creating environments and images with tools like Playbook and Ideogram, then turning them into video with Luma AI or Runway (a rough sketch of this step follows below)
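
To make the image-to-video step concrete, here is a minimal Python sketch using Runway's official SDK. The model name, image URL, prompt text, and polling loop are illustrative assumptions drawn from Runway's public API docs, not the exact settings from our project.

```python
# Minimal sketch of an image-to-video generation via Runway's Python SDK
# (pip install runwayml). The model name, image URL, and prompt are
# assumptions for illustration, not our production settings.
import time

from runwayml import RunwayML

client = RunwayML()  # reads RUNWAYML_API_SECRET from the environment

# Kick off a generation from a still frame of the environment
task = client.image_to_video.create(
    model="gen3a_turbo",
    prompt_image="https://example.com/diner-still.png",  # placeholder URL
    prompt_text="slow push-in on a retro roadside diner at dusk",
)

# Generation is asynchronous: poll the task until it resolves
while True:
    task = client.tasks.retrieve(task.id)
    if task.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

if task.status == "SUCCEEDED":
    print(task.output)  # list of URL(s) for the generated clip
```

In practice we drove these generations through the web apps, but the async create-then-poll pattern above is how most of these video models expose themselves to scripting.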

The Tools

Writing

Production

  • Global Objects provided an untextured 3D model of a real diner

  • Playbook let us load that 3D model, frame shots precisely with a 3D camera, and then retexture the rendered diner in whatever style we needed

  • Skyglass for real-time compositing when recording people

  • Wonder Dynamics for transforming human performances into CG characters

Image & Video

  • Ideogram for static image generation and menu cards

  • Cuebric for segmented background generation

  • Runway for image-to-video and video-to-video style transfer

  • Luma AI's Dream Machine for image-to-video generation, sometimes using start- and end-frame keyframes (see the sketch after this list)

  • Kling and Hailuo for additional text-to-video generation

  • Tripo AI for high-quality, ready-to-use 3D model generation

  • Unreal Engine for combining multiple elements and creating controlled 3D camera movements

  • Blender as a source of 3D CG content

  • DaVinci Resolve for video editing

  • Adobe apps for graphic design
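
For the keyframed Dream Machine shots mentioned in the list above, the same idea can be scripted through Luma's API. This is a hedged sketch using the lumaai Python SDK; the prompt and image URLs are placeholders, and the parameter and state names come from Luma's public docs rather than our project files.

```python
# Sketch of a Dream Machine generation pinned to a start and an end frame
# via Luma's Python SDK (pip install lumaai). URLs are placeholders and
# the exact parameters are assumptions based on Luma's public API docs.
import os
import time

from lumaai import LumaAI

client = LumaAI(auth_token=os.environ["LUMAAI_API_KEY"])

# frame0 constrains the first frame, frame1 the last, so the model
# interpolates a camera move between two stills you already like
generation = client.generations.create(
    prompt="the camera pushes past a chrome diner counter, neon reflections",
    keyframes={
        "frame0": {"type": "image", "url": "https://example.com/start.png"},
        "frame1": {"type": "image", "url": "https://example.com/end.png"},
    },
)

# Poll until the generation resolves
while generation.state not in ("completed", "failed"):
    time.sleep(5)
    generation = client.generations.get(id=generation.id)

if generation.state == "completed":
    print(generation.assets.video)  # URL of the finished clip
```

Pinning both endpoints trades some of the model's freedom for shots that cut more predictably against adjacent footage.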

Sound

What's Next

We had a few more workflows in mind but hit technical roadblocks or ran out of time. Next time, we'd like to try:

  • Using ComfyUI for more precise control over image generation and consistency (a rough API sketch follows this list)

  • Implementing Unreal Engine's MetaHuman and Move AI for higher-fidelity character animation

  • Exploring layer separation techniques to better control individual elements
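
On the ComfyUI item: part of its appeal is that a node graph exported in ComfyUI's API format can be queued programmatically against a local server, which would make batch runs and consistency experiments scriptable. A rough sketch, assuming a default local install on port 8188; the workflow filename and node id are hypothetical:

```python
# Rough sketch of queueing an exported ComfyUI workflow against a local
# server (default http://127.0.0.1:8188). The workflow must be exported
# in ComfyUI's "API format"; the filename and node id are placeholders.
import json
import urllib.request

with open("workflow_api.json") as f:
    workflow = json.load(f)

# Tweak a node input before queueing, e.g. the prompt text on a
# CLIPTextEncode node. The node id "6" depends entirely on your graph.
workflow["6"]["inputs"]["text"] = "retro diner exterior, stylized CG look"

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # returns a prompt_id for tracking the job
```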
