
No, Barry Jenkins Doesn't Hate Virtual Filmmaking

Sora, Here, and even more AI video models

In partnership with AI Tool Report

Welcome to VP Land!

Sora is out in the wild. But with all the impressive models that have launched since it was first announced (Kling, Hailuo, Runway Gen-3), does it really matter that much anymore?

From the demos I’ve seen (I’m still waitlisted for access), the most impressive features aren’t the outputs themselves but the controls you get over generations, from merging multiple shots to weighting your modifications to its prompt-based Storyboard feature.

Similar vibes to what we saw with Luma’s Dream Machine overhaul.

Once we get access, we’ll dig deeper into this. For now, let’s jump into a digital Pride Rock and VP Land.

Joey

Barry Jenkins on Virtual Filmmaking for Mufasa

Barry Jenkins, the acclaimed director of the short film Chlorophyl (oh, and Moonlight), had a quote circulating around X last week about his thoughts on all-digital filmmaking: "Not my thing."

The quote came from a long-form piece from Vulture about his upcoming, all-digital Mufasa film.

But if you dig into the actual article, beyond the hot-take tweets, there's a trove of really interesting tidbits on how Jenkins and his team adapted virtual filmmaking to fit their style and made it their own, and how they'd apply what they learned to future projects.

Behind the Scenes

  • Jenkins and longtime DP James Laxton used Unreal VCam and VR headsets to navigate Unreal Engine scenes built by production designer Mark Friedberg. Jenkins and Laxton would scout the virtual environments and find shots in areas Friedberg never intended to be on camera, recreating the vibe of scouting and finding the frame on a real location.

  • The production used 'quadcapping' (quadruped capture) to work with actors in sensor-dotted suits, who were instantly transformed into four-legged animals in the virtual camera feeds. But these were essentially physical performance actors; the voice cast recorded their lines early in production. The team created a radio-edit version of the film and played the dialogue back on the virtual stage while recording the performances.

  • Jenkins maintained the feel of working in a real, physical location by intentionally keeping certain "mistakes," like camera shakes and lens flares, to create authentic moments that feel naturally captured rather than digitally manufactured (which required pushing back against Disney's default reaction to make everything perfect).

Virtual Muppets?

The article ends with the quoted line about Jenkins wanting to work 'the other way again' with physical sets.

But then he riffs on using the new virtual tools for other potential projects.

But at the same time, Jenkins doesn’t rule out the possibility of using Mufasa methods to figure out new ways to make Barry Jenkins movies. Somehow, we land on the subject of the Muppets and suddenly he’s imagining how he could direct puppet performers and transpose them onto virtual sets. “You know, a Muppet movie done in this style would be awesome. Awesome. In the same way we generate our PlayStation version of a scene, you could have a set that’s just the actual physical puppeteers, and Muppets are blocking the scene but just in a black box, you know?” he says. “Or, let’s say, a green box. You’re capturing their performances and then you’re putting them all into virtual sets. I can see how that could work.”

How and Why Barry Jenkins Made ‘Mufasa’ for Disney, Vulture

SPONSOR MESSAGE

Learn AI in 5 Minutes a Day

AI Tool Report is one of the fastest-growing and most respected newsletters in the world, with over 550,000 readers from companies like OpenAI, Nvidia, Meta, Microsoft, and more.

Our research team spends hundreds of hours a week summarizing the latest news and finding you the best opportunities to save time and earn more using AI.

Virtual Production Enhances De-Aging in "Here"

Robert Zemeckis' "Here" showcases an innovative approach to visual effects by combining virtual production and AI technologies. Noah Kadner, founder of VirtualProducer.io and Virtual Production Editor at American Cinematographer magazine, breaks down this combination in the December issue.

Key Insights

  • The film, based on Richard McGuire's graphic novel, features Tom Hanks and Robin Wright at various ages throughout their characters' lives, utilizing advanced de-aging techniques and LED wall technology to bring the story to life.

  • Visual effects supervisor Kevin Baillie integrated real-time de-aging processes during production, allowing actors to see their younger selves immediately after takes. Baillie explains, "Seeing the younger face makes it instantly clear whether everything's working. That's hard to gauge from raw footage alone."

  • The film employed LED walls with Unreal Engine-generated animations to create dynamic backgrounds visible through windows. Two identical sets with LED walls were constructed to maximize efficiency and prevent downtime.

  • The LED setup featured ROE Visual BP2 2.8mm panels at 2,464×1,232 resolution, creating a 23'×11'6" screen (those numbers reconcile; see the quick check after this list).

  • Brompton Technology Limited's Tessera SX40 processors and XD distributors managed the LED wall footage.
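
As a quick sanity check on those screen dimensions, here's a minimal Python sketch. The panel specs in the comments (500 mm square, 176×176 pixels per BP2 panel) come from ROE's published data, not the article, so treat them as assumptions:

```python
# Sanity check on the reported LED wall dimensions.
# Assumption (from ROE's spec sheet, not the article): a BP2 panel is
# 500 mm square at 176x176 pixels, i.e. a true pitch of ~2.84 mm.

MM_PER_FOOT = 304.8
PANEL_MM = 500   # BP2 panel edge length in mm
PANEL_PX = 176   # BP2 pixels per panel edge

def wall_feet(pixels: int) -> float:
    """Physical length in feet of a run of `pixels`, counted in whole panels."""
    panels = pixels / PANEL_PX
    return panels * PANEL_MM / MM_PER_FOOT

width_ft = wall_feet(2464)   # 2464 / 176 = 14 panels -> 7.0 m
height_ft = wall_feet(1232)  # 1232 / 176 = 7 panels  -> 3.5 m
print(f"{width_ft:.2f} ft x {height_ft:.2f} ft")
# -> 22.97 ft x 11.48 ft, matching the article's 23' x 11'6" screen
```

In other words, the wall works out to a 14×7 grid of panels, and the quoted 2.8mm is the rounded pixel pitch (500 mm ÷ 176 px ≈ 2.84 mm).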

The Latest in AI

Video Models

Tencent unveils HunyuanVideo, a 13-billion-parameter open-source AI video model.

ByteDance's I2VControl integrates multiple motion control tasks, such as camera movement, object motion, and motion brush effects, to generate videos from images seamlessly.

MagicDriveDiT can produce six realistic, high-resolution (848×1600), 20-second videos from a single prompt.

LTX released its own video model, which also integrates with ComfyUI.

Kling AI's new Custom Models feature captures key character details in your video for improved consistency.

Video Tools

Kling AI's Motion Brush lets you control up to five objects at the same time, and it now works with version 1.5.

Virtual Try-On from Kling AI offers realistic try-on experiences, allowing users to swap individual clothing items or change both simultaneously, while capturing every detail of the clothing.

Runway's Keyframing prototype is an AI tool that combines precision and serendipity to transform creative processes with features like Image-to-Image transformation and non-linear timelines.

Stacking both CogVideoX Fun Control and Reward LoRAs yields better results. Check out this fight scene in marble style.

3D & AI

Google DeepMind debuts Genie 2, an AI model that generates playable 3D worlds from a single image.

Aiuni AI enables you to become any 3D model that mirrors your precise movements and create custom 3D models using image-to-3D AI.

World Labs demoed a model that turns a single image into an explorable 3D scene.

Meshy AI transforms text and images into a high-quality 3D model, allowing you to choose any angle and animate it with an AI video generator.

CAT4D transforms videos into dynamic 3D scenes with a multi-view video diffusion model, creating 3D models that can be paused and explored from different angles.

RunwayML's new 20-second Vid2Vid videos enable the generation of 3D models using Gaussian splatting.

Image & Nodes

Nvidia Sana is a text-to-image framework that efficiently generates high-resolution images up to 4096×4096 with strong text-image alignment, producing a 1024×1024 image in under a second.

Flux announces image prompts, allowing you to condition your generations with images, for free.

Generate consistent depth maps for your videos with DepthCrafter in ComfyUI.

You can now act out scenes, and ComfyUI transforms them into the final shot, accurately capturing poses and emotions.

Black Forest Labs launches FLUX.1 Tools, a suite of models that enhance control and steerability in FLUX.1, enabling the modification and re-creation of real and generated images.

🍿 The Academy has released its shortlist of 10 films competing for Best Visual Effects Oscar nominations, featuring Alien: Romulus, Beetlejuice 2, Gladiator 2, Godzilla x Kong: The New Empire, Here, Kingdom of the Planet of the Apes, Mufasa: The Lion King, A Quiet Place: Day One, and Spaceman.

📜 The public domain expansion in 2025 will include works by Hemingway, Woolf, Picasso, and Hitchcock, providing media professionals with a treasure trove of creative material to explore and reimagine.

😱 VIVE Mars reveals the winning films from its BEST HORROR SHOT! contest, showcasing top talent in the horror genre.

🕹️ Fab announces its latest release, upcoming roadmap, and Quixel updates, including a sneak peek at new content and Bridge enhancements.

👔 Virtual Production Gigs

Technical Program Manager
Mo-Sys Engineering Ltd

📆 Upcoming Events

January 7
CES 2025

February 16 to 20
HPA Tech Retreat 2025

View the full event calendar and submit your own events here.

Thanks for reading VP Land!

Have a link to share or a story idea? Send it here.

Interested in reaching media industry professionals? Advertise with us.
