Nant Reveals Their New LED Wall System
Pixotope Reveal, New Flux Tools, Runway Expand Video
Welcome to VP Land!
For the complete opposite of AI, we just did a live stream with Blackmagic hardware over SDI.
With some extra cables and a few adapters, we were able to get two-way communication between all the devices - here’s a video breaking down the workflow. Though, pretty soon, this will all be SMPTE ST 2110.
NantStudios posted an exciting reveal of their new VP setup - we’ve got the breakdown. Plus, Pixotope has some real-time keying on any background and a bunch of AI updates (as usual).
Let’s dive into it!
Nant’s Dynamic Volume System for VP
NantStudios has introduced the Dynamic Volume System (DVS), a new approach to ICVFX that aims to enhance flexibility and efficiency in virtual production.
The system allows LED volumes to be reconfigured dynamically to meet specific creative needs across film, TV, advertising, and e-sports.
Watch the full video on LinkedIn
The Breakdown
DVS uses 'wallPods' with advanced motors, omni-directional wheels, and self-leveling sensors to create adaptable LED volume configurations.
The system can form various LED shapes or split into multiple smaller volumes, and can be moved between stages or into storage.
NantStudios' Design + Deliver division will offer DVS alongside other innovations, leveraging their experience with large-scale virtual production infrastructure.
DVS aims to reduce friction for creative teams and operators by eliminating the need to design scenes for a specific configuration.
The technology offers a potentially more cost-effective solution compared to traditional 'pop-up' volumes and location shooting.
SPONSOR MESSAGE
Learn AI in 5 Minutes a Day
AI Tool Report is one of the fastest-growing and most respected newsletters in the world, with over 550,000 readers from companies like OpenAI, Nvidia, Meta, Microsoft, and more.
Our research team spends hundreds of hours a week summarizing the latest news and finding you the best opportunities to save time and earn more using AI.
Pixotope Reveal: AI-Powered Tool Can Segment Up To 20 People in Real Time
Pixotope has introduced Pixotope Reveal, an AI-powered background segmentation tool that enhances the integration of real-time graphics in live productions.
This new software allows for seamless incorporation of 2D and 3D graphics without the need for green screens or manual rotoscoping, potentially transforming broadcast workflows.
Behind the Scenes
Pixotope Reveal uses machine learning to extract on-screen talent from any background, enabling natural interaction with AR graphics.
The tool supports multiple talent extractions simultaneously, handling up to 20 people in a single scene.
It operates at UHD resolution and 60 frames per second, maintaining broadcast-quality results.
The software offers comprehensive video I/O support, including SDI, SMPTE 2110, NDI, and SRT.
Cloud-ready deployment options are available, enhancing flexibility for broadcasters.
Pixotope Reveal can work as a standalone application or integrate into existing Pixotope Graphics workflows.
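Pixotope hasn’t published Reveal’s internals, but conceptually the per-person mattes a segmentation model produces feed a standard “over” composite: camera pixels wherever talent is detected, rendered graphics everywhere else. Here’s a minimal NumPy sketch of that final compositing step (function and variable names are ours, not Pixotope’s):

```python
import numpy as np

def composite_talent(frame, masks, background):
    """Composite segmented talent over rendered graphics.

    frame:      (H, W, 3) float array, the live camera frame
    masks:      list of (H, W) float arrays in [0, 1], one soft
                matte per detected person (from an ML model)
    background: (H, W, 3) float array, the rendered graphics
    """
    # Union of all per-person mattes: any pixel covered by at
    # least one person keeps the camera image.
    alpha = np.zeros(frame.shape[:2])
    for m in masks:
        alpha = np.maximum(alpha, m)
    alpha = alpha[..., None]  # broadcast the matte over RGB channels
    # Standard "over" compositing: talent over graphics.
    return alpha * frame + (1.0 - alpha) * background
```

In production this runs per frame at UHD and 60 fps, with the mattes coming from the segmentation model rather than hand-built arrays, and the soft matte edges are what make hair and motion blur blend naturally with the AR graphics.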
What’s New in AI
Introducing, Expand Video.
This new feature allows you to transform videos into new aspect ratios by generating new areas around your input video. Expand Video has begun gradually rolling out and will soon be available to everyone.
See below for more examples and results.
— Runway (@runwayml)
9:36 PM • Nov 22, 2024
Runway just added an Expand Video feature, now gradually rolling out.
Black Forest Labs has released FLUX.1 Tools, a suite of four new features that extend image-editing capabilities for its text-to-image model FLUX.1.
Flux introduces image prompts, offering a new way to enhance your generative process, free for all users.
Vidu-1.5 introduces Multi-Entity Consistency, allowing seamless integration of multiple elements in video creation.
A new workflow combining RunwayML's Vid2Vid, slow-motion interpolation, and Polycam generates detailed 3D models from simple 20-second videos of basic 3D objects.
FaceTracker, a new Blender add-on from KeenTools, enables markerless facial motion capture and offers versatile applications for VFX, beauty work, and character animation.
VFX & Virtual Production starts at 13:13
👩🏻‍💻 DaVinci Resolve 19.1 and Fusion Studio 19.1 updates offer significant improvements in multicam, audio, and visual effects workflows, including support for spatial photos and videos.
🤖 Is AI taking over your job? Here’s a perspective.
🌄 Optic8 and NEWTech Prep have collaborated to install a professional-grade LED wall for virtual production training at a high school, potentially setting a new standard for media education.
📚 Final Pixel Academy's concept of "inducation" proposes a deeper integration between film industry practices and educational processes to better prepare students for real-world challenges.
🎦 Check out this AI-powered virtual production workflow that combines AnimateDiff, Blender tracking, and Gaussian splatting for enhanced environment creation in Unreal Engine 5.4.
👔 Virtual Production Gigs
Technical Program Manager
Mo-Sys Engineering Ltd
Production Coordinator / Video Systems Engineer
Phil Galler
Virtual Production Internship
Orbital Studios
📆 Upcoming Events
November 20-23
Flux Festival
November 26
Volumetric Media Interoperability Town Hall
December 8
LDI Conference & Tradeshow 2024
December 10
Tech Me Out: Production Summit
January 7
CES 2025
View the full event calendar and submit your own events here.
Thanks for reading VP Land!
Have a link to share or a story idea? Send it here.
Interested in reaching media industry professionals? Advertise with us.