AI Video Search & M4 Mini Tests
Jumper, Skyglass, Canon VR
Welcome to VP Land!
We’ve got an interesting new utility AI tool that just launched: Jumper.
Plus lots of AI model updates, a look at how the M4 Mac mini holds up for editing, and a new VR lens from Canon.
And a merging of virtual production and AI: Skyglass now works with Gaussian splats, so you can scan a scene (or download one) and film your talent inside that virtual environment.
Let’s dive into it!
New AI Video Search Tool: Jumper
Witchcraft Software has launched Jumper, a new AI-powered search engine for video footage that runs entirely on-device. The tool aims to significantly improve how editors find and access content within their projects, addressing the industry's long-standing need for fast footage search.
Behind the Scenes
Jumper works locally without cloud uploads, addressing privacy concerns and slow internet issues.
The software integrates with popular NLEs like Final Cut Pro and Adobe Premiere, with plans for DaVinci Resolve and Avid Media Composer.
It can search for specific elements in footage, such as "guy wearing green t-shirt," streamlining the editing process (see the sketch after this list for the general idea).
Developed by YouTube creator Arthur Moore and a team of developers, with input from industry professionals.
An enterprise version is in development for server-based analysis and multi-client access.
Jumper supports Multicam and Synchronized Clips, a feature added based on beta tester feedback.
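Jumper's implementation is proprietary, but the general technique behind queries like "guy wearing green t-shirt" is well understood: embed sampled frames and the text query with a vision-language model, then rank frames by similarity. Here's a minimal, hypothetical sketch using OpenAI's CLIP model via Hugging Face transformers. The model ID, sampling interval, and the interview_a001.mov path are illustrative assumptions, not anything from Jumper:

```python
# Not Jumper's actual code: a minimal sketch of on-device text-to-frame
# search using CLIP (assumed stack: transformers, torch, opencv, pillow).
import cv2
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"  # small, runs fine on a laptop
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

def sample_frames(path, every_n_seconds=2.0):
    """Grab one frame every few seconds so we index moments, not every frame."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * every_n_seconds))
    frames, timestamps, i = [], [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
            timestamps.append(i / fps)
        i += 1
    cap.release()
    return frames, timestamps

@torch.no_grad()
def search(path, query, top_k=5):
    frames, timestamps = sample_frames(path)
    image_emb = model.get_image_features(
        **processor(images=frames, return_tensors="pt"))
    text_emb = model.get_text_features(
        **processor(text=[query], return_tensors="pt", padding=True))
    # Cosine similarity between the query and every sampled frame.
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    scores = (image_emb @ text_emb.T).squeeze(-1)
    best = scores.topk(min(top_k, len(frames)))
    return [(timestamps[i], scores[i].item()) for i in best.indices.tolist()]

# Hypothetical clip name; returns (timestamp, score) pairs for the best matches.
print(search("interview_a001.mov", "guy wearing green t-shirt"))
```

A production tool would cache embeddings per clip and batch the encoding passes, but the ranking idea is the same.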
SPONSOR MESSAGE
Descript is an easy-to-use, text-based video editor for podcasts and talking-head videos.
But we've also been using it as part of our longer-form editing process with Resolve: paper edits.
Sorting through hours of interview footage and building out a radio edit is a pain.
With Descript, I edit videos as quickly as editing a document. I import my footage, get instant transcriptions, and search through hours of content in seconds.
When I hear something I like, I highlight the text and add it to my rough-cut composition. Then I build everything out, copy and paste soundbites like I would with a text document - but the video updates too.
I can also add temp VO, scene notes, and comments. The whole team can be on the same project, working on this in real-time.
When we're done, we export an XML from Descript and bring it into our NLE of choice.
Yes, most editing apps have added some transcription support, but IMO, none of them come close to the speed and ease of use in working with text in Descript.
How’s the M4 Mac Mini for editing? AI?
Scott Simmons at Pro Video Coalition put Apple's new M4 Mac mini through extensive real-world testing, revealing it as a compelling option for video professionals seeking power without the Mac Studio premium.
The ultra-compact machine — just one-fourth the size of a Mac Studio — impressed with:
Silent operation during intensive tasks
Strong ProRes workflow performance
Capable 8K RED footage handling
New Thunderbolt 5 ports for future expansion
While it lacks the multiple media engines found in Max and Ultra chips (resulting in slower H.264 encoding on longer exports), its $2,200 price tag for a well-equipped model (64GB RAM, 1TB SSD) offers significant value compared to the $2,600 M2 Mac Studio.
Simmons found the controversial bottom-mounted power button to be a non-issue in practice, though editors should budget for a USB hub to complement the limited ports.
For offline editing, corporate video production, or as a second edit suite machine, the M4 Mac mini delivers professional-grade performance in a remarkably compact and quiet package.
AI Model Inference
The M4 Mac mini also reportedly holds up well for local AI workloads. Alex Cheema shared benchmarks of its inference performance on X (and the machines can be clustered).
How much faster is the new MacBook Pro for AI inference?
M4 Max is 27% faster with 72 tok/sec compared to 56 tok/sec of the M3 Max with MLX running Gemma 2 9B (4bit).
The 27% speedup is the same with Llama-3.2-1b, Llama-3.2-3b and others. Next up: @exolabs M4 cluster.
— Alex Cheema - e/acc (@alexocheema)
1:36 PM • Nov 8, 2024
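If you want to reproduce this kind of tokens-per-second number on your own Apple silicon, a rough sketch with Apple's mlx-lm package looks like the following. The model ID is an assumption (any 4-bit MLX community build of Gemma 2 9B should behave similarly), and the measured rate lumps prompt processing in with generation, so treat it as a ballpark rather than a strict benchmark:

```python
# A rough, hypothetical benchmark sketch using Apple's mlx-lm package.
# Assumption: a 4-bit MLX build of Gemma 2 9B from the mlx-community hub.
import time

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/gemma-2-9b-it-4bit")

prompt = "Explain virtual production in one paragraph."

start = time.perf_counter()
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
elapsed = time.perf_counter() - start

# Count generated tokens to get a rough tokens-per-second figure.
n_tokens = len(tokenizer.encode(text))
print(f"~{n_tokens / elapsed:.1f} tok/sec over {n_tokens} tokens")
```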
Canon’s new VR Lens (and prototype camera)
Canon has announced the RF-S7.8mm F4 STM DUAL lens, a new tool for VR and 3D content creators. This lens, compatible with the Canon EOS R7 camera, aims to simplify the process of capturing immersive content for social media creators and enthusiast videographers.
Behind the Scenes
The lens offers a 7.8mm focal length and 60-degree angle of view, ideal for capturing detailed 3D content
It operates similarly to traditional 2D RF-mount lenses, making it user-friendly for newcomers to VR production
Users can convert footage to various 3D formats using the EOS VR Plug-in for Adobe Premiere Pro or EOS VR Utility software
The lens is designed to work with devices like Apple Vision Pro and Meta Quest 3
High-speed autofocus and high-resolution image sensor, combined with Canon's color science, ensure high-quality VR and spatial video capture
Priced at $449.99, it's set to launch in November 2024
Canon also teased a prototype 250MP camera at the 2024 Chinese CPSE show, potentially offering unprecedented image detail for future productions.
AI Tools
MuVi framework offers enhanced cohesion between video and music by analyzing visual content and generating synchronized audio.
Addy Ghani demonstrates how virtual production techniques can be enhanced in post-production using AI tools like Runway's Gen-3 Alpha Video to Video.
Analyzing fair use for generative AI is complex, weighing market impact, transformative use, and even information theory as a way to quantify how much copyrighted material a model actually uses.
The production of "Con Job" demonstrates how integrating AI into filmmaking workflows can enhance creativity and efficiency in areas like set design and prop creation.
Moondream secures $4.5 million in funding to develop a compact, efficient vision-language AI model that performs comparably to much larger models while running locally on devices.
Google is developing "Project Jarvis," an AI system that can perform web-based tasks for users through a browser interface.
We’re doubling down on real humans, real camera control, and now—real places!
Because AI is bad at understanding exactly what you want from a text prompt.
You can capture 3D scans of the real world with something like @LumaLabsAI to generate a Gaussian splat.
Then, simply… x.com/i/web/status/1…
— Skyglass (@skyglassapp)
6:53 PM • Nov 8, 2024
🎬 Ridley Scott is exploring AI applications in animation to potentially reduce production time and costs.
👵🏻 Robert Zemeckis' new film "Here" utilizes advanced AI-assisted aging and de-aging technology to depict characters across multiple life stages.
🏀 A large-scale video project titled "Summon the Wave" aims to elevate the LA Clippers' brand storytelling at the new Intuit Dome.
👔 Virtual Production Gigs
Technical Program Manager
Mo-Sys Engineering Ltd
Production Coordinator / Video Systems Engineer
Phil Galler
Virtual Production Internship
Orbital Studios
Virtual Production Technician
Film Post
📆 Upcoming Events
November 13
Final Cut Pro Creative Summit 2024
November 14
Content Creator Day at Hot Rod Cameras
November 20-23
Flux Festival
November 26
Volumetric Media Interoperability Town Hall
December 8
LDI Conference & Tradeshow 2024
View the full event calendar and submit your own events here.
Thanks for reading VP Land!
Have a link to share or a story idea? Send it here.
Interested in reaching media industry professionals? Advertise with us.