VP Land

Genmo's Mochi 1: Open-Source AI Video Model

Genmo has unveiled Mochi 1, a new open-source AI video generation model, while simultaneously announcing a $28.4 million Series A funding round led by NEA.

The model marks a notable advance in AI video generation, with strong prompt adherence and improved motion quality.

The Breakdown

  • The initial release generates 480p video content with plans for HD (720p) capabilities by year-end

  • Built on a novel Asymmetric Diffusion Transformer architecture, Mochi 1 is, at 10 billion parameters, the largest openly released video generation model

  • The model is released under the Apache 2.0 license, free for both personal and commercial use, and can be tried in Genmo's hosted playground at genmo.ai/play

  • Technical innovations include a video compression system that shrinks videos 128x for processing and full 3D attention over 44,520 video tokens

Key Limitations

  • Currently optimized for photorealistic content, not animated styles

  • Edge cases with extreme motion can produce minor warping and distortions

  • Resolution limited to 480p in the preview version

Final Take

This release marks a significant shift in the AI video generation landscape by making advanced capabilities openly accessible to creators and developers. The combination of substantial funding and an open-source approach points to a broader trend toward democratizing AI video tools, potentially giving smaller studios and independent creators access to high-quality video generation.

Future developments, including planned image-to-video features and enhanced motion control, could further reshape video production workflows.
