NVIDIA's NIM Microservices Brings Gen AI to Physical Landscapes

NVIDIA announced new generative physical AI advancements at SIGGRAPH, including NVIDIA Metropolis reference workflows for building interactive visual AI agents and NVIDIA NIM microservices that help developers train physical machines to handle complex tasks more effectively.

Behind the Scenes

  • Three new fVDB NIM microservices support fVDB, NVIDIA's new deep learning framework for large-scale 3D worlds.

  • USD Code, USD Search, and USD Validate NIM microservices target Universal Scene Description (OpenUSD): USD Code answers OpenUSD questions and generates Python code, USD Search finds OpenUSD assets from natural-language queries, and USD Validate checks files for OpenUSD compatibility.

  • NVIDIA's OpenUSD NIM microservices, combined with the first generative AI models for OpenUSD development, let developers bring generative AI copilots and agents into USD workflows (see the sketch after this list).

  • NIM microservices tailored for physical AI support speech, translation, vision, intelligence, realistic animation, and behavior.

  • Generative AI-powered visual AI agents, built using vision language models (VLMs), are being deployed in hospitals, factories, warehouses, retail stores, airports, and traffic intersections.

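To make the copilot idea concrete, the sketch below sends a prompt to a USD Code-style NIM microservice through an OpenAI-compatible chat endpoint. The base URL, model identifier, and prompt are illustrative assumptions for this example, not values taken from the announcement.

```python
# Minimal sketch: querying a USD Code-style NIM microservice via an
# OpenAI-compatible chat endpoint. Endpoint URL and model id are assumed
# placeholders, not confirmed values from NVIDIA's announcement.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM endpoint
    api_key="YOUR_NVIDIA_API_KEY",                   # placeholder credential
)

response = client.chat.completions.create(
    model="nvidia/usdcode",  # hypothetical model id for the USD Code NIM
    messages=[
        {
            "role": "user",
            "content": "Write OpenUSD Python that creates a stage with a "
                       "cube prim at /World/Crate.",
        }
    ],
    temperature=0.2,
)

# The reply would contain generated OpenUSD Python code or an explanation.
print(response.choices[0].message.content)
```

Because NIM microservices present an OpenAI-compatible interface, the same client pattern works whether the service runs in NVIDIA's API catalog or is self-hosted.
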
Final Take

NVIDIA's generative physical AI advancements are transforming industries by enabling more efficient, safer, and more secure operations.

As companies like Foxconn and Pegatron adopt these technologies to streamline their manufacturing processes, the gap between simulation and reality is narrowing. With synthetic data generation, developers can build robust datasets for training physical AI models, leading to more adaptable, better-performing solutions across industries and use cases (a generic sketch of this approach follows).
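
As a generic illustration of the synthetic data idea (not NVIDIA's own tooling), the sketch below uses the OpenUSD Python API to procedurally generate one randomized scene that could serve as a single sample in a training dataset. Prim names, object counts, and value ranges are made up for the example.

```python
# Generic sketch: building a randomized OpenUSD scene as one synthetic
# training sample. Names, counts, and ranges are arbitrary illustration
# values, not part of NVIDIA's announced workflow.
import random
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("synthetic_sample_000.usda")
UsdGeom.Xform.Define(stage, "/World")

# Scatter a handful of crates at random positions on the ground plane.
for i in range(10):
    crate = UsdGeom.Cube.Define(stage, f"/World/Crate_{i}")
    UsdGeom.XformCommonAPI(crate.GetPrim()).SetTranslate(
        Gf.Vec3d(random.uniform(-5.0, 5.0), 1.0, random.uniform(-5.0, 5.0))
    )

stage.GetRootLayer().Save()
```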
