Disney, DeepMind, and NVIDIA just open-sourced a GPU physics engine for robots
Snaplyze Digest
GitHub Repos · Intermediate · 3 min read · Mar 19, 2026 · Updated Mar 20, 2026

“Three tech giants just open-sourced the physics engine they use to train robots — and it runs on your gaming GPU.”

In Short

Newton hit 3,000 GitHub stars after its v1.0.0 release on March 10, 2026 — a GPU-accelerated physics engine backed by Disney Research, Google DeepMind, and NVIDIA that runs robot simulations at scale. It builds on NVIDIA Warp and integrates MuJoCo Warp as its primary solver, giving you differentiable physics, cloth/cable/softbody simulation, and OpenUSD support in one Apache-2.0 licensed package. The Linux Foundation hosts it, meaning it's built for long-term community maintenance, not a corporate experiment that gets abandoned.

robotics · physics-simulation · gpu · nvidia-warp · mujoco
Why It Matters
The practical pain point this digest is really about.

You know the feeling: you want to train a robot in simulation, but MuJoCo is too slow for parallel environments, Isaac Sim requires an enterprise license, and PyBullet doesn't do differentiable physics. Until now you had to pick between speed, accuracy, and gradient-based learning, and you rarely got all three. Newton gives you GPU-accelerated simulation with multiple solvers (MuJoCo, XPBD, VBD, Featherstone), differentiable physics for ML pipelines, and cloth/cable/softbody support that most robotics simulators ignore entirely.

How It Works
The mechanism, architecture, or workflow behind it.

Think of it like a universal adapter for physics simulation. Newton sits on top of NVIDIA Warp (a GPU compute framework) and provides multiple physics solvers you can swap out. You build a model using the Python API or import from URDF/MJCF/USD files. The ModelBuilder creates the simulation world, then you pick your solver — MuJoCo for rigid body dynamics, XPBD for particles and soft bodies, VBD for cloth. Each solver runs on GPU, so you can simulate thousands of environments in parallel. The differentiable design means gradients flow through the physics step, letting you train neural networks directly from simulation data.
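To make the stepping idea concrete, here is a minimal sketch of what a SemiImplicit-style solver does under the hood, vectorized over a batch of environments with NumPy. This is purely illustrative: it is not Newton's API, and all function and variable names are invented for the example.

```python
import numpy as np

def semi_implicit_step(pos, vel, force, mass, dt):
    """One semi-implicit Euler step over a batch of environments.

    pos, vel, force: arrays of shape (num_envs, num_bodies, 3).
    Velocity is updated first, then position uses the *new* velocity,
    which is the defining trait of semi-implicit (symplectic) Euler.
    """
    vel = vel + (force / mass) * dt   # integrate acceleration into velocity
    pos = pos + vel * dt              # integrate the updated velocity
    return pos, vel

# Drop a batch of 4 environments, each with one point mass, under gravity.
num_envs = 4
pos = np.zeros((num_envs, 1, 3))
vel = np.zeros((num_envs, 1, 3))
gravity = np.array([0.0, 0.0, -9.81])
mass = 1.0
dt = 1.0 / 60.0

for _ in range(60):  # simulate one second
    force = np.broadcast_to(gravity * mass, pos.shape)
    pos, vel = semi_implicit_step(pos, vel, force, mass, dt)

print(pos[0, 0, 2])  # height after 1 s of free fall (about -4.99 m)
```

The batch dimension is the whole trick: because every operation is an array operation, the same code that steps 4 environments steps 4,096, and on a GPU (via Warp kernels in Newton's case) those environments run in parallel.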

Key Takeaways
7 fast bullets that make the core value obvious.
  • Multiple solver backends — why YOU care: Swap between MuJoCo, XPBD, VBD, Featherstone, and SemiImplicit solvers depending on your simulation needs. Rigid bodies? MuJoCo. Cloth and soft bodies? XPBD/VBD. One API, multiple backends.
  • Differentiable simulation — why YOU care: Gradients flow through the physics step, enabling end-to-end training of neural network policies. Train robot controllers with backpropagation through time, not just reinforcement learning.
  • Cloth, cable, and softbody support — why YOU care: Most robotics simulators only handle rigid bodies. Newton simulates deformables out of the box — essential for manipulation tasks, wearable robots, and cable management.
  • OpenUSD import/export — why YOU care: Pixar's Universal Scene Description is becoming the standard for 3D asset interchange. Load production assets directly, export simulations for rendering in Omniverse or Blender.
  • GPU-accelerated parallel simulation — why YOU care: Run thousands of environments simultaneously on a single GPU. Train reinforcement learning policies in hours instead of days. Warp handles the CUDA kernel compilation for you.
  • Inverse kinematics built-in — why YOU care: Compute joint configurations to reach target poses without external IK libraries. Works with the same model you use for dynamics simulation.
  • Sensor simulation — why YOU care: Contact sensors, IMU, and tiled camera sensors generate realistic observations for training perception and control policies. Sim-to-real transfer starts with realistic sensor data.
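The differentiable-simulation bullet above can be made concrete with a toy example: a gradient flows back through a rollout to fit a launch speed to a target height. The Jacobian is derived by hand here because the toy dynamics are linear in the input; in a real differentiable engine autodiff would produce it. All names and numbers are invented for the example and are not taken from Newton.

```python
def rollout(v0, steps=60, dt=1.0 / 60.0, g=9.81):
    """Final height of a point mass launched upward with speed v0
    (semi-implicit Euler, so the result is differentiable in v0)."""
    h, v = 0.0, v0
    for _ in range(steps):
        v -= g * dt
        h += v * dt
    return h

target = 2.0   # desired height after one second of simulation
v0 = 0.0       # initial guess for the launch speed
lr = 0.5

# d(rollout)/d(v0) = steps * dt, because the dynamics are linear in v0;
# in a differentiable engine this Jacobian comes out of autodiff.
grad_h_v0 = 60 * (1.0 / 60.0)

for _ in range(100):
    loss_grad = 2.0 * (rollout(v0) - target)  # d/dh of (h - target)^2
    v0 -= lr * loss_grad * grad_h_v0          # gradient descent on v0

print(round(v0, 3))  # → 6.987, the speed that reaches 2 m
```

This is "training through the physics step" in miniature: the loss is defined on the simulation outcome, and the parameter being optimized sits at the start of the rollout. Replace the scalar with a neural network policy and the hand-derived Jacobian with autodiff, and you have the backpropagation-through-time workflow the digest describes.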
Should You Care?
Audience fit, decision signal, and the original source in one place.

Who It Is For

If you're a robotics researcher or engineer doing reinforcement learning, sim-to-real transfer, or soft body manipulation — this is for you. Especially valuable if you need parallel GPU simulation, differentiable physics, or deformable object simulation that MuJoCo alone doesn't provide. Not useful yet if you don't have an NVIDIA GPU (CPU mode exists but is slower) or if you only need simple rigid-body simulation.

Worth Exploring?

Yes, absolutely try it. The backing by Disney Research, DeepMind, and NVIDIA plus Linux Foundation hosting signals this isn't a side project — it's infrastructure. The v1.0.0 release is production-ready, with 50+ examples covering robots, cloth, cables, soft bodies, and differentiable simulation. The main catch: an NVIDIA GPU is required for performance (CPU mode works but defeats the purpose of the parallelism), and the project is young, with 145 open issues indicating active development and rough edges.
