30% of data center power sits unused — this startup just raised $12M to unlock it
Snaplyze Digest
Tech Products · Intermediate · 3 min read · Mar 18, 2026 (updated Mar 20, 2026)


“Data centers waste 30% of their power capacity because GPU spikes are too fast to predict — Niv-AI just raised $12M to fix it.”

In Short

Data centers leave up to 30% of their contracted power permanently stranded because GPU power spikes are too unpredictable to manage safely. Niv-AI just exited stealth with $12M to capture the unique 'electrical fingerprint' of AI workloads using millisecond-level sensors, then orchestrate power in real time. The goal: reclaim stranded capacity without adding physical hardware. The company was founded by Israeli engineers, is backed by Glilot Capital and Grove Ventures, and expects operational systems in US data centers within 6-8 months.

ai-infrastructure · data-center · gpu · energy · hardware
Why It Matters
The practical pain point this digest is really about.

You know that feeling when you provision expensive GPU clusters but your data center tells you to throttle back because of 'power constraints'? Here's what's actually happening: modern GPUs create violent, millisecond-level power surges as they switch between computation and communication. Data centers can't predict these spikes, so they assume the worst-case scenario and heavily buffer power usage. Before: you pay for 100MW but can only safely use 70MW because the grid can't handle surprise surges. Now: Niv-AI maps your workload's electrical fingerprint and orchestrates power in real time, unlocking that stranded 30%.
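The 100MW/70MW gap above can be sketched as simple headroom arithmetic. This is an illustrative model only (the spike ratio is an assumption derived from the article's numbers, not Niv-AI's actual methodology):

```python
# Illustrative stranded-capacity arithmetic, using the article's 100 MW / 70 MW
# example. The worst_case_spike_ratio is an assumed value, not a measured one.

def usable_capacity(contracted_mw: float, worst_case_spike_ratio: float) -> float:
    """Capacity an operator can safely commit if every rack is assumed to
    spike to worst_case_spike_ratio times its steady-state draw at once."""
    return contracted_mw / worst_case_spike_ratio

contracted = 100.0           # MW contracted from the utility
spike_ratio = 100.0 / 70.0   # assumed worst-case spike: ~1.43x steady draw

safe = usable_capacity(contracted, spike_ratio)
stranded = contracted - safe

print(f"safe to commit: {safe:.0f} MW, stranded: {stranded:.0f} MW")
```

With these assumed numbers, the operator can only commit about 70 MW of the 100 MW it pays for; tightening the spike assumption with real measurements is exactly the headroom Niv-AI claims to reclaim.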

How It Works
The mechanism, architecture, or workflow behind it.

Think of it like a heart monitor for your data center. Niv-AI deploys high-resolution sensors at the rack level that capture power usage at millisecond granularity — standard facility meters completely miss these rapid transients. This data reveals the unique 'electrical fingerprint' of different AI workloads: training GPT-4 looks different from running inference on Llama. An AI model then learns to predict these patterns and synchronize power loads across the facility. Instead of buffering for worst-case spikes, the system actively orchestrates compute to smooth out demand — like a conductor coordinating instruments so the orchestra never gets too loud for the concert hall.
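Why millisecond granularity matters can be shown with a toy trace: a short GPU surge that a slow facility meter averages almost out of existence. Everything here (the sample rate, the wattages, the 50 ms spike) is a synthetic illustration, not Niv-AI sensor data:

```python
# Toy demonstration: a 1 kHz power trace with a 50 ms spike to 900 W that
# a meter averaging over 1-second windows barely registers.
# All signal shapes and numbers are illustrative assumptions.

def coarse_average(samples, window):
    """Average consecutive fixed-size windows -- what a slow facility meter sees."""
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples), window)]

# 2 seconds of data at 1 kHz: steady 300 W with a 50 ms surge to 900 W
trace = [300.0] * 2000
for i in range(1000, 1050):
    trace[i] = 900.0

peak_true = max(trace)                            # the real 900 W transient
peak_metered = max(coarse_average(trace, 1000))   # the spike averaged away

print(f"true peak: {peak_true} W, 1s-meter peak: {peak_metered} W")
```

The coarse meter reports a peak of 330 W against a true transient of 900 W, which is why rack-level millisecond sensing sees fingerprints that standard facility metering cannot.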

Key Takeaways
7 fast bullets that make the core value obvious.
  • Millisecond-level power sensing — why YOU care: Standard meters sample every few seconds and miss the violent transients that trip circuits. Niv-AI's rack-level sensors capture data 1,000+ times per second, revealing patterns that coarser sampling hides.
  • Electrical fingerprinting — why YOU care: Different AI workloads have distinct power signatures. Training creates different spikes than inference. Knowing these patterns lets you schedule workloads to avoid simultaneous spikes.
  • Real-time orchestration — why YOU care: Instead of passively monitoring, the system actively coordinates when workloads draw power. Think load balancing, but for electricity instead of network traffic.
  • AI-powered prediction — why YOU care: The system builds models on collected data to forecast power demand. This moves you from reactive (buffer for worst case) to proactive (schedule around predicted patterns).
  • No hardware addition required — why YOU care: The value comes from intelligence, not infrastructure. You're unlocking capacity you already paid for, not buying new equipment.
  • Grid-friendly power profiles — why YOU care: Utilities hate unpredictable loads. Niv-AI creates smoother demand curves, which could mean better rates and fewer grid interconnection headaches.
  • Copilot for data center engineers — why YOU care: The founders envision an AI assistant that helps engineers make real-time decisions about workload placement and power management.
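The orchestration and prediction bullets above boil down to one idea: if you can predict each workload's spike profile, you can stagger start times so spikes never stack past the facility cap. Here's a minimal greedy-scheduler sketch of that idea; the job names, profiles, and the algorithm itself are illustrative assumptions, not Niv-AI's actual system:

```python
# Hedged sketch of spike-aware orchestration: place each job at the earliest
# start slot whose predicted power profile keeps every time slot under the cap.
# Greedy first-fit scheduling; all names and numbers are illustrative.

def schedule(jobs, baseline, cap, horizon):
    """Return start slots for each job plus the resulting facility load curve."""
    load = [baseline] * horizon
    starts = {}
    for name, profile in jobs:  # profile: predicted MW draw per time slot
        for t in range(horizon - len(profile) + 1):
            window = load[t:t + len(profile)]
            if all(l + p <= cap for l, p in zip(window, profile)):
                for i, p in enumerate(profile):
                    load[t + i] += p
                starts[name] = t
                break
    return starts, load

jobs = [("train_a", [20, 35, 20]),   # spiky training job
        ("train_b", [20, 35, 20]),   # identical profile -- must not overlap
        ("infer_c", [10, 10])]       # smaller inference job

starts, load = schedule(jobs, baseline=40.0, cap=100.0, horizon=12)
print(starts)      # train_b is pushed one slot later so the 35 MW spikes don't stack
print(max(load))   # the combined curve stays under the 100 MW cap
```

Running the two training jobs simultaneously would peak at 110 MW; staggering them by one slot keeps the facility at 95 MW, which is the "conductor" behavior the article describes, just in miniature.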
Should You Care?
Audience fit, decision signal, and the original source in one place.

Who It Is For

If you're a data center operator running GPU clusters for AI training or inference — this is directly for you. Also relevant for ML infrastructure leads at companies building their own AI capacity, or anyone whose GPU utilization is bottlenecked by power constraints rather than compute. Not useful yet if you're running small-scale experiments in the cloud where power management is abstracted away.

Worth Exploring?

Yes — this addresses a real, expensive problem that every hyperscaler faces. The 30% stranded capacity number comes from actual operational constraints, not marketing fluff. The team combines low-level kernel developers, electrical engineers, and algorithm experts — exactly the mix needed for this problem. The main caveat: they're 6-8 months from operational systems in US data centers, so this is early-stage. But if you're planning data center buildouts or frustrated by power constraints, get on their radar now.
