GPU Power Crisis: How One Startup Plans to Stop AI's Biggest Hidden Waste
Every second that an AI data center runs, electricity is being thrown away. Not a little — up to 30% of the power feeding some of the world's most expensive computer chips simply vanishes due to inefficiencies that no one has fully solved. That is the problem a new Israeli startup is now stepping forward to fix, and the timing could not be more urgent.
Credit: Niv-AI
The Hidden Problem Costing AI Companies Billions
Most people outside the data center industry do not realize that running powerful AI chips is not just a matter of plugging them in and letting them run. Modern GPUs — the processors that train and run advanced AI models — create rapid, unpredictable surges in power demand. These surges happen at the millisecond scale, faster than most power management systems can detect or respond to.
When thousands of GPUs work together on tasks like training a large language model, they constantly shift between heavy computation and communication with neighboring chips. Each switch triggers a small spike in power demand. Multiply that by tens of thousands of chips operating simultaneously, and data center operators face a power environment that swings wildly from moment to moment.
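The aggregate effect of those phase switches can be sketched in a few lines of code. This is purely illustrative: the per-phase wattages, switch probability, and tick length below are invented for the example, not figures from Niv-AI or Nvidia.

```python
import random

# Invented, illustrative per-GPU draw in each phase (watts).
COMPUTE_W = 700.0  # heavy matrix math
COMM_W = 300.0     # waiting on data from neighboring chips

def cluster_draw(num_gpus: int, steps: int, seed: int = 0) -> list[float]:
    """Simulate total cluster power draw over `steps` millisecond ticks.

    Each GPU randomly flips between compute and communication phases,
    so the aggregate draw swings from one tick to the next.
    """
    rng = random.Random(seed)
    phases = [True] * num_gpus  # True = compute, False = communication
    totals = []
    for _ in range(steps):
        for i in range(num_gpus):
            if rng.random() < 0.2:  # assumed ~20% chance of a switch per tick
                phases[i] = not phases[i]
        totals.append(sum(COMPUTE_W if p else COMM_W for p in phases))
    return totals

draw = cluster_draw(num_gpus=1000, steps=50)
print(f"peak-to-trough swing: {max(draw) - min(draw):.0f} W")
```

Even with uncorrelated switching, the total drifts by tens of kilowatts; in real training runs the phases are synchronized across chips, which makes the swings far sharper.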
To cope, many operators either pay for expensive temporary energy storage to absorb the surges, or they deliberately slow down their GPU clusters to reduce the risk of drawing more power than they have available. Both approaches waste money. The chips cost tens of thousands of dollars each, and throttling them means operators are paying for performance they never actually get.
"There is so much power squandered in these AI factories," said Jensen Huang, CEO of Nvidia, during a keynote at the company's annual GTC conference. Nvidia went further, declaring that every unused watt represents lost revenue. That message resonated across an industry already under pressure to justify the enormous capital costs of AI infrastructure.
What Niv-AI Is Actually Building
The founding team at Niv-AI — CEO Tomer Timor and CTO Edward Kizis — identified this problem as one that required fundamentally new measurement tools before any software fix could work. You cannot manage what you cannot accurately see, and existing data center power monitoring was simply not granular enough.
The company's first product is a set of rack-level sensors capable of detecting GPU power usage at millisecond intervals. These sensors are being deployed on GPU hardware that Niv-AI owns outright, as well as with design partners who are helping validate the technology in real operating environments. The goal at this stage is not yet to fix anything — it is to build the most detailed picture of AI workload power behavior that has ever existed.
By collecting data across different types of deep learning tasks, the team aims to map the specific power signatures of different AI operations. Training a model looks different from running inference. Communicating between chips looks different from performing matrix multiplications. Understanding those distinctions at granular detail is what will make future management tools actually effective.
Once that data foundation is in place, Niv-AI plans to build an AI model of its own — a system that can predict power demand spikes before they happen and coordinate load distribution across an entire data center. The team describes it as a copilot for data center engineers, something that works alongside human operators rather than replacing their judgment.
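A production system would draw on far richer signals than raw wattage, but the core idea of flagging a spike before it lands can be sketched with a toy predictor. Everything here is an assumption for illustration: the linear extrapolation, the power budget, the window size, and the lookahead gain are arbitrary, not Niv-AI's design.

```python
from collections import deque

def spike_alert(samples, budget_w, window=5, slope_gain=3.0):
    """Flag ticks where a naive linear extrapolation of recent draw
    would exceed the power budget a few ticks ahead.

    `samples` is an iterable of per-tick total draw in watts. This is
    a toy stand-in for the kind of forecasting a real copilot model
    would perform with much richer inputs.
    """
    recent = deque(maxlen=window)
    alerts = []
    for t, w in enumerate(samples):
        recent.append(w)
        if len(recent) == window:
            slope = (recent[-1] - recent[0]) / (window - 1)
            projected = w + slope_gain * slope  # extrapolate the trend
            if projected > budget_w:
                alerts.append(t)
    return alerts

trace = [500, 505, 510, 530, 580, 660, 760, 770, 765]
print(spike_alert(trace, budget_w=800))  # → [6, 7, 8]
```

The point of the sketch is the shape of the problem: the alert fires at tick 6, before the trace itself crosses the 800 W budget, which is the window in which an operator (or an automated copilot) could shed or redistribute load.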
Why the Investors Are Paying Attention
Niv-AI's $12 million seed round drew backing from a notable roster of investors, including Glilot Capital, Grove Ventures, Arc VC, Encoded VC, Leap Forward, and Aurora Capital Partners. The company has not disclosed its valuation.
Lior Handelsman, a partner at Grove Ventures and a board member at Niv-AI, put the problem in stark terms: "We just can't continue building data centers the way we build them now." That statement carries weight coming from someone sitting at the intersection of infrastructure investment and technology. It signals that the smart money is not just betting on faster chips — it is betting on the systems that make existing chips work better.
The investment also reflects a broader shift in how the AI industry is thinking about its physical infrastructure. For several years, the dominant narrative was that more compute equals better AI. That is still true to a degree, but the marginal returns on raw hardware investment are beginning to collide with hard physical limits — including the limits of power grids, cooling systems, and the basic economics of energy. Startups that can squeeze more output from existing infrastructure are becoming increasingly attractive.
AI's Electricity Problem Is Accelerating
The power challenge Niv-AI is targeting is not a niche technical issue. It sits at the center of one of the most consequential infrastructure buildouts in modern history. Data centers now consume a significant and rapidly growing share of global electricity, and AI workloads are the fastest-growing segment of that demand.
Grid operators around the world are struggling to keep up. In some regions, new data center construction is being delayed or rejected because local power infrastructure simply cannot support the load. In others, operators are paying premium prices for energy during peak hours, directly cutting into the economics of AI development.
Throttling GPU performance by up to 30% is not just a financial inefficiency — it means models take longer to train, inference runs slower, and the productivity of the entire AI industry is cut by a margin no operator would accept if it appeared as an explicit line item. Solving even a portion of that waste could have compounding effects across every organization running large-scale AI infrastructure.
What Comes Next for Niv-AI
The company's next milestone is deploying an operational system in a handful of United States data centers. That transition — from sensor deployment and data collection to an actively managed, AI-driven power optimization system — represents the critical proof point that investors and potential customers will be watching closely.
If the copilot model works at scale, the implications extend well beyond simple cost savings. Data centers that can manage power more precisely may be able to increase the density of GPUs they operate without requiring more grid capacity. That could meaningfully expand effective AI compute without requiring new physical infrastructure, a prospect that has obvious appeal in a market where construction timelines and power availability are already becoming bottlenecks.
The startup is still in its earliest stages, and the path from sensor data to a production-grade AI power management system is long. But the problem Niv-AI is tackling is real, the cost of not solving it is measurable, and the team has the backing and the technical foundation to take a serious run at it. In an industry accustomed to betting on the next chip generation, a company betting on how to actually use the chips you already have is a genuinely different kind of bet — and one that the AI infrastructure world increasingly needs someone to win.