Code Verification Startup Qodo Raises $70M as AI-Generated Code Floods the Software Industry
Every month, AI coding tools generate billions of lines of code across the global software industry. But here is the uncomfortable truth developers are discovering: faster code does not mean better code. A startup called Qodo is betting its future, and now $70 million in fresh funding, on solving exactly that problem.
Credit: Yuichiro Chino / Getty Images
The Growing Crisis Nobody Wants to Talk About
There is a quiet crisis unfolding inside engineering teams at major companies around the world. AI tools promised to speed up software development, and they delivered. But speed introduced a new problem that few anticipated: how do you verify that billions of AI-generated lines of code actually work the way they should?
A recent industry survey revealed something striking. While 95% of developers admit they do not fully trust AI-generated code, only 48% consistently review it before committing it to production systems. That gap between awareness and action is not just a workflow problem. It is a ticking risk buried inside some of the world's most critical software systems.
Qodo, a New York-based startup founded in 2022, has built its entire business around closing that gap. The company develops AI agents focused on code review, testing, and governance. And with a newly announced $70 million Series B round led by Qumra Capital, it now has the resources to go after the problem at enterprise scale.
Why $70 Million Flowed Into a Code Verification Startup
The round brings Qodo's total funding to $120 million, signaling serious investor confidence in the code verification space. Joining Qumra Capital in the Series B are Maor Ventures, Phoenix Venture Partners, S Ventures, Square Peg, Susa Ventures, TLV Partners, Vine Ventures, and notable individual investors including a former OpenAI executive and a Meta board member.
The investor lineup reflects how seriously the technology world is taking AI-generated code quality as an enterprise risk category. When fast-growth companies and legacy institutions alike are deploying AI coding tools at scale, the liability of unchecked output grows with every deployment.
Qodo's pitch to investors is straightforward but technically deep. Most AI review tools look only at what changed in a piece of code. Qodo's system analyzes how those changes affect entire software systems, taking into account an organization's historical standards, internal context, and risk tolerance. That distinction matters more than it might seem.
The Founder Who Saw This Coming Before ChatGPT Existed
Itamar Friedman, Qodo's founder and CEO, has an unusual vantage point on this problem. He previously co-founded Visualead and later led the machine vision business at Alibaba after the company acquired Visualead. Before that, he worked at Mellanox, the networking hardware firm later acquired by Nvidia.
It was at Mellanox that something clicked for Friedman. Working on automating hardware verification using machine learning, he noticed that generating systems and verifying systems require fundamentally different approaches, different tools, and an entirely different way of thinking. That insight would eventually become the foundation of Qodo.
By the time he reached Alibaba's research division, he was watching AI evolve rapidly toward systems that could reason over human language. In 2021 and 2022, just ahead of the public release of GPT-3.5, it became clear to him that AI would soon generate a massive share of the world's content, particularly code. He founded Qodo in 2022, months before ChatGPT changed the conversation for everyone else.
The timing proved prescient. Today, Friedman's early conviction that code generation and code verification would require entirely separate systems has become a consensus view inside engineering organizations scrambling to manage the output of AI coding tools.
Why Large Language Models Alone Cannot Solve the Code Quality Problem
Here is where Qodo's approach diverges from much of the AI tooling market. Most AI-powered code review products are built primarily around large language models. Friedman believes that is not sufficient for the quality and governance problem enterprises actually face.
"Quality is subjective," Friedman explained. "It depends on organizational standards, past decisions, and tribal knowledge. An LLM can't fully understand that context. It's like taking a great engineer from one company and asking them to review code at another. They lack the internal context."
That framing captures the real challenge facing engineering leaders right now. A language model trained on public code repositories does not automatically understand why a particular team at a particular company made a specific architectural decision two years ago. But that historical context is precisely what a meaningful code review requires.
Qodo addresses this by building systems that learn each organization's unique definition of code quality over time, factoring in the decisions, standards, and institutional knowledge that no general-purpose model can supply on its own.
The Benchmark That Turned Heads in the Industry
Qodo is not asking the market to take its claims on faith. The company recently topped a competitive code review benchmark called Martian's Code Review Bench, scoring 64.3%. That score placed it more than 10 points ahead of the next competitor in the field, and a full 25 points ahead of a competing AI code review product from a well-known AI lab.
The benchmark specifically tests a system's ability to identify tricky logic bugs and cross-file issues, the kinds of problems that slip past surface-level review and only become visible when you understand how different parts of a system interact. It also measures whether a tool generates useful findings without burying developers in noise, a problem that plagues many automated review systems.
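To make the cross-file problem concrete, here is a purely illustrative sketch (not Qodo's tooling; all file names and functions are hypothetical). A change to one module looks harmless in isolation, but an unchanged caller elsewhere still relies on the old behavior, so the bug is only visible when the review spans both files:

```python
# billing.py (the changed file): in isolation, this diff looks fine.
# The change: order_total now returns cents instead of dollars.
def order_total(items):
    """Return the order total in cents (previously returned dollars)."""
    return sum(item["price_cents"] for item in items)


# checkout.py (unchanged, so it never appears in the diff): it still
# assumes order_total returns dollars and converts to cents itself,
# so the customer is now charged 100x the intended amount.
def charge(items):
    return order_total(items) * 100  # "dollars" -> cents conversion


items = [{"price_cents": 250}]   # one $2.50 item
print(charge(items))             # 25000 cents ($250) instead of 250
```

A reviewer (human or AI) who sees only the `billing.py` diff has no reason to flag it; catching the overcharge requires understanding how `checkout.py` consumes that function, which is exactly the cross-file reasoning the benchmark probes.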
In the past month alone, Qodo launched Qodo 2.0, described as a multi-agent code review system, and released tools that adapt to each enterprise's code quality standards. The product velocity matches the urgency of the market moment.
The Enterprise Clients Validating the Market Thesis
Qodo's customer list adds credibility to the investment thesis. The company is already working with major global enterprises including Nvidia, Walmart, Red Hat, Intuit, and Texas Instruments. High-growth technology firms like Monday.com and JFrog are also among its clients.
That roster spans industries from retail to semiconductor manufacturing to enterprise software, which suggests that the code verification problem is not specific to any one vertical. It is a universal challenge for any organization that has adopted AI coding tools and now needs assurance that what those tools produce is safe to ship.
For enterprises operating in regulated industries or managing critical infrastructure, the stakes around code quality and governance are particularly high. A bug in consumer-facing software is costly. A bug in systems managing financial transactions, semiconductor design, or supply chain logistics can be catastrophic.
A New Phase of AI Software Development Is Arriving
Friedman frames the current moment as a genuine inflection point in how the industry thinks about software development. He traces a clear arc of defining moments, from the arrival of AI code completion tools, to the public launch of conversational AI, to the current era of fully automated task completion.
What comes next, in his view, is a shift from stateless AI systems to stateful ones. Systems that do not just respond to a single prompt but that accumulate context, learn from an organization's history, and develop something that approaches institutional wisdom over time.
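The stateless/stateful distinction can be sketched in a few lines of toy Python (a conceptual illustration only; the class and method names are invented for this example and do not reflect any real product):

```python
class StatelessReviewer:
    """Sees only the current diff; every review starts from zero context."""

    def review(self, diff):
        return f"reviewed {len(diff)} chars with no prior context"


class StatefulReviewer:
    """Accumulates organizational decisions and reviews against them."""

    def __init__(self):
        self.decisions = []  # past standards, conventions, tribal knowledge

    def record(self, decision):
        self.decisions.append(decision)

    def review(self, diff):
        return (f"reviewed {len(diff)} chars against "
                f"{len(self.decisions)} recorded team decisions")


stateful = StatefulReviewer()
stateful.record("all currency values are stored as integer cents")
stateful.record("public APIs require backward-compatible changes")
print(stateful.review("def charge(items): ..."))
```

The stateless reviewer gives the same answer to the same diff no matter which organization runs it; the stateful one produces judgments shaped by what the team has previously decided, which is the property Friedman is describing.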
"Every year has had a defining moment," Friedman said. "Now we're entering a new phase: moving from stateless AI to stateful systems, from intelligence to artificial wisdom. That's what Qodo is built for."
Whether or not that framing proves accurate, the underlying problem it describes is real and growing. As AI coding tools become standard equipment for engineering teams everywhere, the organizations that figure out how to verify and govern that output at scale will hold a significant advantage over those that do not.
Qodo is making a clear, well-funded bet that verification is not an afterthought to the AI coding revolution. It is the next chapter.
What This Means for the Software Industry
The $70 million raised by Qodo is more than a funding milestone. It is a signal about where enterprise software investment is flowing as the first wave of AI coding adoption matures into something more complex.
The easy part, generating code faster, is largely solved. The harder part, knowing whether that code can be trusted at scale, inside real organizations with real standards and real history, is just beginning to receive the serious investment and engineering attention it deserves.
For developers, engineering leaders, and enterprise technology buyers watching this space, Qodo's trajectory offers an early glimpse of what the next phase of AI-powered software development infrastructure might look like.
The question of how to verify what AI builds is no longer a niche concern. It is becoming one of the defining challenges of software engineering in 2026.