Ami Luttwak: How AI Is Transforming Cyberattacks

Wiz chief technologist Ami Luttwak on how AI is transforming cyberattacks

Lead: Wiz chief technologist Ami Luttwak on how AI is transforming cyberattacks — that’s the warning and explanation security teams need as organizations rush to adopt AI-driven development and automation. Luttwak, speaking with TechCrunch on Equity, explains how AI speeds delivery but also widens attackers’ opportunities.

Ami Luttwak: How AI Is Transforming Cyberattacks

Image Credits:Jaque Silva/SOPA Images/LightRocket / Getty Images

Why Wiz Chief Technologist Ami Luttwak On How AI Is Transforming Cyberattacks Matters

Ami Luttwak, chief technologist at Wiz, points out that cybersecurity is a “mind game” — and the arrival of AI gives both defenders and attackers new moves. As teams use vibe coding, AI agents, and prompt-based tools, mistakes in implementation can create simple but dangerous openings.

Quick context: Wiz was acquired by Google earlier in 2025 in a high-profile deal, and Luttwak’s role gives him front-row insight into how cloud and AI changes shift risk.

Key Findings From Wiz Tests

  • Vibe-coded apps often had insecure authentication by default.

  • AI agents follow instructions literally; if not told to secure things, they won’t.

  • Attackers are using prompts, vibe coding, and AI agents to scale and automate exploits.

How AI Changes Developer Tradeoffs

Developers use AI to ship faster. That speed improves productivity but introduces new security debt — rushed implementations, copied templates, and missing checks. Luttwak says teams balance speed vs. safety, and often speed wins.

Attackers Level Up With AI

Attackers aren’t just copying developer techniques — they’re building AI-powered toolchains too. According to Luttwak, adversaries now:

  • Use prompts to probe systems and extract secrets.

  • Deploy their own agents to automate lateral movement and scaling.

  • Exploit predictable outputs from common AI tooling and integrations.

Practical Steps Security Teams Can Take

  1. Treat AI outputs as untrusted inputs. Validate and test everything generated by AI agents.

  2. Harden authentication and secrets management. Assume default configurations can be insecure.

  3. Integrate security checks into AI-assisted workflows. Shift-left security into the coding loop.

  4. Simulate attacker prompts. Test how your AI tools respond to maliciously crafted prompts.

What Leaders Should Ask Their Teams

  • Did the AI-generated code include secure authentication by design?

  • Are we logging and monitoring AI agent activity?

  • Have we run red-team tests that use prompts and automated agents?

Key Takeaways

  • AI expands the attack surface but also offers defenders automation advantages.

  • Human oversight remains crucial; AI follows instructions — if you don’t tell it to be secure, it won’t be.

  • Proactive testing and secure-by-default tooling are the fastest way to reduce risk.

“If there’s a new technology wave coming, there are new opportunities for attackers to start using it,” Luttwak told TechCrunch. His point: protecting AI-first systems requires both updated controls and a change in mindset.

Post a Comment

Previous Post Next Post