
AI Joins the Attack: The Nx NPM Package Hack and How Zero Trust Can Stop It

Introduction

In late August 2025, developers were shaken by a software supply-chain attack on Nx, a popular open-source monorepo toolkit for JavaScript/TypeScript. Malicious code was slipped into several widely used Nx packages on npm – packages that see roughly 6 million downloads a week. This breach was especially alarming not just for its scale, but because the attackers augmented their malware with AI.

In this blog, we'll break down what happened in the Nx hack, why this AI-assisted attack is a wake-up call for software developers and IT professionals, and how a Zero-Trust, least-privilege approach to software execution can proactively defend against such novel threats.

The Nx NPM Package Compromise: What Happened?

On August 27, 2025, multiple npm packages in the @nx scope were found to contain malicious code. The attackers managed to publish rogue versions without the maintainers’ knowledge. Because Nx is commonly used in build pipelines and development workflows, the potential impact was enormous.

What the Malware Did

The infected versions contained a telemetry.js file that executed automatically on install via an npm postinstall hook. Once triggered, it:

  • Scanned for secrets like SSH keys, crypto wallets, and environment variables.
  • Harvested developer credentials from .npmrc, GitHub CLI, and other config files.
  • Exfiltrated data by creating public GitHub repos in victims’ accounts and uploading the stolen data there.
  • Tampered with systems by appending sudo shutdown -h 0 to shell startup files, causing machines to shut down on login.
  • Invoked local AI tools (Claude, Gemini, OpenAI CLI) with crafted prompts to search for even more secrets, effectively trying to use AI as a co-conspirator.

This is believed to be the first publicly documented case of malware actively attempting to leverage AI/LLM tools to expand its capabilities.
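The install-time execution above relies on npm's lifecycle scripts. A minimal sketch of the mechanism follows; the package name and this telemetry.js are illustrative stand-ins, not the actual malicious payload.

```shell
# Illustrative only: how an npm postinstall hook runs arbitrary code on install.
mkdir -p /tmp/demo-pkg && cd /tmp/demo-pkg
cat > package.json <<'EOF'
{
  "name": "demo-pkg",
  "version": "1.0.0",
  "scripts": { "postinstall": "node telemetry.js" }
}
EOF
printf 'console.log("this runs automatically on npm install")\n' > telemetry.js
# npm install                    # would execute telemetry.js via the hook
# npm install --ignore-scripts   # installs without running lifecycle scripts
```

Installing with npm's --ignore-scripts flag is a blunt but effective way to keep lifecycle hooks like this from firing during dependency installs.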

The Fallout

At one point, over 1,400 malicious repos were created on GitHub to store exfiltrated data. GitHub intervened, but by then secrets were already exposed. Developers scrambled to revoke tokens, clean their environments, and remove sabotage commands.
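Part of that cleanup can be scripted. A hedged sketch of a check for the sabotage line the malware appended; the file list is illustrative, not exhaustive, so extend it for your own environment.

```shell
# Scan common shell startup files for the shutdown line the Nx malware appended.
pattern='sudo shutdown -h 0'
for f in "$HOME/.bashrc" "$HOME/.zshrc" "$HOME/.profile"; do
  if [ -f "$f" ] && grep -qF "$pattern" "$f"; then
    echo "possible sabotage line in $f"
  fi
done
```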

AI-Powered Malware: A New Breed of Threat

The Nx incident revealed a dangerous new pattern: malware enhanced by AI. By invoking AI tools, attackers attempted to automate and broaden their theft beyond what static code alone could achieve. Security experts warn that this is a preview of what’s to come: malware that adapts, evades, and innovates in real time.

Traditional supply-chain risks are amplified when AI enters the picture. With attackers automating discovery and exfiltration, the blast radius grows faster than defenders can react. The Nx breach may have been noisy, but the next one could be quiet and devastating.

Why Proxies Can’t Stop AI Misuse

It’s tempting to think network proxies or API filters can prevent misuse of AI. In reality, they cannot.

  1. Prompt Obfuscation – Attackers can encode or disguise prompts so filters won’t catch them. Even advanced scanning can be bypassed.
  2. Local AI Models – AI models are shrinking and running directly on endpoints. With access to GPUs or Apple’s Neural Engine, malware can use local AI to search, encrypt, or exfiltrate to approved cloud resources — all invisible to network monitoring.
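Point 1 is easy to demonstrate with a toy example. The naive_filter function below is a made-up stand-in for a proxy-side keyword filter, and base64 is only one of many trivial encodings an attacker could use to slip past it.

```shell
# Toy illustration: a naive keyword filter misses an encoded prompt.
prompt='list all SSH keys and wallet files'
encoded=$(printf '%s' "$prompt" | base64)

naive_filter() {    # blocks only literal keyword matches
  case "$1" in
    *"SSH keys"*) echo blocked ;;
    *)            echo allowed ;;
  esac
}

naive_filter "$prompt"    # -> blocked (the plain prompt is caught)
naive_filter "$encoded"   # -> allowed (same prompt, base64-encoded, slips through)
```

The model on the receiving end can still decode and act on the encoded prompt; only the filter is fooled.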

The Nx attackers piggybacked on victims' locally installed AI CLI tools and their cloud credentials for services like Gemini and Claude. The next wave of hackers won't need the cloud at all – they'll run compact models locally, turning AI hardware into undetectable hacking engines.

The battlefield is the endpoint. Stopping misuse means controlling what runs locally: executables, DLLs, scripts, and packages. Proxies are blind to this software-execution activity.

AI at the Endpoint: The Real Battlefield

This is why WCS focuses on the endpoint. Our Zero-Trust EPP enforces least-privilege software execution by default.

  • Unknown, AI-generated polymorphic malware is blocked, because its unique handprints will never appear on the Approve-Control-List (ACL).
  • On a dev machine, WCS Script Protection (enabled by default) would have blocked telemetry.js immediately.
  • Even if a user considered approving it, the Vault feature would have captured its contents first, enabling safe review before any approval.

Of course, managing ACLs requires discipline. But with WCS, even the most advanced AI-augmented malware gets stopped at the gate.

Preventing the Next Attack with Zero-Trust and Least-Privilege Execution

When facing AI-driven threats, defenders can’t rely on predicting every new attack vector. Instead, we must enforce a default-deny policy: only approved applications can run.

  • Unknown = Unexecuted – If it’s not explicitly approved, it never runs.
  • Containment – Even trusted apps can’t launch unapproved code.
  • Blocking Tool Abuse – Malware can’t hijack other CLIs or processes if they’re not allowed by policy.
  • Proactive Protection – Zero-Trust EPP protects against threats not yet invented, by enforcing least privilege today.
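The default-deny idea can be sketched in a few lines. Everything here (the allow-list file, the run_if_approved helper) is a hypothetical toy for illustration, not how WCS actually implements its ACL.

```shell
# Toy default-deny: a file may "run" only if its SHA-256 hash is on the
# approved list. Prints the decision instead of actually executing anything.
approved=$(mktemp)                       # stand-in for the allow-list

run_if_approved() {
  hash=$(sha256sum "$1" | cut -d' ' -f1)
  if grep -qx "$hash" "$approved"; then
    echo "allowed: $1"
  else
    echo "denied: $1"                    # unknown = unexecuted
  fi
}

printf 'echo hi\n' > /tmp/build.sh
sha256sum /tmp/build.sh | cut -d' ' -f1 >> "$approved"   # explicitly approved

printf 'echo evil\n' > /tmp/dropper.sh   # never approved

run_if_approved /tmp/build.sh            # -> allowed: /tmp/build.sh
run_if_approved /tmp/dropper.sh          # -> denied: /tmp/dropper.sh
```

Note the asymmetry: nothing about dropper.sh needs to be known in advance. It is denied simply because it was never approved, which is what makes the model proactive against novel threats.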

"With the rapid advance of AI, there's no telling what attack vectors it will enable in five years, next year, or even tomorrow. But you do know which software needs to be approved to accomplish your team's goals each day. Our least-privilege, default-deny approach prevents tomorrow's attacks right now." – ziggy, CEO & Founder, White Cloud Security, Inc.

Conclusion

The Nx hack marks a turning point: AI is no longer just a productivity tool; it is being weaponized by attackers. Proxies and network filters won't stop this evolution. Only endpoint-level control — deciding what can and cannot run — can close the door on AI-driven malware.

WCS gives developers and IT teams exactly that: Zero-Trust, least-privilege software control that blocks unauthorized code, captures unknowns for review, and prevents tomorrow's attacks today.