
Axios NPM Hack and Claude Code Leak: What Your Engineering Team Needs to Know


The Axios npm package got backdoored on March 31, 2026 by North Korean hackers, and hours later Anthropic accidentally leaked the entire Claude Code source code through the same npm registry. Both events hit the JavaScript ecosystem on the same day. If your company runs Node.js in production, or if your developers use AI coding tools, you probably need to check whether you’re exposed. And then you need to have a harder conversation about whether your team has the security depth to handle what’s coming next.

We staff IT and engineering teams at KORE1. Cybersecurity roles, DevSecOps, full-stack, infrastructure. The morning after the Axios compromise went public, three separate clients called us asking the same question. Not “what happened.” They already knew what happened. The question was “do we have the right people to make sure this doesn’t wreck us next time?” For two of the three, the honest answer was no.


What Is a Software Supply Chain Attack?

A software supply chain attack compromises a trusted third-party component, like an open-source library or package, so that every application depending on it gets infected automatically. The developer does nothing unusual; their only mistake is trusting a package they’ve trusted a hundred times before. Instead of targeting your application directly, the attacker poisons something your application already trusts, and every developer who installs or updates that component pulls in the malicious code without knowing it.

Sounds academic until it lands in your build pipeline on a Monday morning. Then it’s a very specific kind of emergency that most teams are not staffed to handle.

The Axios NPM Attack: What Actually Happened

On March 31, 2026, a threat actor took over the npm account of jasonsaayman, one of the lead maintainers of Axios. Axios is a JavaScript HTTP client library. Simple description, enormous footprint. About 100 million downloads per week. Present in roughly 80% of cloud and code environments worldwide.

The attacker published two poisoned versions. axios@1.14.1 tagged as latest, and axios@0.30.4 tagged as legacy. Both pointed to compromised code. Any developer or CI/CD pipeline that ran npm install during a roughly three-hour window pulled a backdoored release.

Thirty-nine minutes. That’s how long the attacker needed to publish both versions.

The malicious versions introduced a dependency called plain-crypto-js, a package that didn’t exist before that day. It had one job. Run a postinstall script that silently downloaded and executed a cross-platform Remote Access Trojan. Windows, macOS, Linux. All three. The RAT gave attackers persistent access to whatever machine it landed on, which in many cases meant developer laptops with SSH keys, cloud credentials, API tokens, and database passwords sitting in environment files and credential stores.

Google’s Threat Intelligence Group attributed the attack to UNC1069, a financially motivated North Korean threat actor that’s been active since at least 2018. The malware deployed was WAVESHAPER.V2, an updated version of a backdoor previously linked to the same group.

The compromised versions were pulled within a few hours. But 3% of affected environments showed evidence of actual execution. When the denominator is 100 million weekly downloads, 3% is not a small number.

The Claude Code Source Leak: Same Day, Same Registry

While the security community was still processing the Axios compromise, a different kind of npm incident was unfolding. An intern at Solayer Labs named Chaofan Shou noticed something odd in version 2.1.88 of the @anthropic-ai/claude-code package. A 59.8 MB source map file, the kind used for internal debugging, had been published to the public npm registry by mistake.

That file contained the entire Claude Code codebase. Nearly 2,000 TypeScript files. Over 512,000 lines of code. Everything Anthropic built to power one of the most widely used AI coding assistants in the world, sitting on npm for anyone to download and read.

Within hours the code was mirrored to GitHub, forked thousands of times, and turned into the kind of open-source archaeology project the developer community loves, with people racing to be the first to document every hidden feature and internal decision they could find. At last count, the repo had over 84,000 stars and 82,000 forks.

What did people find inside?

A feature called KAIROS, referenced more than 150 times in the source. It’s an autonomous daemon mode that lets Claude Code run as an always-on background agent, performing what the code describes as “memory consolidation” while the developer is idle. Writing observation logs. Acting on things it notices without being prompted. There’s a 15-second blocking budget to prevent it from interrupting workflow with anything slow.

Internal model codenames. Capybara maps to a Claude 4.6 variant. Fennec maps to Opus 4.6. An unreleased model called Numbat is still in testing.

And something called “Undercover Mode,” which appears to enable stealth contributions to public open-source repositories.

Anthropic confirmed it was human error. A packaging mistake, not a breach. But it came just days after a separate incident where the company accidentally revealed details about an internal project called Mythos. Two leaks in five days from a company that positions itself as the “safety-focused” AI lab. The Register didn’t hold back on the irony.


Where These Two Incidents Collide

Here’s the part that should genuinely worry engineering leaders.

The Axios attack window started at 00:21 UTC on March 31. The compromised versions were pulled by 03:29 UTC. Anyone who installed or updated Claude Code via npm during that window may have pulled the backdoored Axios as a transitive dependency. Two separate incidents on the same registry, same day, compounding each other’s blast radius.

Anthropic now recommends using their native installer instead of npm specifically because of this. The native installer uses a standalone binary that doesn’t touch the npm dependency chain at all.

That recommendation tells you something about the current state of npm trust. Even the companies building tools on top of the ecosystem have started treating the registry itself as a risk surface rather than a distribution channel they can depend on without thinking about it.

What Your Team Should Do Right Now

Practical steps. Not theory.

Check your lockfiles today. Search package-lock.json and yarn.lock for axios@1.14.1, axios@0.30.4, or plain-crypto-js@4.2.1. If any of those appear, stop reading this and start your incident response plan. The system is compromised.
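That check can be scripted. A minimal sketch: plain-crypto-js didn’t exist before the attack, so any mention of it in a lockfile is a positive indicator, and npm ls shows which axios version your dependency tree actually resolves (run it from each project root; adjust paths for monorepos).

```shell
# Scan every lockfile in the repo for the phantom dependency; any hit
# means a poisoned install was recorded and incident response starts now.
grep -rn 'plain-crypto-js' --include='package-lock.json' --include='yarn.lock' . \
  && echo "MATCH FOUND: start incident response"

# Check which axios version the dependency tree actually resolves to.
npm ls axios 2>/dev/null | grep -E 'axios@(1\.14\.1|0\.30\.4)' \
  && echo "Compromised axios version resolved"

echo "scan complete"
```

Remember that a clean lockfile only proves what was recorded, not what ran; a machine that installed during the attack window and later updated can still be compromised.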

If you find a compromised version, don’t try to clean the machine. Rebuild from a known-clean snapshot or base image. The RAT establishes persistence, and surgically removing malware when you don’t know every hook it installed is a gamble you shouldn’t take with production infrastructure. Credential rotation means rotating everything: npm tokens, cloud access keys, SSH keys, database credentials, GitHub tokens, anything the compromised system could have touched while the backdoor was running. Revoke and reissue. Not rotate in place.

Block the command-and-control infrastructure. Domain is sfrclak.com, IP is 142.11.206.73 on port 8000. If you see any outbound traffic to either, you’ve confirmed execution.

Review CI/CD pipeline logs. Any npm install that ran between 00:21 and 03:29 UTC on March 31 could have pulled the compromised version. Build servers and CI runners are high-value targets because they typically have elevated permissions and access to deployment credentials.

| Indicator | Type | What It Means |
| --- | --- | --- |
| axios@1.14.1 | Compromised package | Backdoored latest version, pulled by default on npm install |
| axios@0.30.4 | Compromised package | Backdoored legacy version for older codebases |
| plain-crypto-js@4.2.1 | Malicious dependency | Phantom package, didn’t exist before the attack, delivers the RAT |
| sfrclak.com | C2 domain | Command-and-control server for the RAT payload |
| 142.11.206.73:8000 | C2 IP | Direct IP for payload delivery and communication |

Longer-Term Fixes That Actually Work

Pin exact versions. Stop using caret ranges like ^1.14.0. Every dependency should be an exact version number in your package.json, and your lockfile should be committed to version control. This alone would have prevented automatic installation of the compromised Axios release for any project that already had a clean lockfile committed.
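One way to enforce pinning is a project-level .npmrc, a sketch assuming standard npm config (save-exact is a real npm setting; the version number in the comment is a placeholder, not a vetted release):

```shell
# Project-level .npmrc: every `npm install <pkg>` from now on writes
# an exact version ("1.12.2") into package.json instead of "^1.12.2".
echo "save-exact=true" >> .npmrc

# Pin an existing dependency explicitly (version is a placeholder;
# use whichever release your team has actually vetted):
#   npm install --save-exact axios@1.12.2

cat .npmrc
```

Committing the .npmrc alongside the lockfile means every developer and every CI runner inherits the policy without remembering a flag.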

Use npm ci in every pipeline. Not npm install. The difference matters. npm ci respects the lockfile exactly. npm install can resolve to newer versions if the range allows it. One of those commands protects you from this attack. The other one doesn’t.

Disable postinstall scripts in CI. The entire Axios attack depended on a postinstall hook running automatically during installation. If your build pipeline runs npm install --ignore-scripts, or uses pnpm v10, which blocks automatic postinstall execution by default, this attack vector doesn’t work. Period.
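In practice that’s a single flag or one line of runner config. A sketch, assuming a standard npm-based CI job:

```shell
# One-off: install from the lockfile with all lifecycle scripts disabled,
# so a malicious postinstall hook never executes.
#   npm ci --ignore-scripts

# Persistent: bake the setting into the runner's .npmrc so nobody has to
# remember the flag on every pipeline.
echo "ignore-scripts=true" >> .npmrc

cat .npmrc
```

One caveat: some legitimate packages (native addons and prebuilt binaries) depend on postinstall to function, so teams that disable scripts usually pair it with an explicit allowlist or a separate, vetted build step for those packages.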

Implement a cooldown policy. SANS Institute and multiple security vendors recommend rejecting packages published within the last 72 hours. The Axios compromise lasted about three hours. A three-day cooldown policy would have blocked it entirely, with room to spare.
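A minimal version of that gate, sketched in shell: fetch the publish timestamp with npm view (the real command is shown in the comment), then refuse anything younger than 72 hours. The timestamp here is hardcoded for illustration, and the date syntax is GNU-specific.

```shell
# Hypothetical publish timestamp; a real gate would pull it from
#   npm view "$pkg@$ver" time --json
published="2026-03-31T00:21:00Z"

now=$(date -u +%s)
pub=$(date -u -d "$published" +%s)   # GNU date; macOS needs `date -j -f`
age_hours=$(( (now - pub) / 3600 ))

if [ "$age_hours" -lt 72 ]; then
  echo "BLOCKED: published ${age_hours}h ago, inside the cooldown window"
else
  echo "OK: version is ${age_hours}h old"
fi
```

Several dependency-management tools now expose this as a built-in setting, so check whether your registry proxy or package manager supports a minimum-age policy before rolling your own.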

Not every one of these fixes requires a security engineer to implement. But someone on your team needs to understand why each one matters, know which ones apply to your specific stack, and be empowered to enforce them even when they slow down a sprint. That person is usually either a DevSecOps engineer or a senior developer with security expertise. If you don’t have one, you felt that gap on March 31.


The Talent Problem Behind the Technical Problem

Both of these incidents, the Axios hack and the Claude Code leak, are fundamentally people problems.

A single maintainer’s stolen credentials gave North Korean hackers access to a package used by 80% of cloud environments. A packaging oversight at one of the most well-funded AI companies in the world exposed half a million lines of proprietary source code. No amount of tooling prevents a stolen token from being used by an attacker who has it. No CI/CD pipeline catches a source map file that was supposed to be excluded but wasn’t.

The gap is people. Specifically, it’s people with security expertise embedded in engineering teams, not just siloed in a separate security org that reviews things after the fact.

And the data makes it worse. The global cybersecurity workforce gap hit 4.8 million unfilled positions in 2025, a 19% jump in a single year, while the workforce itself barely moved at 0.1% growth. The hole is getting deeper far faster than anyone is filling it. In the U.S. alone, roughly 700,000 cybersecurity roles sit open right now. The BLS projects 29% employment growth for information security analysts through 2034, the fifth-fastest growing occupation in the country, with median pay at $124,910. And at every level, from junior SOC analyst to CISO, it’s one of the hardest roles to recruit for because the qualified candidate pool simply has not kept pace with demand.

67% of organizations report being short on cybersecurity staff. For 36% of them, filling a senior security role takes a year or more.

A year. Think about that in the context of an attack that took 39 minutes to execute.

What Roles Actually Close This Gap

Twenty-person security org? Overkill for most of the companies we work with. A 50-person startup running one Node.js service has a very different playbook than a 500-person company with microservices and a CI/CD pipeline that deploys to production twelve times a day, and the roles they need reflect that gap. But after an incident like the Axios compromise, it’s worth being specific about which hires would have made the difference and which ones are just panic spending.

DevSecOps engineers build security into the CI/CD pipeline from the start. They’re the ones who would have configured lockfile enforcement, script blocking, and cooldown policies before the attack, not after. If your pipeline ran npm install instead of npm ci on March 31, this is the role you were missing. Typical range right now is $140,000 to $185,000 depending on platform depth and clearance status.

Application security specialists do the dependency auditing and software composition analysis that catches malicious packages. We placed one at a Series C fintech last quarter who found three vulnerable transitive dependencies in their first week, none of which had been flagged by their existing tooling because nobody had configured the tooling correctly. That’s a $165,000 hire who paid for herself before her first monthly check cleared.

Then there’s incident response. The companies that handled March 31 well, the ones who contained exposure within hours instead of days, had IR capability already on staff or on retainer. The ones who didn’t are still figuring out which secrets to rotate as I write this. Good IR professionals don’t come cheap. $150,000 to $190,000 for mid-to-senior, higher for anyone with cloud-native forensics experience.

And increasingly, you need developers who understand AI tooling security. The Claude Code leak showed how deeply AI coding assistants are embedded in modern workflows. Feature flags, background agents, model routing, stealth contribution modes. If your developers are using these tools daily, and most are, someone on your team needs to understand the security surface they introduce. That’s a newer competency and it’s hard to hire for because the job description didn’t exist two years ago.

What the Claude Code Leak Tells Us About AI Tool Risk

Set aside the competitive intelligence angle for a minute. What matters for engineering teams is what the leak revealed about how AI coding tools actually work under the hood.

KAIROS, the always-on daemon mode, means Claude Code can run as a persistent background process watching your development environment. It writes observation logs. It can act without being prompted. The 15-second blocking budget suggests it’s designed to do real work in the background, not just sit idle.

Any CTO or VP of Engineering should be pulling up the vendor security review right about now, or realizing there never was one. What data does this tool have access to on developer machines? What gets sent upstream? What happens to the observation logs? Can you audit any of it? And does anyone on your team actually know how to configure it, or what to look for in those logs if they could?

Most companies adopted AI coding tools fast because the productivity gains were real and immediate, and nobody wanted to be the company that told their engineering team “you can’t use Claude Code or Copilot” while their competitors were shipping features twice as fast. Totally reasonable. But the security review of those tools lagged behind the adoption curve, and the Claude Code leak is a wake-up call that the review needs to happen now, not someday.

For what it’s worth, Anthropic’s response was transparent. They admitted the error, explained the cause, recommended the native installer over npm. That’s better than most companies handle an incident this embarrassing. But “handled the incident well” and “the tool has no security concerns” are different statements, and the second one requires more scrutiny than a blog post can provide.


Things Teams Are Asking Us Right Now

Did the Axios hack actually affect that many companies?

Axios has roughly 100 million weekly downloads. The compromised versions were tagged as latest, which is what npm install pulls by default. The attack window was about three hours and execution was confirmed in 3% of affected environments. That sounds low as a percentage. It’s not low as an absolute number when the install base is that large. Google attributed it to a North Korean state-backed group, UNC1069, which tells you something about the sophistication level.

Should we stop using npm entirely?

Not even close to the right reaction. But you should stop trusting it blindly, which is what most teams were doing before March 31 whether they realized it or not. Pin exact versions, commit lockfiles, use npm ci, disable postinstall scripts in CI, and implement a cooldown window before allowing newly published versions into your builds. Those five changes eliminate the specific attack vector used in the Axios compromise. They don’t make npm bulletproof, but they raise the bar enough that opportunistic attacks and even some targeted ones bounce off instead of landing.

Is Claude Code safe to use after the leak?

The source code leak itself doesn’t make the tool less functional, and nothing in the leaked code suggests Anthropic was doing anything malicious with user data, which is the first thing everyone jumps to when they hear “source code leak.” The real concern is a timing problem and a transparency problem, both happening at once. Anyone who installed version 2.1.88 via npm on March 31 may have pulled compromised Axios as a transitive dependency, so check that first, because that’s the actual security exposure. On the transparency side, the leaked source revealed features like KAIROS and Undercover Mode that raise questions about what the tool does in the background on developer machines, questions that most engineering teams never thought to ask before the leak made it unavoidable. Anthropic recommends the native installer going forward, which bypasses npm entirely.

Do we actually need to hire a security person over this?

Honest answer, it depends on what you already have in place. If you have a senior engineer who owns your CI/CD pipeline and already enforces lockfile pinning, script blocking, and dependency auditing, you’re probably covered for this specific type of attack, though “probably” is doing a lot of work in that sentence. If March 31 caught you flat-footed because nobody on staff thinks about supply chain security as part of their daily job, then yes, you have a real gap and the gap is going to get more expensive the longer you leave it open. The Axios attack is not the last one of its kind. The next one could target a package in your stack that nobody’s watching.

How fast can we actually get a DevSecOps or AppSec hire in place?

36% of organizations report it takes a year or more to fill senior cybersecurity roles. That’s the industry average. At KORE1, our average fill time for cybersecurity staffing engagements is significantly shorter because we maintain an active pipeline of pre-vetted security professionals across DevSecOps, AppSec, cloud security, and incident response. Contract-to-hire is often the fastest path if you need someone operational within weeks, not months.

Wait, North Korea is hacking npm packages?

Since at least 2018, actually. Google’s Threat Intelligence Group formally attributed the Axios compromise to UNC1069 on April 1, 2026, and it wasn’t the first time a North Korean group showed up in a software supply chain investigation by a long shot. The Lazarus Group has targeted cryptocurrency platforms and developer tools repeatedly over the past several years, sometimes with campaigns that run for months before anyone catches on. UNC1069 is financially motivated, active since 2018, and the malware they deployed in the Axios attack, WAVESHAPER.V2, has been linked to previous campaigns against software development infrastructure. Open-source package registries are high-value targets because one compromised package cascades into millions of installations without the attacker needing to touch any of those systems directly. State-backed groups have the resources and patience to keep doing this.

Where KORE1 Fits In

We’ve been placing cybersecurity and engineering talent for over twenty years. After incidents like the Axios compromise, the calls come in fast. CTOs who suddenly realize their three-person DevOps team has zero dedicated security coverage. VPs of Engineering whose CI/CD pipeline was running npm install instead of npm ci because nobody questioned the default. Companies that adopted AI coding tools org-wide without a security review and now need someone who can conduct one.

Our cybersecurity staffing practice covers DevSecOps engineers, application security specialists, cloud security architects, incident response, and the newer hybrid roles where security expertise intersects with AI tooling knowledge. Contract and contract-to-hire for speed. Direct hire when you need someone permanent.

92% retention rate across our IT staffing practice. 17-day average fill time. Those numbers matter more than usual when the gap you’re filling is the one that stops the next supply chain attack from becoming your incident.

Talk to our team if March 31 exposed a gap you need to close.
