GlassWorm Hijacked 72 VSCode Extensions to Steal Your Secrets. Your SOC 2 Program Probably Doesn't Cover IDE Plugins.

GlassWorm planted 72 malicious VSCode extensions on Open VSX to steal secrets and CI/CD tokens. What startups need to know about IDE security policies.

You vet your npm packages. You pin your Docker images. You run SCA scans in CI/CD. But what about the extensions running inside your code editor right now?

Socket researchers disclosed on Friday that a campaign called GlassWorm has been planting malicious extensions on the Open VSX registry since January 31, 2026. Seventy-two extensions so far. They mimic linters, formatters, code runners, and AI coding assistants - including fake versions of Claude Code and Google Antigravity. Once installed, they exfiltrate secrets, CI/CD tokens, cryptocurrency wallet contents, and environment variables to attacker-controlled C2 servers.

This isn't a vulnerability in VSCode. It's a supply chain attack that exploits the same blind spot in every development team I've worked with: nobody governs what extensions developers install in their IDE.

I wrote about the PhantomRaven npm supply chain campaign yesterday. That attack used Remote Dynamic Dependencies to steal CI/CD credentials through npm's own dependency resolution. GlassWorm uses a different vector - IDE extensions instead of packages - but the playbook is the same: infiltrate the developer toolchain, harvest credentials, move laterally. The difference is that most startups have at least some npm hygiene. Almost none have IDE extension policies.

How GlassWorm Works: Transitive Extension Delivery

The clever part of GlassWorm isn't the malware itself. It's the delivery mechanism.

VSCode extensions have a package.json file that includes two fields most developers never think about: extensionPack and extensionDependencies. These fields tell VSCode to automatically install other extensions when you install the parent. It's designed for convenience - an Angular extension pack can bundle a TypeScript linter, an HTML formatter, and a debugging tool into a single install.

GlassWorm abuses this mechanism for transitive delivery. The initial extension looks benign. It might be a simple code formatter with a few hundred downloads and a convincing README. But its extensionPack field references a second extension that contains the actual payload. The developer installs what looks like a harmless tool. VSCode silently installs the malicious dependency. No prompt. No confirmation. No indication that a second extension was added.

Here's a simplified example of what this looks like:

{
  "name": "vscode-xml-extension",
  "extensionPack": [
    "malicious-publisher.hidden-payload-ext"
  ]
}

The developer sees "XML Extension" in their extensions view. The hidden payload only shows up if they explicitly inspect the full list of installed extensions - and most developers never do.
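One quick way to catch a silent companion install is to snapshot the extension list before and after installing anything new, then diff the two. This sketch reuses the hypothetical extension IDs from the example above, with printf standing in for real `code --list-extensions` output:

```shell
# Simulated before/after extension lists. In practice, replace the printf
# lines with: code --list-extensions | sort > before.txt (and after.txt).
printf 'dbaeumer.vscode-eslint\n' | sort > before.txt
printf 'dbaeumer.vscode-eslint\nmalicious-publisher.hidden-payload-ext\nsome-publisher.vscode-xml-extension\n' | sort > after.txt

# comm -13 prints lines unique to after.txt: everything newly added,
# including any extension pulled in silently via extensionPack.
comm -13 before.txt after.txt
```

Anything in the output beyond the extension you deliberately installed deserves a close look.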

The C2 Infrastructure: Solana as a Dead Drop

GlassWorm's command-and-control infrastructure is unusually sophisticated for an extension-based attack. Instead of hardcoding C2 server addresses (which defenders can blocklist), the malware reads Solana blockchain transactions to retrieve the current C2 address.

Here's why that matters: you can't take down a blockchain transaction. Traditional C2 takedowns involve contacting hosting providers and domain registrars to shut down malicious servers. When the C2 address is embedded in an immutable blockchain transaction, defenders can't remove the pointer. The attackers rotate C2 servers and publish new Solana transactions pointing to the replacement. The malware queries the blockchain, gets the new address, and keeps exfiltrating.
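The dead-drop lookup itself is just a read-only call against Solana's public JSON-RPC API - which also means defenders can inspect the same transactions the malware reads. A sketch of the request shape, with a placeholder transaction signature (the endpoint shown is Solana's public mainnet node):

```shell
# Shape of a read-only Solana getTransaction JSON-RPC request -- the same
# public API a dead-drop resolver can use to read a transaction and recover
# the current C2 address. The signature below is a placeholder, not a real IoC.
SIG="PLACEHOLDER_TRANSACTION_SIGNATURE"
REQUEST=$(printf '{"jsonrpc":"2.0","id":1,"method":"getTransaction","params":["%s",{"encoding":"jsonParsed"}]}' "$SIG")
echo "$REQUEST"

# In practice this payload is POSTed to any public RPC endpoint, e.g.:
# curl -s https://api.mainnet-beta.solana.com -X POST \
#   -H 'Content-Type: application/json' -d "$REQUEST"
```

Because any node can serve this query, there is no single server to seize and no domain to sinkhole.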

The campaign also checks the developer's system locale before executing. If the locale is set to Russian, the malware doesn't run. This is a common technique in campaigns originating from Russian-speaking threat actors - they avoid infecting systems in their home country to reduce the risk of domestic law enforcement attention.

What Gets Stolen

GlassWorm isn't looking for source code. It's looking for the keys that unlock everything else:

  • Environment variables: API keys, database credentials, cloud provider secrets
  • CI/CD tokens: GitHub Actions secrets, GitLab CI variables, Jenkins credentials
  • Cryptocurrency wallets: wallet private keys and seed phrases
  • SSH keys: private keys for remote server access
  • Git credentials: tokens for repository access and push permissions
  • System metadata: OS, hostname, username, installed software

This is the same target set as PhantomRaven, and for the same reason: CI/CD credentials and API keys are the fastest path to lateral movement. Steal a GitHub Actions token, and you can inject code into production deployments. Steal a cloud provider key, and you own the infrastructure.

The Parallel Campaign: GitHub Repos and npm

GlassWorm isn't limited to VSCode extensions. Aikido security researchers documented a parallel push across multiple platforms during the same timeframe:

  • 151 GitHub repositories were injected with invisible Unicode characters between March 3 and 9, 2026. The characters hide malicious code in files that look normal in GitHub's web interface but execute payloads when cloned and run locally.
  • 2 npm packages were compromised: @aifabrix/miso-client and @iflow-mcp/watercrawl-watercrawl-mcp. Both used similar obfuscation techniques.

The multi-platform approach is significant. Defenders who focused exclusively on npm after PhantomRaven would have missed the extension-based and GitHub-based vectors entirely. GlassWorm treats the entire developer toolchain as an attack surface: the registry where you get packages, the editor where you write code, and the repositories where you store it.

There's also evidence that the attackers used large language models to generate convincing cover commits - realistic-looking documentation tweaks, version bumps, and code comments that make compromised repositories look actively maintained by legitimate developers. This is supply chain social engineering at scale.

Your SOC 2 Program Probably Has an IDE-Shaped Hole

I've reviewed dozens of SOC 2 control matrices for startups. Every single one addresses package management in some form - lockfiles, SCA scanning, vulnerability patching cadences. I've never seen one that addresses IDE extensions.

This matters because SOC 2's Trust Services Criteria don't distinguish between a malicious npm package and a malicious VSCode extension. Both are unauthorized software components that accessed sensitive data in your environment. Both represent control failures.

CC6.1 - Logical Access Controls: If a VSCode extension can read your environment variables and SSH keys, it has logical access to those secrets. Your access controls didn't prevent an unauthorized component from accessing sensitive credentials. The fact that the developer voluntarily installed it doesn't change the control failure - CC6.1 requires that access be restricted to authorized components, and a malicious extension masquerading as a code formatter is not authorized.

CC9.2 - Vendor and Business Partner Risk Management: IDE extensions are third-party software from external vendors. CC9.2 requires you to assess the risks associated with third-party components. If your risk assessment process covers npm packages but not IDE extensions, you have a gap in your vendor risk management that an auditor could flag.

CC7.1 - System Monitoring: Your monitoring should detect unauthorized software changes in your environment. A VSCode extension that installs secondary extensions via extensionPack without the developer's explicit knowledge is an unauthorized change. If your monitoring doesn't cover IDE extensions, this change goes undetected.

CC8.1 - Change Management: Extensions that modify your development environment should go through a change management process. In practice, most developers install extensions on a whim during a debug session and never remove them. That's not change management.

If you're building toward SOC 2 compliance, the fix isn't complicated. You just need to extend your existing supply chain controls to cover IDE extensions the same way they cover package dependencies.

A Practical IDE Extension Security Policy

Most startups don't need a 20-page policy document. They need a short, enforceable set of rules that developers will actually follow. Here's what I'd implement:

1. Maintain an Approved Extensions List

Create a list of approved extensions for your team. This doesn't mean blocking everything else (though if your organization supports it, that's ideal). It means establishing a baseline so developers know which extensions have been vetted and which haven't.

A minimal approved list for a TypeScript startup might look like:

  • ESLint (publisher: dbaeumer): linting
  • Prettier (publisher: esbenp): formatting
  • GitLens (publisher: gitkraken): Git history
  • Docker (publisher: ms-azuretools): container management
  • Remote - SSH (publisher: ms-vscode-remote): remote development

The detail that matters most is the publisher. GlassWorm extensions used publisher names like crotoapp, gvotcha, and tamokill12 - none of which are established publishers. Checking the publisher is the fastest way to identify suspicious extensions.
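That publisher check is easy to script, since extension IDs follow the publisher.name form. A minimal sketch - the function name and approved-publisher list here are illustrative, not a standard tool:

```shell
#!/bin/sh
# Hypothetical audit helper: flag installed extensions whose publisher
# prefix is not on the approved list. Extension IDs are publisher.name.
approved_publishers="dbaeumer esbenp gitkraken ms-azuretools ms-vscode-remote"

flag_unapproved() {
  # Reads extension IDs on stdin; prints any whose publisher is not approved.
  while IFS= read -r ext; do
    pub=${ext%%.*}                         # everything before the first dot
    case " $approved_publishers " in
      *" $pub "*) ;;                       # approved publisher: ignore
      *) printf '%s\n' "$ext" ;;           # unknown publisher: flag it
    esac
  done
}

# In practice: code --list-extensions | flag_unapproved
printf 'dbaeumer.vscode-eslint\ncrotoapp.fake-formatter\n' | flag_unapproved
```

Wire this into your quarterly audit and unfamiliar publishers surface automatically instead of relying on someone eyeballing the list.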

2. Use the Official VS Code Marketplace, Not Open VSX

GlassWorm specifically targeted the Open VSX registry. Microsoft's VS Code Marketplace has stronger publisher verification and more aggressive automated malware scanning. It's not foolproof - malicious extensions have appeared on the official marketplace too - but the bar for publishing is higher.

If your team uses VSCodium or another Open VSX-based editor, this is a real risk to address. The open registry model that makes Open VSX philosophically appealing also makes it easier for attackers to publish malicious extensions.

3. Audit Installed Extensions Quarterly

Run a quarterly check of what extensions your team actually has installed. You can export the list programmatically:

# List all installed extensions
code --list-extensions --show-versions > extensions-audit.txt

# Compare against an approved list (process substitution requires bash)
diff <(sort approved-extensions.txt) <(code --list-extensions | sort)

Any extension not on the approved list should be reviewed. Any extension from an unfamiliar publisher should be investigated. Any extension that hasn't been updated in over a year should be questioned - abandoned extensions are acquisition targets for attackers.

4. Restrict Extension Auto-Installation in Managed Environments

If your developers use company-managed machines, you can configure VSCode to restrict extension behavior. Settings such as extensions.ignoreRecommendations and extensions.autoCheckUpdates in settings.json provide basic controls:

{
  "extensions.ignoreRecommendations": true,
  "extensions.autoCheckUpdates": false
}

For stronger controls, consider deploying extensions through a private extension registry or a configuration management tool that pushes approved extensions to developer machines.
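Recent VS Code releases also added an extensions.allowed setting aimed at managed environments - if your version supports it, you can allowlist by publisher or by individual extension ID. A sketch (the entries are illustrative, and you should confirm the exact value format against the documentation for your VS Code version):

```json
{
  "extensions.allowed": {
    "dbaeumer": true,
    "esbenp": true,
    "gitkraken": true,
    "malicious-publisher.hidden-payload-ext": false
  }
}
```

Deployed via device management policy rather than per-user settings, this turns the approved list from a wiki page into an enforced control.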

5. Check extensionPack and extensionDependencies Before Installing

Before installing any extension from an unfamiliar publisher, check its package.json for extensionPack and extensionDependencies fields. If an extension pulls in secondary extensions from different publishers, that's a red flag.

You can inspect an extension's package.json before installing by downloading the .vsix file and extracting it:

# Download the .vsix from the marketplace website, then inspect
unzip extension.vsix -d extension-contents
grep -A 5 -E "extensionPack|extensionDependencies" extension-contents/extension/package.json

If the extension bundles dependencies from publishers you don't recognize, don't install it.
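If jq is available, a structured query over the manifest is less fragile than grep, since it handles multi-line arrays and missing fields cleanly. The sample manifest below stands in for extension-contents/extension/package.json and reuses the hypothetical IDs from earlier:

```shell
# Sample manifest standing in for extension-contents/extension/package.json.
cat > package.json <<'EOF'
{
  "name": "vscode-xml-extension",
  "extensionPack": ["malicious-publisher.hidden-payload-ext"]
}
EOF

# List every bundled extension, treating absent fields as empty arrays.
jq -r '(.extensionPack // []) + (.extensionDependencies // []) | .[]' package.json
```

An empty result means no transitive installs; anything printed is an extension that would be installed silently alongside the one you chose.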

What SOC 2 Auditors Will Ask (and What to Answer)

If you're going through a SOC 2 audit and GlassWorm comes up (or any IDE-based supply chain attack), here are the questions your auditor is likely to ask and the answers they want to hear:

"How do you control what software is installed on developer workstations?"

Good answer: "We maintain an approved extensions list. Developers can request additions through our change management process. We audit installed extensions quarterly."

Bad answer: "Developers install whatever they need."

"How do you assess third-party components in your development environment?"

Good answer: "Our vendor risk assessment covers all third-party software including IDE extensions. We evaluate publisher reputation, extension permissions, and update frequency before approving new extensions."

Bad answer: "We run npm audit."

"How would you detect a malicious extension in your environment?"

Good answer: "Our extension audits would identify unauthorized extensions. Our security toolstack includes endpoint monitoring that would flag unusual outbound connections from development machines. We also monitor for credential use from unexpected locations."

Bad answer: "We trust the marketplace."

The Bigger Pattern: Your Entire Toolchain Is the Attack Surface

GlassWorm and PhantomRaven together paint a clear picture of where supply chain attacks are heading. Attackers aren't just targeting one registry or one package manager. They're targeting the entire developer workflow:

  1. Package registries (npm, PyPI, Packagist) - inject malicious dependencies
  2. IDE extensions (VS Code Marketplace, Open VSX) - compromise the editor itself
  3. GitHub repositories - inject hidden code via Unicode obfuscation
  4. CI/CD actions (GitHub Actions, GitLab CI) - poison the build pipeline
  5. AI coding assistants - impersonate trusted AI tools to gain access

The common thread is trust. Developers trust their package manager to serve safe packages. They trust their IDE marketplace to vet extensions. They trust their Git repositories to contain the code they committed. Each of these trust assumptions is an attack surface.

For startups building a security-first development practice, the lesson is that supply chain security isn't just about npm audit. It's about every tool in the developer's environment, from the editor they write code in to the CI/CD pipeline that deploys it. The organizations that treat their entire toolchain as a trust boundary - not just the package registry - are the ones that will catch the next GlassWorm before it catches them.

Seventy-two malicious extensions. Multiple registries. Blockchain-based C2 infrastructure. LLM-generated cover commits. This is what industrialized supply chain attacks look like in 2026. Your IDE extension policy doesn't need to be complicated. It just needs to exist.


Need help building supply chain controls for your SOC 2 program? Let's talk.