Hugging Face and ClawHub Abused in Active Malware Distribution Campaign
Threat actors are actively abusing Hugging Face model repositories and the ClawHub AI agent skills marketplace to deliver malware through social engineering, with security researchers documenting over 800 malicious skills targeting OpenClaw users in 2026. The campaigns target AI engineers, developers, and DevOps professionals, deploying payloads that exfiltrate cryptocurrency wallet keys, SSH credentials, and browser-stored passwords.
Hugging Face ClawHub Malware: What We Know So Far
ClawHub is the official skills marketplace for OpenClaw, an AI agent platform. Skills are lightweight packages that extend OpenClaw’s capabilities — legitimate use cases include code generation, API wrappers, and automation workflows.
Researchers have documented two concurrent campaigns exploiting these platforms:
ClawHavoc — 335 skills impersonate legitimate packages and use fake prerequisite installation steps to silently execute the Atomic macOS Stealer (AMOS) payload. AMOS is a well-established infostealer that targets macOS users, exfiltrating browser-stored passwords, cryptocurrency wallet seed phrases, private keys, and session cookies. All 335 skills in this cluster share the same command-and-control (C2) infrastructure, pointing to a single threat actor or coordinated group.
Cross-platform credential theft — A second cluster of 575+ malicious skills, published by 13 developer accounts, delivers both Windows and macOS payloads including AMOS variants and cryptominers. Two primary accounts — identified by researchers as hightower6eu and sakaen736jih — account for the majority of uploads.
The barrier to publishing on ClawHub is minimal: a SKILL.md file and a GitHub account at least one week old, with no code signing, no automated security review, and no default sandbox at install time. Snyk’s ToxicSkills study, which scanned 3,984 skills from ClawHub, found that 36% contained prompt injection vulnerabilities and documented 1,467 malicious payloads across the registry.
On Hugging Face, attackers upload malicious model files — typically serialized Python objects in pickle format (.pt, .pkl) — and datasets with embedded payloads. A user who downloads and loads these files without sandboxing executes attacker-controlled Python code at model-load time, a technique that has been observed in multiple prior Hugging Face abuse campaigns. The current wave adds polished repository metadata and well-crafted README files that closely mimic legitimate research repositories.
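The load-time execution risk can be checked mechanically before a file is ever loaded: Python's standard-library `pickletools` disassembles a pickle stream without executing it, so a scanner can flag the opcodes (GLOBAL/STACK_GLOBAL, REDUCE, and friends) that import and call arbitrary callables. A minimal sketch — the opcode set and the `scan_pickle` helper are illustrative, not part of the researchers' tooling:

```python
import io
import pickletools

# Opcodes capable of importing or invoking attacker-controlled callables
# at unpickling time. Legitimate tensor data never needs these.
SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_pickle(data: bytes) -> list:
    """Return (opcode, arg, offset) tuples for suspicious opcodes,
    without ever loading the pickle."""
    hits = []
    for opcode, arg, pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in SUSPICIOUS_OPS:
            hits.append((opcode.name, arg, pos))
    return hits
```

A plain dict of weights pickles cleanly with no suspicious opcodes, while a payload built on `__reduce__` (the standard pickle code-execution primitive) lights up with STACK_GLOBAL and REDUCE hits.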
According to the SecurityWeek report, the common thread across both platforms is social engineering: attackers invest in presentation — plausible package names, formatted documentation, and fake download metrics — rather than relying on zero-day exploits.
Why Hugging Face ClawHub Malware Matters
AI platform users represent a disproportionately high-value target. ML engineers, security researchers, and DevOps professionals routinely hold access to cloud provider credentials, CI/CD pipeline secrets, production API keys, and cryptocurrency holdings. A credential-stealing payload on one of these workstations can pivot directly to cloud environments, container registries, and software supply chains.
This attack pattern is also harder to detect than traditional package manager supply chain compromises. Rather than injecting malicious code into a legitimate package update, attackers create net-new packages with professional presentation and rely on the implicit trust users extend to curated marketplaces. The combination of Hugging Face’s 5 million+ registered models and ClawHub’s rapidly growing skills ecosystem means the attack surface is expanding faster than existing security tooling can cover it.
The scale documented by researchers — 800+ confirmed malicious skills, a 13% critical-flaw rate across recently published skills — suggests these platforms have already become established distribution infrastructure for threat actors, not isolated incidents.
Hugging Face ClawHub Malware: What You Should Do Now
- Audit installed ClawHub skills immediately. Run `openclaw skills list` to enumerate installed skills. Cross-reference against the malicious skill hashes published by Snyk and Acronis TRU. Uninstall any unrecognized or unverified skills with `openclaw skills remove <skill-name>`.
- Revoke and rotate exposed credentials. If you have installed any ClawHub skill in the past 30 days, treat your SSH keys, browser-saved passwords, and all cryptocurrency exchange API keys as potentially compromised. Rotate them proactively.
- Scan Hugging Face downloads before loading. Run downloaded `.pt`, `.pkl`, and `.bin` model files through a pickle safety tool before loading them: `fickling --check-safety model.pkl`. Fickling detects embedded code execution in serialized Python files. Never run `torch.load()` or `pickle.load()` on untrusted files without a sandbox.
- Enforce sandboxed skill execution. Until ClawHub implements mandatory code signing, configure OpenClaw to run skills inside an isolated environment. Container-based sandboxing (e.g., rootless Podman or Docker with dropped capabilities) limits the blast radius of a compromised skill.
- Monitor for AMOS persistence indicators. On macOS, check `~/Library/LaunchAgents/` and `/Library/LaunchAgents/` for recently added plist files with randomized names. AMOS installs a persistence agent at these paths. Also review outbound connections from developer workstations to domains registered within the past 60 days.
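The LaunchAgents persistence check above is easy to script. A minimal sketch, assuming Python is available on the analyst workstation — the `recent_plists` helper name and the 30-day window are illustrative choices, not prescribed by the researchers:

```python
import time
from pathlib import Path

def recent_plists(directory, days=30):
    """Return .plist files in `directory` modified within the last `days` days."""
    cutoff = time.time() - days * 86400
    root = Path(directory).expanduser()
    if not root.is_dir():
        return []
    return sorted(p for p in root.glob("*.plist") if p.stat().st_mtime >= cutoff)

# Check both the user- and system-level LaunchAgents paths AMOS is known to use.
for d in ("~/Library/LaunchAgents", "/Library/LaunchAgents"):
    for plist in recent_plists(d):
        print(plist)
```

Any hit still needs manual review — recently installed legitimate software also drops launch agents — but randomized filenames in this output are a strong AMOS indicator.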
Detection and Verification Checklist
- Publisher age check: Malicious ClawHub accounts are typically less than one month old and show rapid skill uploads. Reject skills from publishers with fewer than 30 days of account history or with no prior public contributions.
- Inspect pickle files: `fickling model.pkl` decompiles the stream and prints the embedded Python it would execute. A legitimate model file contains only tensor data; any `subprocess`, `os`, or `exec` calls are immediate red flags.
- Review skill permissions at install time: Skills requesting filesystem access, network access, or credential store access without a documented justification in SKILL.md should not be installed.
- Scan for AMOS telemetry: Monitor DNS queries from developer workstations for newly registered domains (WHOIS age < 60 days). AMOS rotates C2 domains and exfiltrates over HTTPS to these hosts.
- Verify Hugging Face model cards: Legitimate research models include training details, evaluation benchmarks, dataset citations, and linked papers. Models with sparse auto-generated README content and no associated research are high-risk downloads.
- Next-source verification: The Snyk ToxicSkills report and Acronis TRU advisory are the primary technical sources for IOCs. Monitor Snyk’s ClawHub threat feed for updated malicious skill hashes as the campaign evolves.
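The newly-registered-domain heuristic in the AMOS telemetry item reduces to a date comparison once a WHOIS creation date is in hand (from any WHOIS client or passive DNS feed). A sketch, assuming the 60-day window from the checklist — the function name is hypothetical:

```python
from datetime import datetime, timedelta, timezone

NRD_WINDOW_DAYS = 60  # domains younger than this are treated as suspicious

def is_newly_registered(creation_date, now=None):
    """True if the WHOIS creation date falls inside the NRD window."""
    now = now or datetime.now(timezone.utc)
    return (now - creation_date) < timedelta(days=NRD_WINDOW_DAYS)
```

In practice this gate should feed an alert queue rather than an automatic block, since legitimate services occasionally sit on young domains.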
Sources: SecurityWeek, The Hacker News — 341 Malicious ClawHub Skills, Snyk ToxicSkills Research, Acronis TRU — Poisoning the Well
For any queries, contact us at contact@cipherssecurity.com

