A critical prompt injection vulnerability in Gemini CLI — Google's open-source command-line AI coding agent — could have allowed an attacker to execute arbitrary commands on developer systems and push malicious code directly to the tool's main GitHub repository, distributing a backdoor to every downstream user who updated the package. Pillar Security discovered and reported the flaw. Google patched it in version 0.39.1, released April 24, 2026.
The Vulnerability: Prompt Injection in --yolo Mode
No CVE identifier has been assigned to this vulnerability. Pillar Security rated it at a severity score of 10 out of 10 (Critical).
The flaw exploited prompt injection — a class of attack specific to AI systems where malicious text embedded in data processed by the AI causes it to deviate from intended behavior and execute unintended actions. The concept is analogous to SQL injection (a decades-old technique where database commands are smuggled inside user-supplied input to manipulate database queries): instead of injecting SQL into a query, the attacker injects instructions into content an AI agent is expected to process, and the model obeys those instructions as if they were legitimate operator commands.
The vulnerability existed specifically in Gemini CLI's --yolo mode — an operational setting that automatically approves all tool calls without prompting the user for confirmation. In standard interactive mode, Gemini CLI pauses and asks before executing potentially sensitive operations (file writes, shell commands, API calls). With --yolo active, those confirmation prompts are bypassed entirely for speed and automation.
The attack chain worked as follows:
- A developer or automated pipeline runs Gemini CLI in
--yolomode as part of a GitHub Actions workflow configured to auto-triage new issues on a public repository. - An attacker opens a GitHub issue on the target repository containing injected prompts — text that appears to be a legitimate bug report or feature request but also embeds instructions directing the AI agent to execute attacker-controlled shell commands.
- The Gemini CLI agent processes the issue content, encounters the injected instructions, and — with
--yolomode having disabled all approval allowlists — executes the commands without restriction. - From those initial commands, the attacker obtains repository authentication credentials and escalates to a token with full write access to the repository — sufficient to push arbitrary code directly to the main branch.
Supply Chain Impact: Every Downstream User
The escalation from individual developer compromise to supply chain attack — where a trusted software package is compromised at the source to deliver malicious code to all its users — is what elevates this vulnerability to Critical status.
Because the run-gemini-cli GitHub Action was used in CI/CD workflows across the Gemini CLI repository itself, a successful attack would have allowed an adversary to push backdoored code to google-gemini/gemini-cli's main branch. That code would have been packaged into the next release and shipped to every developer who runs npm install @google-gemini/gemini-cli or npm update — potentially compromising thousands of developer environments globally without any action required on the victim's part.
Pillar Security's analysis found that at least eight other Google repositories used the same vulnerable workflow template, meaning the attack surface extended well beyond a single project. Any Google-internal or third-party repository running the same run-gemini-cli Action with untrusted trigger events was theoretically at risk.
The Headless Mode Trust Issue
The 0.39.1 patch also addressed a second, separate vulnerability in Gemini CLI's headless mode — the non-interactive execution mode designed for automation pipelines. In headless operation, Gemini CLI automatically trusted the current workspace folder without verification, potentially exposing credentials, .env files, API keys, and service account keys stored in that directory to the AI agent's processing pipeline. Malicious instructions arriving through a prompt injection attack could then redirect those secrets to an attacker-controlled endpoint.
Who Is Affected
Any organization or developer using Gemini CLI versions prior to 0.39.1 — particularly those running it in --yolo mode within GitHub Actions workflows triggered by external events (issue creation, comments, PR submission) — was exposed to this attack vector.
Gemini CLI is a newer tool (released 2025) but has seen rapid adoption among developers using AI assistance for code review, refactoring, and issue triage automation. The combination of AI agents with repository write access and automated CI/CD triggers creates a new category of attack surface that is not covered by traditional SAST (Static Application Security Testing — tools that analyze source code for security flaws without executing it) or DAST tools.
What You Should Do Right Now
- Upgrade Gemini CLI to version 0.39.1 or later immediately. Run
npm install -g @google-gemini/gemini-cli@latestto update. Verify withgemini --version. - Audit
--yolomode usage in CI/CD pipelines. Review whether any automated workflow invokes Gemini CLI with--yolo. If yes, assess whether those workflows are triggered by untrusted external events (external contributor issues, comments, or PRs). - Inspect GitHub Actions trigger events. For any workflow running Gemini CLI, check the
on:block forissues: opened,issue_comment, orpull_requesttriggers from forks or external contributors. Consider requiring maintainer approval before the Gemini CLI action fires on external-origin events. - Review recent repository commit history. If your installation used a pre-0.39.1 version in an exposed workflow, examine recent commits for unexpected changes to CI/CD scripts, configuration files, or code logic you did not author.
- Audit workspace directories used in headless mode. Identify directories containing credentials or secrets where Gemini CLI runs non-interactively, and ensure those files are not accessible to the agent's processing context.
# Check installed version
npm list -g @google-gemini/gemini-cli
# Update to latest
npm install -g @google-gemini/gemini-cli@latest
Background: AI Agents and the Prompt Injection Problem
Prompt injection is emerging as one of the most structurally challenging vulnerability classes in AI-integrated software. Unlike traditional injection attacks — SQL injection, command injection, path traversal — which exploit predictable parsing behaviors in deterministic systems, prompt injection exploits an AI model's inability to reliably separate instructions from its legitimate operator from instructions embedded in external data it processes.
As AI coding agents gain the ability to read codebases, write files, execute shell commands, make API calls, and commit to version control, the consequence of a successful prompt injection escalates dramatically. An agent that can write to production code is a far more powerful attack surface than a chatbot that answers questions.
The Gemini CLI case is instructive precisely because the vulnerable mode (--yolo) was designed to remove friction in automation — but removed the human-in-the-loop oversight that prevents AI agents from being weaponized by injected instructions. This is a known tension in agentic AI design: the more autonomy granted to an agent, the larger the blast radius of a successful injection.
Pillar Security's research and independent analyses across the AI security community indicate this is a systemic issue across multiple AI development tools, not a problem unique to Gemini CLI. Organizations adopting AI coding agents should treat prompt injection as a first-class threat model consideration — specifically, what data sources does the agent process, and what permissions does it hold?
Conclusion
The Gemini CLI prompt injection vulnerability is a demonstration that AI coding agents with repository write access and automated trigger mechanisms represent a new and potent supply chain attack surface. Update to version 0.39.1, restrict --yolo mode in untrusted contexts, and review GitHub Actions workflows for prompt-injectable trigger events.
For any query contact us at contact@cipherssecurity.com

