Braintrust AWS Breach Exposes AI Provider API Keys, All Customers Ordered to Rotate Secrets

AI evaluation and observability startup Braintrust confirmed on May 5, 2026 that attackers had gained unauthorized access to one of its internal AWS (Amazon Web Services) accounts, compromising AI provider API keys stored by its enterprise customers. Braintrust has directed all organization administrators who stored third-party AI model credentials on the platform to immediately delete and regenerate those secrets. One customer has been confirmed impacted; three additional customers reported suspicious spikes in AI provider usage following the incident.

What Braintrust Is and Why This Matters

Braintrust is an AI evaluation platform — a tool that lets engineering and product teams test, benchmark, and monitor large language model (LLM) outputs in production. To do its job, Braintrust needs to call the AI providers directly on behalf of its customers, which means customers store their API keys (authentication tokens that grant access to LLM APIs from providers such as OpenAI, Anthropic, and others) inside the Braintrust platform.

This credential storage model creates a specific risk: the platform becomes, in effect, a vault for AI provider secrets across every customer organization. When that vault is breached, the exposure is not limited to Braintrust's own systems — it extends to every AI provider account whose credentials were stored there.

Security researcher Jaime Blasco, co-founder of Nudge Security (a non-human identity security firm), described the underlying dynamic precisely: "Every AI eval, observability, and gateway tool a company adopts becomes a credential warehouse, and those warehouses are now a tier-one target." Braintrust raised $80 million in a Series B round in February 2026 at an $800 million valuation, reflecting the rapid enterprise adoption of AI evaluation tooling — and the corresponding increase in the attack surface these platforms represent.

Breach Timeline and Technical Details

| Date | Event |
|---|---|
| May 4, 2026 | Unauthorized access detected in one of Braintrust's internal AWS accounts |
| May 5, 2026 | Braintrust contained the incident, locked down the compromised account, rotated internal secrets, and published a disclosure on its website |
| May 6, 2026 | Customer email notifications sent to all org admins with stored AI provider secrets; TechCrunch and other outlets published initial reports |
| May 8, 2026 | SecurityWeek coverage published |

The specific attack vector — phishing, credential stuffing, misconfigured AWS IAM (Identity and Access Management) policy, or supply chain compromise — was not disclosed in Braintrust's public statement, and the company stated the cause remained under active investigation.

Braintrust's official communications confirmed that the compromised AWS account held customer-stored API keys used to access cloud-based AI models. The company stated it had "locked down the compromised account, audited and restricted access across related systems, and rotated internal secrets."

Who Is Affected

Braintrust confirmed one customer with direct evidence of impact and noted that three additional customers experienced suspicious activity in the form of unexpected spikes in AI provider API usage — a characteristic indicator that stolen API keys were being used to make unauthorized LLM inference calls, likely to generate output at the victim's cost.

The precautionary customer notification went to all org admins who had stored AI provider secrets within the platform, even those with no confirmed evidence of exposure, reflecting industry-standard practice for incidents with potential broad impact on stored credentials. Braintrust stated it had "not found evidence of broader exposure based on our investigation to date."

Security researcher analysis from Nudge Security noted that the range of potentially exposed credentials likely extended beyond LLM providers to include keys for other cloud services that Braintrust customers store alongside their AI credentials — potentially including SaaS and cloud platform tokens from providers used in AI application pipelines.

Exploitation Risk: What Stolen AI API Keys Enable

An attacker in possession of a stolen AI provider API key can:

  • Burn API credits at the victim organization's expense by running large-scale inference jobs (a practice sometimes called "LLM jacking").
  • Exfiltrate prompt and response logs if the provider's API also grants access to history or fine-tuning datasets.
  • Impersonate the victim organization when calling the AI provider's API, potentially accessing proprietary system prompts or model configurations.
  • Pivot to other services if the same credentials are used across platforms (credential reuse is common in API token management).
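
The first of these abuses, LLM jacking, typically shows up in billing data as a sudden jump in token volume. As a rough illustration of the kind of usage review recommended below, here is a minimal anomaly check over daily token counts; the three-sigma threshold and seven-day baseline are illustrative defaults, not provider guidance:

```python
from statistics import mean, stdev

def flag_usage_spikes(daily_tokens, threshold_sigma=3.0, baseline_days=7):
    """Flag days whose token volume exceeds the trailing baseline by
    more than `threshold_sigma` standard deviations.

    `daily_tokens` is a list of (date, token_count) pairs, oldest first.
    Thresholds here are illustrative, not provider recommendations.
    """
    flagged = []
    for i in range(baseline_days, len(daily_tokens)):
        window = [count for _, count in daily_tokens[i - baseline_days:i]]
        mu, sigma = mean(window), stdev(window)
        date, count = daily_tokens[i]
        # Guard against a perfectly flat baseline (sigma == 0).
        if count > mu + threshold_sigma * max(sigma, 1.0):
            flagged.append(date)
    return flagged
```

In practice you would feed this from whatever usage export your AI provider offers; the point is that stolen-key abuse tends to be loud in exactly this dimension.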

API key theft from AI tooling is an emerging and documented threat vector. The 2025 LLM jacking campaigns documented by Sysdig showed attackers systematically scanning for exposed Anthropic, OpenAI, and AWS Bedrock credentials in public repositories and exploiting them for inference at scale.

What You Should Do Right Now

If your organization uses Braintrust and has stored AI provider API keys in the platform, treat this as an active incident requiring immediate response:

  • Log into your Braintrust account and navigate to your organization's settings panel.
  • Delete all existing AI provider secrets currently stored in Braintrust — do not wait for confirmation that your specific keys were exposed.
  • Generate new API keys from each AI provider (OpenAI, Anthropic, Google Vertex, AWS Bedrock, etc.) and enter them into Braintrust.
  • Revoke the old keys at the provider level immediately after replacement — rotation is only effective if the old credential is deactivated.
  • Review your AI provider usage logs for the period May 1–8, 2026 for unexplained inference volume, unusual model selections, or calls from unexpected source IPs.
  • Audit all other platforms where AI provider keys are stored (CI/CD secrets, infrastructure-as-code repositories, monitoring tools) and consider whether a broader API key rotation is warranted across your AI toolchain.
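
For the final audit step, a quick way to find keys lingering in repositories or config trees is a pattern scan. The sketch below uses heuristic regexes; the prefixes (e.g. `sk-` for OpenAI-style keys, `AKIA` for AWS access key IDs) reflect publicly observed formats and may lag behind what providers actually issue, so treat hits as leads, not a complete inventory:

```python
import re
from pathlib import Path

# Heuristic patterns for common credential formats (assumptions based on
# publicly observed key prefixes; not an exhaustive or official list).
KEY_PATTERNS = {
    "openai-style": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}"),
    "aws-access-key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_text(text):
    """Return (label, match) pairs for strings that look like keys."""
    hits = []
    for label, pattern in KEY_PATTERNS.items():
        hits.extend((label, m) for m in pattern.findall(text))
    return hits

def scan_tree(root):
    """Walk a directory and report files containing likely keys."""
    findings = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                hits = scan_text(path.read_text(errors="ignore"))
            except OSError:
                continue
            if hits:
                findings[str(path)] = hits
    return findings
```

Dedicated secret scanners cover far more formats, but even a scan this simple tends to surface pasted keys in CI configs and notebooks.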

Background: The AI Credential Warehouse Problem

The Braintrust incident is an early-documented example of a structural risk emerging from the rapid adoption of AI tooling in enterprise environments: each evaluation platform, observability layer, gateway, and fine-tuning tool that gets integrated into an AI pipeline becomes another location where sensitive credentials must be stored, and therefore another potential point of compromise.

Traditional secrets management — storing API keys in HashiCorp Vault, AWS Secrets Manager, or equivalent — provides control and audit logging over credential access. AI tooling integrations often bypass this layer, asking users to paste credentials directly into a web interface for convenience. The result is credential storage dispersed across a growing ecosystem of third-party platforms, each with its own security posture, each a potential target.

This dynamic is structurally similar to the 2024 Snowflake credential theft campaign, in which attackers systematically targeted cloud data platform vendors to access the underlying customer data they hosted. In the AI context, the "data" being targeted is not just records but the authentication tokens that enable AI model access — which carry both financial cost (inference billing) and data exposure risk.

The CISA Secure by Design initiative and NIST AI Risk Management Framework both identify third-party AI service integrations as an area requiring explicit credential governance policies. Organizations building AI application stacks should maintain a live inventory of every platform that holds AI provider credentials and establish mandatory rotation schedules regardless of whether a breach has been confirmed.
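
A credential inventory with rotation tracking can be very simple. The sketch below assumes an in-house record mapping each platform to its last rotation date; the field shape and the 90-day default are illustrative policy choices, not requirements from CISA or NIST:

```python
from datetime import date, timedelta

def overdue_rotations(inventory, today, max_age_days=90):
    """Return platforms whose stored AI provider key is past its
    rotation deadline.

    `inventory` maps a platform name to the date its key was last
    rotated. The 90-day default is a common policy choice, not a
    framework mandate.
    """
    cutoff = today - timedelta(days=max_age_days)
    return sorted(
        platform for platform, rotated in inventory.items()
        if rotated < cutoff
    )
```

Run against the full list of platforms holding AI credentials, a check like this turns "rotate after a breach" into "rotate on schedule, breach or not."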

Conclusion

Braintrust's AWS account breach exposed AI provider API keys stored by enterprise customers, and the company has ordered immediate credential rotation across its entire customer base. Any organization using Braintrust should complete API key rotation today, then audit the broader AI toolchain for other credential stores that represent similar exposure. As AI evaluation and observability platforms proliferate, securing the credentials they hold requires the same rigor applied to any tier-one secrets management system.
