NHS England Orders GitHub Repos Private Over AI Vulnerability Analysis Fears

NHS England is ordering its technology leaders to lock down hundreds of public GitHub repositories by May 11, 2026, citing concerns about the capabilities of Anthropic's AI model Mythos to identify security vulnerabilities in open-source code at a scale and speed previously impossible with human teams. The directive has ignited a debate among security professionals about whether security-by-obscurity — the practice of hiding code to prevent analysis — offers meaningful protection against AI-assisted vulnerability discovery, or whether it represents a category error that sacrifices the benefits of open development without addressing the underlying risk.

The NHS Directive

According to The Register's reporting, internal NHS England guidance shared across the organization's technology leadership instructs that GitHub repositories must be converted from public to private by May 11, 2026. The guidance cites Anthropic's Mythos, the company's most capable AI model, as the specific capability driving the decision: Anthropic describes Mythos as able to rapidly identify security vulnerabilities that skilled human teams would miss, and it previously discovered thousands of unknown zero-day flaws across major operating systems and web browsers, including a 27-year-old vulnerability in OpenBSD.

The NHS runs hundreds of open-source projects on GitHub, covering internal tooling, documentation, architecture diagrams, data processing scripts, and code for patient-facing applications. The diversity of that repository catalog is significant context for evaluating the directive: the vast majority of NHS repositories contain code that is architecturally visible, non-sensitive, or already widely analyzed by human security researchers.

The Security Case For and Against

The argument for closing repositories: If a sophisticated AI model can systematically analyze an entire codebase in hours and produce a structured list of exploitable vulnerabilities — a capability that would take a skilled human security team weeks — then making that codebase publicly accessible provides meaningful uplift to potential attackers. For repositories containing code that runs on patient-facing systems or internal NHS infrastructure, reducing that AI-analysis attack surface has some merit.

The argument against: Terence Eden, former head of open technology at NHSX (the digital innovation unit of NHS England), was publicly critical of the move. Eden's argument, reported across multiple outlets and confirmed by security researchers, has two components:

  • Archived copies remain. Any code that has been public for more than a few days has likely been indexed, archived by Wayback Machine, mirrored by other developers, or analyzed by academic researchers. Making a repository private does not erase those copies. An AI model with access to web-crawled data or code archives can still analyze the historical versions of the code.
  • Concealment is not security. The bugs in the code still exist whether or not the source is visible. Hiding the source prevents transparent community security review but does not patch the vulnerabilities. If an attacker has a copy of the repository (from any cached source) and NHS England does not, the attacker can map vulnerabilities that the NHS is no longer publicly warned about by the open-source community.

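The archived-copies point is directly checkable. The Internet Archive exposes a CDX API that lists every capture it holds for a URL pattern; a query such as `https://web.archive.org/cdx/search/cdx?url=github.com/<org>/<repo>&output=json` returns one row per snapshot. The sketch below parses such a response offline; the repository path, digests, and timestamps in the sample data are illustrative assumptions, not real NHS captures:

```python
import json

# Canned CDX-style JSON response: the first row is the header, each
# subsequent row is one archived capture. Values here are made up for
# illustration; a live query would return real snapshot metadata.
SAMPLE_CDX_RESPONSE = json.dumps([
    ["urlkey", "timestamp", "original", "mimetype", "statuscode", "digest", "length"],
    ["com,github)/example-org/example-repo", "20230104120000",
     "https://github.com/example-org/example-repo", "text/html", "200", "ABC123", "51234"],
    ["com,github)/example-org/example-repo", "20240615093000",
     "https://github.com/example-org/example-repo", "text/html", "200", "DEF456", "52110"],
])

def snapshot_timestamps(cdx_json: str) -> list[str]:
    """Return the capture timestamps recorded in a CDX JSON response."""
    rows = json.loads(cdx_json)
    if not rows:
        return []
    header, *captures = rows
    ts_index = header.index("timestamp")
    return [row[ts_index] for row in captures]

# Two captures survive regardless of whether the live repository is
# later made private.
print(snapshot_timestamps(SAMPLE_CDX_RESPONSE))
```

Any capture listed this way remains analyzable after the live repository goes private, which is precisely Eden's point.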
Security researcher consensus, as captured in the Neowin coverage, aligns with Eden: "the argument that concealment equals security is a category error." The correct response to AI-accelerated vulnerability discovery is not to hide code; it is to implement the same AI-assisted analysis defensively — to find and fix vulnerabilities before attackers do.

Anthropic Mythos and AI-Powered Vulnerability Discovery

Anthropic's Mythos represents a genuine step change in automated vulnerability analysis capabilities. Reports indicate that Mythos was able to discover thousands of previously unknown zero-day vulnerabilities (security flaws that were not publicly known — the term "zero-day" refers to the fact that defenders have had zero days to prepare a fix) across every major operating system and web browser, including a 27-year-old bug in OpenBSD, by systematically analyzing code at a depth and breadth beyond human team capacity.

Mythos is currently restricted to Forrester Project Glasswing — a controlled access program available only to select organizations. This restriction limits current real-world exposure. However, the capability trajectory is clear: models with comparable code analysis capabilities will become broadly available, whether through Anthropic's commercial release path or through open-weights models fine-tuned for security research (which already exist, such as the growing ecosystem of security-focused LLMs built on open-weights base models).

The NHS's concern is therefore not irrational. The window during which only select organizations have access to Mythos-class capabilities is finite. The question is whether closing GitHub repositories is an effective response to that window, or whether it delays rather than prevents the risk while sacrificing open-source development benefits.

Implications for Healthcare and Public Sector Open Source

The NHS operates some of the largest digital health infrastructure in the world. The NHS App, Patient Record systems, and the underlying platform code for services like NHS 111 (the UK's non-emergency medical helpline) and NHS 999 dispatch systems represent an attack surface of significant national security consequence.

Open-source development of these systems provides benefits that are well documented: external security researchers regularly audit open codebases and report vulnerabilities responsibly; developers outside the NHS contribute bug fixes and improvements; and public accountability is served by transparent code. The NHS has benefited from this dynamic — bugs in NHS systems have been identified and reported by the open-source community on multiple occasions.

Closing repositories inverts this dynamic. The code becomes analyzable only by those with motivation and resources to obtain it (attackers with cached copies or crawled data) while the legitimate defensive community loses visibility. This asymmetry favors the attacker.

The Computing.co.uk analysis notes that very few of the hundreds of NHS open-source repositories contain anything genuinely sensitive. The directive appears to treat all repositories as equivalent regardless of content, which maximizes the cost (lost open-source benefit) relative to the security gain.

What Organizations Should Actually Do

The correct response to AI-accelerated vulnerability discovery is not obscurity — it is AI-accelerated defense:

  • Use AI analysis tools defensively. Deploy code analysis tools (GitHub Advanced Security, Semgrep, Snyk, or commercial AI-powered static application security testing platforms) against your own repositories before attackers do. Find and fix your vulnerabilities under the assumption that sophisticated adversaries have already analyzed your code.
  • Maintain a responsible disclosure program. If your code is publicly accessible, make it easy for researchers to report vulnerabilities. A well-run disclosure program turns the open-source community into a defensive asset rather than a liability.
  • Apply risk tiering to repository visibility decisions. Not all code carries the same risk. Authentication logic, cryptographic implementations, and patient-data handling code warrant closer scrutiny and potentially restricted access. Documentation, architecture diagrams, and internal tooling do not.
  • Patch vulnerabilities found by AI tools. Whether discovered by Mythos, Semgrep, or a human researcher, a vulnerability in code is a vulnerability regardless of who knows about it. The response to AI-accelerated discovery must be AI-accelerated remediation.
  • Engage with the open-source security community. Programs like GitHub Security Advisories and coordinated vulnerability disclosure (CVD) frameworks allow organizations to receive and act on vulnerability reports without the code being publicly exploited during the remediation window.
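The risk-tiering step above can be sketched as a crude first-pass triage over repository contents. The tier names and keyword markers below are illustrative assumptions, not NHS policy; a real program would combine this with manual review:

```python
# Hypothetical keyword markers for a first-pass triage. Any repo whose
# file paths touch these areas gets flagged for human review before a
# visibility decision; everything else stays open by default.
HIGH_RISK_MARKERS = {"auth", "crypto", "patient", "credentials", "token"}

def tier_for(repo_paths: list[str]) -> str:
    """Crude triage: flag repos whose file paths mention sensitive areas."""
    lowered = " ".join(repo_paths).lower()
    if any(marker in lowered for marker in HIGH_RISK_MARKERS):
        return "restricted-review"   # candidate for limited visibility
    return "open"                    # keep public; rely on disclosure program

print(tier_for(["src/auth/login.py", "docs/README.md"]))     # restricted-review
print(tier_for(["docs/architecture.md", "scripts/etl.py"]))  # open
```

The point of the sketch is the shape of the decision, not the keyword list: a blanket make-everything-private directive is what you get when this per-repository step is skipped.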

Background: Understanding the Risk

Security by obscurity (the practice of treating the secrecy of a system's design or implementation as a security control) has a long and consistent history of failure as a standalone strategy. The fundamental problem is that obscurity is binary: once broken (through reverse engineering, code leakage, or archived copy analysis), it provides zero residual protection. By contrast, real security controls — authentication, input validation, least-privilege access, encryption — remain effective even when an attacker has full knowledge of the system.
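This is Kerckhoffs's principle in practice, and a standard message-authentication check makes it concrete: the HMAC-SHA256 algorithm below is fully public, yet verification stays secure as long as the key stays secret. The key and messages are illustrative:

```python
import hashlib
import hmac

# The algorithm is public knowledge; only the key is secret. An attacker
# who has read this entire source file still cannot forge a valid tag.
KEY = b"example-secret-key"  # illustrative; real keys come from a secret store

def tag(message: bytes) -> str:
    """Compute the HMAC-SHA256 tag for a message under the shared key."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    """Check a tag using a constant-time comparison."""
    return hmac.compare_digest(tag(message), received_tag)

t = tag(b"patient record update")
print(verify(b"patient record update", t))   # True
print(verify(b"tampered record", t))         # False
```

Hiding the verification code would add nothing here; the control works because of the key, not because of obscurity.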

AI-powered vulnerability discovery accelerates the timeline for breaking obscurity, but it does not change the structural argument against relying on it. If NHS code contains exploitable vulnerabilities, those vulnerabilities are dangerous regardless of whether the source is public. The difference between public and private source is how quickly a defender can learn about and fix those vulnerabilities — and private source means the defender community is smaller, not the attacker community.

Conclusion

NHS England's decision to close-source hundreds of GitHub repositories by May 11, 2026, is a policy response to a real capability concern — Anthropic Mythos represents a genuine acceleration in AI-assisted vulnerability analysis. But the response conflates code visibility with code security. The more durable approach is defensive use of the same AI tools: systematically scanning NHS repositories for vulnerabilities, fixing them, and maintaining responsible disclosure programs that make the open-source community a defensive partner rather than an excluded audience. As Terence Eden and multiple security researchers have noted, hiding code that has already been public provides marginal protection while sacrificing the proven benefits of open development.
