
Microsoft signs UK AISI + US CAISI. NHS locks GitHub. Berlin closes a decade.

Microsoft signed parallel pre-deployment review agreements with the UK AI Security Institute and the US CAISI on the same day. The UK scope went deliberately broader. The White House is drafting an executive order on AI vetting. NHS England is locking down hundreds of public GitHub repositories by Monday, citing Mythos by name. And Rise of AI closed its tenth anniversary in Berlin.


Superintelligence Europe 034
● Vol. I / Issue 034 · Thu 7 May 2026
Superintelligence
Europe
Four signals from May 6 · AISI signs Microsoft · Trump weighs vetting · NHS locks GitHub · Berlin closes a decade
● Published Thu 7 May 2026 · 06:00 CET · Covering events of Wed 6 May 2026
Pre-release
Now structural.
Both shores.
Microsoft signed parallel pre-deployment review agreements with the UK AI Security Institute and US CAISI on the same day. The White House is now drafting an executive order. NHS England is locking down its public code by May 11. Pre-release vetting moved from talking point to operational fact in 48 hours.
—  The Signal · Editor’s Note
Pre-release review became a structural fact — and Berlin closed its tenth chapter
Yesterday made operational what had only been editorial argument the day before. Microsoft signed parallel agreements with both the UK AI Security Institute and the US Center for AI Standards and Innovation, putting frontier-model pre-deployment review on a transatlantic footing for the first time. The White House is now openly drafting an executive order to formalise federal vetting in response to Anthropic’s Mythos, the model whose cyber capabilities have driven this week’s entire policy cascade. NHS England, citing Mythos by name, has ordered hundreds of public GitHub repositories closed by Monday. And in Berlin, three hundred decision-makers gathered at Humboldt Carré to mark ten years of building Europe’s AI ecosystem — and to debate, in person, the same questions every other room had been asking. The frontier-AI safety architecture is no longer a theory. It is a set of signed agreements, drafted orders, and locked repositories.
Lead · UK · AISI
01
London · Microsoft · AI Security Institute · 5–6 May
Microsoft signed with the UK AISI on the same day as CAISI — and the UK’s remit went deliberately broader
Two pre-deployment review agreements signed in parallel, one in Washington, one in London. The US scope is national-security capability evaluation. The UK scope is high-risk capability plus safeguard testing plus societal-resilience research — including emotional dependency, mental-health interactions, and the erosion of trust in human professionals. Britain has just publicly extended its frontier-safety remit further than any peer institution.
✓ VERIFIED  Microsoft (primary) · AISI Blog · 5–6 May

Microsoft on Tuesday announced new agreements with the UK AI Security Institute (AISI) and the US Center for AI Standards and Innovation (CAISI), structuring its frontier-model evaluation relationship with both governments simultaneously. With CAISI, the agreement covers pre-deployment evaluation of national-security capabilities — cyber, biosecurity, chemical weapons. The UK agreement covers high-risk capability evaluation, safeguard testing, and a third strand: research on how conversational AI systems interact with users in sensitive contexts. AISI describes the partnership as “sustained two-way collaboration between government and companies developing and deploying frontier AI”.

For Europe, the structural detail is that the UK has, on this day, deliberately positioned its frontier-safety regime as broader in scope than the US one. AISI’s remit now publicly extends into societal-resilience work — areas including emotional dependency on AI systems, mental-health interactions, and the erosion of trust in human professionals such as doctors and therapists. These have been recognised research priorities for AI safety institutes globally, but UK and US agencies have moved cautiously into them. Anchoring this work in a Microsoft commercial partnership puts it on a different footing.

The transatlantic alignment is now visible. CAISI signed Tuesday with Google DeepMind, Microsoft, and xAI, building on its 2024 partnerships with OpenAI and Anthropic, both since renegotiated. AISI’s separate agreement makes Microsoft the largest commercial counterparty Britain has signed. The European Union has neither an institutional equivalent nor a comparable agreement with any frontier lab. The AI Act regulates how AI is deployed inside the Single Market. It does not give Brussels access to the models before they are deployed. London does have that access. Washington does have that access. Brussels does not. That gap is now operationally visible across two consecutive days of formal announcements.

2 inst.
0 in EU
UK AISI · US CAISI · Pre-deployment review institutions
No EU-level equivalent. AISI scope: capability + safeguard + societal resilience. CAISI scope: cyber, bio, chemical national security.
Natasha Crampton · Chief Responsible AI Officer, Microsoft · 5 May
“Testing for national security and large-scale public safety risks necessarily must be a collaborative endeavor with governments. This type of testing depends on deep technical, scientific, and national security expertise that is uniquely held by institutions like CAISI in the US and AISI in the UK.”
Washington · Executive Order · Mythos
02
Washington · White House AI Working Group · NYT · 4–6 May
Trump is now drafting an executive order to vet AI models before public release — a fifteen-month policy reversal driven by a single model
On January 20, 2025, his first day back in office, Donald Trump revoked Biden’s 2023 executive order requiring AI safety reporting. Yesterday it became clear he is preparing one of his own. The catalyst, according to NYT and Reuters reporting, is Anthropic’s Mythos. The structural model under discussion mirrors the UK AISI framework.
✓ VERIFIED  NYT (primary) · Reuters · CIO · Axios · WSJ · 4–5 May

According to reporting first published in the New York Times and confirmed by Reuters, the Wall Street Journal, and Axios, Trump administration officials are now actively considering an executive order that would create a federal review process for new artificial-intelligence models before they reach the public. A White House AI working group has been convened. Tech executives from OpenAI, Google, and Anthropic have been briefed. The catalyst, on the record, is Anthropic’s Claude Mythos Preview — the same model that triggered the Eurogroup’s access demands on Monday.

The reversal is sharp by any measure. On January 20, 2025, Trump revoked Biden’s October 2023 executive order, which had used the Defense Production Act to require developers of high-risk AI systems to share safety-test results with the federal government before deployment. Three days later he issued his own order, “Removing Barriers to American Leadership in Artificial Intelligence”, signalling deregulation as the policy frame. In June 2025, his administration renamed the AI Safety Institute to the Center for AI Standards and Innovation. Commerce Secretary Howard Lutnick described the rebrand as a repudiation of safety being “used under the guise of national security”. The new discussions reverse roughly fifteen months of deregulatory posture, in response to a single model.

For Europe, the implications are structural. A US pre-release vetting regime would close part of the regulatory asymmetry that has been visible all week — the AI Act regulates deployment, but the most consequential decisions about frontier models have happened upstream of deployment. The structural model under discussion at the White House, according to multiple reports, deliberately mirrors the UK AISI framework. If a US executive order is signed, the world’s three principal frontier-AI safety regimes — CAISI, AISI, and a still-to-be-named US successor framework — will sit in the United States and the United Kingdom. The European Union’s AI Act, finalised in 2024 and now weeks from its August 2 high-risk deadline, will not occupy that layer of the architecture.

15 mo
180°
Trump policy reversal · January 2025 to May 2026
Revoked Biden EO 14110 day one. Renamed AI Safety Institute to CAISI in June 2025. Now drafting pre-release review. Catalyst: Mythos.
The structural fact
If the executive order is signed, the world’s three principal frontier-AI safety regimes will be: UK AISI, US CAISI, and a US successor framework. The European Union, with the AI Act roughly twelve weeks from its high-risk deadline, will not occupy that layer of the architecture. The asymmetry is structural, not accidental.
Berlin · Rise of AI · Decade
03
Berlin · Humboldt Carré · Rise of AI 10th Anniversary · 6 May
Rise of AI closed its tenth anniversary at Humboldt Carré — a decade of European AI, debated in person while the rest of the week happened online
Three hundred decision-makers in person. Fourteen hundred online. Two stages. Forty-plus curated formats including Topic Tables, the Cognitive Lab, and the PIABO Media Lounge. Themes: technological sovereignty, trusted infrastructure, real-world implementation, regulation, long-term competitiveness.
✓ VERIFIED  Rise of AI (primary) · Eventbrite · 6 May

The Rise of AI Conference 2026 closed its tenth-anniversary edition on Wednesday at Humboldt Carré in Berlin. Founded by Fabian Westerheide in 2015 as a Singularity discussion meetup, Rise of AI has grown over a decade into one of the most consistently relevant gatherings of European AI decision-makers — deliberately capped at around 300 in-person seats, with roughly 1,400 watching online, to keep the conversations substantive rather than the audience large.

The two-stage programme — the Meta Stage for policy and business, the Applied AI Stage for practical deployment — covered four core themes: applied AI, regulation and policy, sustainable and trustworthy AI, and strengthening Europe’s AI ecosystem. Confirmed speakers included Prof. Dr. Jürgen Schmidhuber (Director of the AI Initiative at KAUST and a foundational figure in modern deep learning); Prof. Dr. Peter Sarlin (co-founder and CEO of AMD Silo AI); Dr. Irakli Beridze (Head of the UN Centre for AI and Robotics, UNICRI); and Prof. Dr. Feiyu Xu (Professor of Industry AI at the German University of Digital Science). Speakers from deepset, Cloudian, and the Berlin Senate Department for Economics and Energy joined the Applied AI Stage rotation.

The conference’s timing this year landed unusually well. The same week brought the SAP/Prior Labs €1 billion commitment, the EEA briefings on AI’s environmental footprint, the Eurogroup’s Mythos access standoff, the CAISI agreements with three frontier labs, AISI’s Microsoft partnership, and now reports of a Trump executive order on pre-release review. The Berlin convening was, in effect, the European AI ecosystem’s opportunity to debate in person what every other room this week had been debating institutionally. The continuing question, raised repeatedly across both stages, is whether Europe will be a builder or a buyer of frontier capability over the next decade — and whether the answer is set in Berlin, in Brussels, or, increasingly, in Washington and London.

10 yrs
Rise of AI 10th anniversary · Berlin · Humboldt Carré · 6 May
300 in person + 1,400 online. Meta Stage + Applied AI Stage. Schmidhuber, Sarlin, Beridze, Xu. Hosts: Fabian + Veronika Westerheide.
Ten years on
A decade ago Rise of AI was a meetup in Berlin debating whether the Singularity was near. Yesterday it convened the operators of an industry. The question for the next ten years is whether Europe will be a builder or a buyer — and whether the answer is set by the people in this room, or by the institutions in others.
UK · NHS England · Cyber
04
London · NHS England · CYBERUK 2026 · NCSC · 6 May
NHS England is closing hundreds of public GitHub repositories by May 11 — explicitly citing Mythos as the reason
Britain’s national health service is preemptively abandoning its open-source posture because of an American-built frontier model. The same day, the UK National Cyber Security Centre published its first formal position on AI cyber defence at CYBERUK 2026. The European AI sovereignty debate moved from Brussels policy talk to operational reality, on the ground.
✓ VERIFIED  Resultsense (NCSC primary) · NHS England directive · CYBERUK 2026 · 6 May

On April 29, 2026, NHS England issued an internal guidance note designated SDLC-8, ordering that all source code repositories “must be private by default” and may not be public “unless there is an explicit and exceptional need.” The compliance deadline is May 11, 2026; teams seeking exemption had to apply by May 6 — yesterday. The guidance was approved by the NHS Engineering Board. The justification, named explicitly: the risk that frontier AI models — specifically Anthropic’s Mythos — could ingest the code and reason over it for vulnerabilities. The story was broken by The Register and New Scientist; the leak source was Terence Eden, a former NHSX adviser and prominent UK open-source advocate.

An NHS England spokesperson told The Register: “We are temporarily restricting access to some NHS England source code to further strengthen cybersecurity while we assess the impact of rapid developments in AI models. We will continue to publish source code where there is a clear need.” The reaction inside the open-source community has been sharp. An open letter on Keep Things Open has now collected over 682 signatures, including former UK Health Secretary Matt Hancock, who described the policy as a “huge mistake”. Critics including Eden argue that Mythos has likely already ingested the public code — closing it now does not retract that — and that the NHS’s own service standard requires open-source publication of taxpayer-funded software. Neither the UK AI Security Institute nor the NCSC has recommended this action.

The same day, the UK’s National Cyber Security Centre (NCSC) published its first formal position on AI in cyber defence at the CYBERUK 2026 conference. Deputy chief technology officer Peter Haigh warned that AI can improve threat detection, vulnerability discovery, software security, and incident response — but that frontier tools are unreliable, hard to validate, and hard to integrate safely. “In the near term, AI is likely to expose weaknesses in organisations that have not taken appropriate steps to secure their systems,” Haigh said. The NCSC published an eight-pillar risk framework that UK enterprise security leaders can now use as the audit basis for AI defence procurement. Haigh’s remarks were delivered alongside the keynote address by Security Minister Dan Jarvis.

The week’s pattern across UK institutions is consistent. AISI signed Microsoft. NCSC published its formal AI-cyber position. NHS England is locking down its public code base. The Department for Science, Innovation and Technology has positioned the UK as the country with the most articulated frontier-safety regime outside the United States. For the European Union, the contrast is increasingly visible: the response to Mythos in Brussels has been a request for access. The response in London has been the closing of doors that frontier models could otherwise have walked through. Both are responses. They are not the same response.

SDLC-8
May 11
NHS England guidance · Issued 29 April · Mythos cited explicitly
Hundreds of repos. Engineering Board approved. Open letter: 682+ signatures incl. Matt Hancock. Neither AISI nor NCSC recommended the action.
Peter Haigh · Deputy CTO, NCSC · CYBERUK 2026 · 6 May
“In the near term, AI is likely to expose weaknesses in organisations that have not taken appropriate steps to secure their systems.”
Quote of Record
“Testing for national security and large-scale public safety risks necessarily must be a collaborative endeavor with governments.”
Natasha Crampton · Chief Responsible AI Officer, Microsoft
On The Issues blog · Announcing parallel agreements with AISI and CAISI · 5 May 2026
■  AI Tool of the Day · Daily · Mon–Fri
Pharia by Aleph Alpha
aleph-alpha.com · Sovereign LLM · Heidelberg
Editor’s pick
8.5/10
The Heidelberg-built sovereign large language model now at the centre of Europe’s most ambitious transatlantic AI consolidation
Pharia is Aleph Alpha’s family of specialised large language models, built for the sovereign deployment requirements of governments and regulated industries. Trained with European languages and tokenisers as first-class citizens. Customers include Deutsche Bank, Bosch, and SAP. As of April 24, 2026, Aleph Alpha is in the process of merging with Cohere to form a $20 billion transatlantic sovereign-AI entity, anchored in Heidelberg and Toronto, backed by the Schwarz Group with €500 million in financing. The deal makes Pharia’s positioning more, not less, strategic: it is the European model that just became the European half of a continental commercial counterweight to OpenAI and Anthropic.
Best for
EU public sectorRegulated industriesEuropean languagesSovereign deployment
Editor’s verdict
“Pharia is the European model whose strategic position has been most directly altered by this week’s news. The combined Cohere-Aleph Alpha entity will be the most credible non-American sovereign-AI commercial player in the market. The Heidelberg roots and Schwarz Group anchor make it German-built. The Cohere integration makes it transatlantic-backed. The week’s pre-deployment review architecture makes the case for it.”
Enterprise sales · Sovereign cloud (STACKIT) · No affiliation · Try it →
● Built an AI tool? Submit it for consideration
One tool featured each weekday. Editorially selected. No payment for placement.
[email protected]
● Now Running · Hard Ground · Sunday Interview Series
Building an AI company? We want to tell your story.
Every Sunday, Superintelligence Europe publishes Hard Ground — long-form interviews with founders and CEOs building in AI, anywhere in the world. The only requirement is that your story is interesting. These are not press release features. They are honest conversations about building.
For founders & CEOs
Reach out if you want your story told in an upcoming Sunday edition. Global scope, any stage, any geography.
Built an AI tool?
Submit for our daily AI Tool of the Day. Editorially selected. We do not charge for features.
● Editorial policy
Both Hard Ground and the AI Tool of the Day are 100% editorial — not sponsored, not paid, not affiliated.

One email for both: [email protected]
Watch · The Days Ahead
May 6–8
STOCKHOLM
Data Innovation Summit 2026 · Stockholm — The Nordic flagship data and AI conference. Days 2 and 3 today and tomorrow. Strong representation from Swedish enterprise AI sector.
May 11
LONDON
NHS England GitHub lockdown deadline — Hundreds of public repositories close. Watch for follow-on directives from other UK public-sector institutions citing similar risk reasoning.
