Superintelligence Europe — No. 012

Britain formally courts Anthropic for London. The Turing Institute is told to transform or lose funding. Hungary's AI deepfakes peak with a week to election. 78% of firms still unprepared for the AI Act. Plus the week's top European funding.

Superintelligence Europe · Daily Briefing No. 012 · Monday, 6 April 2026 · 06:00 CET
Everything that moved in European AI on Sunday 5 April  ·  UK · Hungary · Germany · EU · Week in Review
200 · Anthropic employees in the UK — the base the UK wants to build from
7 days · to Hungary’s election — as AI deepfakes flood the campaign
78% · of enterprises not yet compliant with EU AI Act obligations
£100M · Turing Institute funding under review — UKRI demands transformation

Sunday’s dominant story broke in the Financial Times: Britain is formally courting Anthropic for a major London expansion and potential dual stock listing — with proposals to be presented to CEO Dario Amodei during his late May visit, as the UK moves to capitalise on Anthropic’s public standoff with the US Defense Department. Elsewhere, the Alan Turing Institute — the UK’s national AI institute — was formally found “not yet satisfactory” by its main funder and told to fundamentally reorient toward national security and defence. In Hungary, with one week to election day, AI-generated deepfakes and synthetic political videos reached saturation point across social media. A Zagreb-based compliance firm put numbers on the AI Act’s readiness gap: 78 percent of enterprises still unprepared. And the TNW weekly recap closed out the most significant funding week in European AI this year.

Five stories and a week-in-review. Your Monday morning briefing starts here.

Monday’s Briefing
01 · 🇬🇧 UK courts Anthropic — London expansion + dual listing proposal
02 · 🇬🇧 Alan Turing Institute — UKRI finds “not yet satisfactory”
03 · 🇭🇺 Hungary election: AI deepfakes hit saturation, 7 days out
04 · 🇪🇺 Vision Compliance: 78% of firms unprepared for AI Act
05 · Week in Review — top European AI funding 30 Mar–5 Apr
Lead · United Kingdom · AI Policy / Geopolitics
01
Britain formally courts Anthropic for London expansion and a potential dual stock listing — capitalising on the AI company’s Pentagon standoff and its path to IPO
Sources: Financial Times · Reuters · City AM · Benzinga · 5 April 2026

The UK’s Department for Science, Innovation and Technology is circulating proposals designed to persuade Anthropic — the San Francisco-based AI company behind the Claude chatbot — to significantly expand its London presence and potentially pursue a dual stock listing on the London Stock Exchange. The plans will be presented to CEO Dario Amodei during his visit to the UK in late May, when he is scheduled to meet European customers and policymakers. Prime Minister Keir Starmer’s office has backed the effort. London Mayor Sadiq Khan has separately written to Amodei urging the company to commit more deeply to the city, calling London a “stable, proportionate and pro-innovation environment.”

The UK move is a direct play on Anthropic’s highly public dispute with the US Defense Department. The Pentagon blacklisted Anthropic as a national-security supply-chain risk after the company refused to allow its Claude AI to be used for US military surveillance operations or autonomous weapons. A San Francisco federal judge temporarily blocked the blacklisting, and Anthropic has a second lawsuit pending. The company — backed by Amazon and Alphabet, valued at around $380 billion — already employs approximately 200 people in the UK, including researchers, and counts former Prime Minister Rishi Sunak as a senior adviser. It is in preliminary discussions with Goldman Sachs, JPMorgan Chase, and Morgan Stanley about underwriting an IPO as early as October 2026. One person familiar with the DSIT proposals described a dual UK-US listing as “the dream,” while acknowledging it as unlikely. Business Secretary Peter Kyle confirmed to the FT that Anthropic is among the fast-growing companies he wants to see invest more in the UK.

“I believe that London can provide a stable, proportionate and pro-innovation environment in which this kind of AI can flourish.”

— Sadiq Khan, Mayor of London · Letter to Dario Amodei, CEO of Anthropic
Why the Pentagon Dispute Matters for Europe

Anthropic’s refusal to allow Claude to be used for military surveillance or autonomous weapons — maintaining what it calls “red lines” — is precisely the safety-first governance position that European regulators have been pushing for under the AI Act. The UK is positioning itself as the obvious landing zone for an AI company that prioritises ethical guardrails over maximum government cooperation. That framing is as much a message to Brussels as it is to Amodei.

The IPO Dimension

Anthropic’s preliminary IPO discussions — targeting October 2026 — put the UK’s interest in a dual listing in a specific context. The London Stock Exchange has failed to attract any major AI company to list in the UK. A dual Anthropic listing would be transformational for UK capital markets. It would also directly mirror the IQM Finland story: European-leaning AI companies using US capital markets while signalling alignment with European governance values.

UK Policy · United Kingdom · AI Research Institutions
02
UKRI declares the Alan Turing Institute “not yet satisfactory” — demanding a pivot to national security and defence, with a September 2026 delivery deadline
Sources: UKRI official · UKTN · digit.fyi · Early April 2026

UK Research and Innovation published the results of its independent midterm review of the Alan Turing Institute — the UK’s national institute for data science and artificial intelligence — and the verdict is sharp: overall strategic alignment and value for money are “not yet satisfactory.” The review, while acknowledging the institute’s scientific excellence and strong research foundations, found that it has failed to articulate a clear strategic purpose and has spread its resources too thinly across fragmented academic interests overlapping with other public institutions. UKRI is now demanding significant structural change and has set a September 2026 deadline for the institute to submit a transformation plan, which will then be independently assessed.

The recommended new direction is unambiguous: a “clear, single purpose focused on national resilience, security and defence.” This is the culmination of pressure that has been building since July 2025, when Technology Secretary Peter Kyle demanded an overhaul and called for the institute to focus on defence and security rather than its broader academic mandate. The chair, Doug Gurr, resigned on 1 April 2026 to become chair of the Competition and Markets Authority. His interim replacement is Vanessa Lawrence. New CEO George Williamson — previously head of His Majesty’s Government Communications Centre — starts in May 2026, deepening the defence pivot. UKRI’s funding of £100 million to the institute is explicitly conditional on the September plan being delivered.

What This Means for European AI Research

The Turing Institute was established in 2015 as a broad national centre for AI and data science research. Its pivot to a defence-and-security mandate represents a significant shift in how the UK government views the purpose of publicly funded AI research: less academic breadth, more strategic application. The institute’s flagship defence work — including Project Bluebird, building AI to manage live UK airspace — is cited as the one unambiguous success the review acknowledged. The broader research portfolio is now effectively under review. For European peers watching how national AI institutes define their missions in 2026, the Turing’s restructuring is the clearest signal yet that “national AI strategy” and “defence AI strategy” are converging into a single category.

Disinformation · Hungary · AI + Democratic Integrity
03
Seven days from Hungary’s election, AI-generated deepfakes have reached saturation. Fidesz’s synthetic video campaign is now Europe’s most documented case of AI-powered election interference.
Sources: BBC · Reuters/Complete AI Training · Carnegie Endowment · EU DisinfoLab · Ongoing / 5 April 2026

With Hungary’s parliamentary election on April 12, the country is now one week from what Carnegie Endowment researchers describe as the most consequential vote in fifteen years of Orbán rule — and Europe’s first election in which AI-generated synthetic media has been deployed at industrial scale. Prime Minister Viktor Orbán’s ruling Fidesz party published an AI-generated video in February depicting a Hungarian soldier in uniform, kneeling blindfolded on a battlefield, being executed — captioned to suggest this is the future if opposition leader Péter Magyar wins and is “dragged into the Ukraine war.” Orbán’s chief of staff, Gergely Gulyás, did not deny the video was AI-generated when asked at a press briefing. It was confirmed by Reuters to have been made using Google’s AI models.

That video is not isolated. The pro-Orbán National Resistance Movement has spent over €1.5 million on unlabelled AI-generated videos targeting Magyar on TikTok, Facebook, and Instagram. A separate AI-generated video depicted a fake phone call between European Commission President Ursula von der Leyen and Magyar, ostensibly discussing financial aid to Ukraine. EU DisinfoLab has documented a coordinated foreign information manipulation campaign on TikTok linked to Russia’s “Matryoshka” operation, using synthetic news anchors and deepfake celebrity endorsements to amplify pro-Orbán narratives. Despite the campaign, Magyar’s Tisza party leads Fidesz by 8–12 points in most polls. His posts on social media receive twice the engagement of Orbán’s.

The AI Act’s Real-World Test

Hungary is now the most live test of whether the EU’s existing digital regulations — the Digital Services Act, the AI Act’s transparency requirements, and the EU’s Rapid Alert System — can respond to AI-driven election interference in real time. The answer emerging from this election is: not yet. Enforcement is improving but the volume and speed of AI-generated synthetic media are outpacing institutional response capacity. The AI Act’s watermarking obligations for AI-generated content — which require visible labelling — are not yet in force and would not have prevented unlabelled synthetic media from circulating. This is the regulatory gap that April’s Omnibus trilogue must address.

What Happens After April 12

If Fidesz wins, researchers at Political Capital predict the AI disinformation infrastructure will remain in place — normalised and expanded for use beyond the election. If Magyar wins, Hungary becomes a test case for whether a government that has spent two years building AI propaganda capacity can be dismantled by its successor. Either way, the Matryoshka network will likely pivot to the next European electoral target. This briefing will track April 12 and its immediate aftermath.

Regulation · EU-wide · AI Act Compliance
04
78% of European enterprises have taken no meaningful steps toward AI Act compliance — with August 2026 enforcement approaching and the Omnibus clock running
Sources: Vision Compliance / National Law Review · Published 1 April 2026 · Covered 5 April 2026

Vision Compliance — a Zagreb-based European regulatory advisory firm — published its 2026 EU AI Act Readiness Analysis on April 1, with full coverage circulating across European policy and compliance communities on Sunday. The headline finding from assessments conducted across financial services, healthcare, technology, manufacturing, energy, retail, telecommunications, and transport: 78 percent of organisations have not taken meaningful steps toward AI Act compliance. The most common failure mode is awareness without action — companies know the regulation exists, but very few understand what it actually requires at the operational level.

The compliance gaps Vision Compliance documents are structural, not superficial. Most organisations lack AI system inventories entirely: without knowing which AI systems they operate, risk classification is impossible. The AI Act introduces requirements that go significantly beyond data protection — including conformity assessment procedures, post-market monitoring obligations, and technical documentation standards — that are entirely new territory for even GDPR-mature compliance teams. The report notes that GDPR-compliant organisations are better positioned on data governance, but that advantage does not extend to the Act’s new requirements. One finding stands out: the August 2, 2026 enforcement deadline for high-risk AI systems is approaching whether the Digital Omnibus’s proposed extensions are agreed by trilogue or not. Organisations that assume the extension will materialise are taking regulatory risk.

The Compliance Gap in Numbers

78% of firms: no meaningful steps toward compliance. Most common gap: no AI system inventory. Second most common: treating AI like traditional software. Third: no documentation infrastructure for Annex IV conformity assessments. August 2, 2026: the date when high-risk obligations come into force if the Omnibus is not agreed in time. April 28: the target date for that Omnibus agreement. 22 days between now and the decision that determines whether organisations get more time.

Vision Compliance’s Context

The firm is based in Zagreb, Croatia — the same city where the EU’s first licensed commercial robotaxi launched in February. Vision Compliance specialises in AI Act compliance advisory from initial risk classification through governance framework implementation. Its finding that GDPR experience provides a partial but insufficient compliance baseline is the most practically useful insight for European compliance teams currently assessing their readiness position.

Week in Review · EU-wide · Top Funding 30 March – 5 April 2026
05
The most significant funding week in European AI this year: anchored by Mistral, closed by a UK unicorn and a German TrustTech round

TNW’s Editor-in-Chief framed the week as evidence of a single structural instinct running through European capital: “build the infrastructure layer first.” The week ran from Mistral’s $830M debt raise at one end to a €1.1M workpod pre-seed at the other — a useful reminder of how wide the band of European ambition now runs. The confirmed highlights:

Top Verified Funding · 30 March – 5 April 2026
🇫🇷
Mistral AI
$830M debt raise
Paris, France · 30 March

First-ever debt raise to fund 13,800 Nvidia GB300 GPUs for a data centre at Bruyères-le-Châtel, south of Paris. Operational Q2 2026. Seven banks including BNP Paribas, Crédit Agricole, and HSBC. Europe’s most capitalised LLM company; $2.9B total funding; $1B ARR target by year end. Anchor story of the week.

🇫🇮
IQM Quantum Computers
€50M from BlackRock
Espoo, Finland · 30 March

Aalto University spin-out securing financing from the world’s largest asset manager ahead of a planned US SPAC listing at a $1.8B implied valuation, with a dual Helsinki listing under consideration. Europe’s quantum IPO candidate.

🇬🇧
9fin
$170M Series C · Unicorn
London, UK · 31 March

AI-native debt market intelligence platform, valued at $1.3B. Round led by HarbourVest, with CPP Investments, Redalpine, and Seedcamp. More than 300 clients across banks, asset managers, and law firms; 100% ARR growth for multiple consecutive years. The British Business Bank invested $20M. The company is applying AI to modernise the $145 trillion global debt market.

🇫🇷
Kestra
$25M Series A
Paris, France · Week of 30 Mar

Open-source orchestration platform for data, AI, infrastructure, and business workflows. Round led by RTP Global. Enterprise revenue grew 25x in 18 months; more than 2 billion workflows executed in 2025 across 30,000+ organisations. Total funding: €36M.

🇩🇪
Penemue
€1.7M · TrustTech AI
Freiburg, Germany · 2–3 April

AI platform detecting hate speech, digital violence, and disinformation across 89 languages in real time. Customers include Bundesliga clubs, federal politicians, police, and public prosecutors. The Hungary AI deepfake story and Penemue’s raise arrive in the same week — a coincidence that tells its own story about where European AI investment is going.

Signal · Verified Voices — Sunday, 5 April

Credible accounts and publications driving Sunday’s European AI conversation. Filtered for genuine signal.

@FinancialTimes
FT Exclusive · UK / Anthropic

The Anthropic story broke in the FT on Sunday morning and was picked up within hours by Reuters, City AM, Benzinga, and numerous policy-focused newsletters. The framing that travelled furthest: London positioning itself as an alternative AI hub for companies that value “ethical guardrails” — explicitly contrasting the city’s governance environment with Washington’s. The story will shape the Amodei visit conversation in late May.

@UKRI_News
UKRI · Alan Turing Institute

The Turing review’s “not yet satisfactory” verdict generated significant discussion among UK AI researchers and academics who see the institute’s pivot to defence as a narrowing of publicly funded AI research’s scope. The contrast with the Anthropic story is sharp: on the same Sunday, the UK government is simultaneously courting a private AI company that rejected military applications and restructuring its own public AI institute around defence as its primary mandate.

@BBCNews / @EUDisinfoLab
Hungary · AI Deepfakes

The BBC’s coverage of AI deepfakes in the Hungarian election ran alongside EU DisinfoLab’s documentation of the Matryoshka coordinated campaign on TikTok. The editorial resonance on Sunday was clear: European AI is simultaneously being used to build sovereign compute, detect hate speech in 89 languages, and execute the continent’s most sophisticated AI-powered disinformation campaign in a single election cycle. All three are real. All three are happening in the same week.

AI Agents Are Reading Your Docs. Are You Ready?

Last month, 48% of visitors to documentation sites across Mintlify were AI agents—not humans.

Claude Code, Cursor, and other coding agents are becoming the actual customers reading your docs. And they read everything.

This changes what good documentation means. Humans skim and forgive gaps. Agents methodically check every endpoint, read every guide, and compare you against alternatives with zero fatigue.

Your docs aren't just helping users anymore—they're your product's first interview with the machines deciding whether to recommend you.

That means:
→ Clear schema markup so agents can parse your content
→ Real benchmarks, not marketing fluff
→ Open endpoints agents can actually test
→ Honest comparisons that emphasize strengths without hype

In the agentic world, documentation becomes 10x more important. Companies that make their products machine-understandable will win distribution through AI.
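The schema-markup point above can be made concrete. Here is a minimal sketch in Python of what "machine-understandable" looks like in practice: a function that emits a schema.org `TechArticle` JSON-LD block for a docs page. The property set and the example URL are illustrative assumptions, not a Mintlify requirement.

```python
import json


def docs_jsonld(title: str, description: str, url: str) -> str:
    """Build a schema.org TechArticle JSON-LD snippet for a docs page.

    The chosen properties are illustrative; agents and crawlers
    generally key off @context/@type plus basic descriptive fields.
    """
    payload = {
        "@context": "https://schema.org",
        "@type": "TechArticle",
        "headline": title,
        "description": description,
        "url": url,
    }
    # Wrap the payload in the script tag agents look for in page <head>.
    return (
        '<script type="application/ld+json">\n'
        + json.dumps(payload, indent=2)
        + "\n</script>"
    )


print(docs_jsonld(
    "Payments API Reference",
    "Endpoints, auth, and error codes for the Payments API.",
    "https://docs.example.com/payments",  # hypothetical docs URL
))
```

A human reader skims past a block like this; an agent parses it to decide what the page is and whether to recommend the product, which is the asymmetry the section describes.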