Superintelligence Europe — No. 012
Britain formally courts Anthropic for London. The Turing Institute is told to transform or lose funding. Hungary's AI deepfakes peak with a week to election. 78% of firms still unprepared for the AI Act. Plus the week's top European funding.

Everything that moved in European AI on Sunday 5 April · UK · Hungary · Germany · EU · Week in Review
Issue No. 012 — Monday, 6 April 2026

Sunday’s dominant story broke in the Financial Times: Britain is formally courting Anthropic for a major London expansion and potential dual stock listing — with proposals to be presented to CEO Dario Amodei during his late-May visit, as the UK moves to capitalise on Anthropic’s public standoff with the US Defense Department. Elsewhere, the Alan Turing Institute — the UK’s national AI institute — was formally found “not yet satisfactory” by its main funder and told to fundamentally reorient toward national security and defence. In Hungary, with one week to election day, AI-generated deepfakes and synthetic political videos reached saturation point across social media. A Zagreb-based compliance firm put numbers on the AI Act’s readiness gap: 78 percent of enterprises still unprepared. And the TNW weekly recap closed out the most significant funding week in European AI this year. Five stories and a week in review. Your Monday morning briefing starts here.
Lead · United Kingdom · AI Policy / Geopolitics

01 · Britain formally courts Anthropic for London expansion and a potential dual stock listing — capitalising on the AI company’s Pentagon standoff and its path to IPO

The UK’s Department for Science, Innovation and Technology is circulating proposals designed to persuade Anthropic — the San Francisco-based AI company behind the Claude chatbot — to significantly expand its London presence and potentially pursue a dual stock listing on the London Stock Exchange. The plans will be presented to CEO Dario Amodei during his visit to the UK in late May, when he is scheduled to meet European customers and policymakers. Prime Minister Keir Starmer’s office has backed the effort, and London Mayor Sadiq Khan has separately written to Amodei urging the company to commit more deeply to the city, calling London a “stable, proportionate and pro-innovation environment.”

The UK move is a direct play on Anthropic’s highly public dispute with the US Defense Department. The Pentagon blacklisted Anthropic as a national-security supply-chain risk after the company refused to allow its Claude AI to be used for US military surveillance operations or autonomous weapons. A San Francisco federal judge temporarily blocked the blacklisting, and Anthropic has a second lawsuit pending.

The company — backed by Amazon and Alphabet, and valued at around $380 billion — already employs approximately 200 people in the UK, including researchers, and counts former Prime Minister Rishi Sunak as a senior adviser. It is in preliminary discussions with Goldman Sachs, JPMorgan Chase and Morgan Stanley about underwriting an IPO as early as October 2026. One person familiar with the DSIT proposals described a dual UK-US listing as “the dream,” while acknowledging it is unlikely. Business Secretary Peter Kyle confirmed to the FT that Anthropic is among the fast-growing companies he wants to see invest more in the UK.
“I believe that London can provide a stable, proportionate and pro-innovation environment in which this kind of AI can flourish.” — Sadiq Khan, Mayor of London · Letter to Dario Amodei, CEO of Anthropic
UK Policy · United Kingdom · AI Research Institutions

02 · UKRI declares the Alan Turing Institute “not yet satisfactory” — demanding a pivot to national security and defence, with a September 2026 delivery deadline

UK Research and Innovation published the results of its independent midterm review of the Alan Turing Institute — the UK’s national institute for data science and artificial intelligence — and the verdict is sharp: overall strategic alignment and value for money are “not yet satisfactory.” The review, while acknowledging the institute’s scientific excellence and strong research foundations, found that it has failed to articulate a clear strategic purpose and has spread its resources too thinly across fragmented academic interests that overlap with other public institutions. UKRI is now demanding significant structural change and has set a September 2026 deadline for the institute to submit a transformation plan, which will then be independently assessed. The recommended new direction is unambiguous: a “clear, single purpose focused on national resilience, security and defence.”

This is the culmination of pressure that has been building since July 2025, when Technology Secretary Peter Kyle demanded an overhaul and called for the institute to focus on defence and security rather than its broader academic mandate. The chair, Doug Gurr, resigned on 1 April 2026 to become chair of the Competition and Markets Authority; his interim replacement is Vanessa Lawrence. New CEO George Williamson — previously head of His Majesty’s Government Communications Centre — starts in May 2026, deepening the defence pivot. UKRI’s £100 million in funding for the institute is explicitly conditional on the September plan being delivered.

What This Means for European AI Research

The Turing Institute was established in 2015 as a broad national centre for AI and data science research. Its pivot to a defence-and-security mandate represents a significant shift in how the UK government views the purpose of publicly funded AI research: less academic breadth, more strategic application. The institute’s flagship defence work — including Project Bluebird, which is building AI to manage live UK airspace — is cited as the one unambiguous success the review acknowledged. The broader research portfolio is now effectively under review. For European peers watching how national AI institutes define their missions in 2026, the Turing’s restructuring is the clearest signal yet that “national AI strategy” and “defence AI strategy” are converging into a single category.
Disinformation · Hungary · AI + Democratic Integrity

03 · Seven days from Hungary’s election, AI-generated deepfakes have reached saturation. Fidesz’s synthetic video campaign is now Europe’s most documented case of AI-powered election interference.

Sources: BBC · Reuters/Complete AI Training · Carnegie Endowment · EU DisinfoLab · Ongoing / 5 April 2026

With Hungary’s parliamentary election on 12 April, the country is one week from what Carnegie Endowment researchers describe as the most consequential vote in fifteen years of Orbán rule — and Europe’s first election in which AI-generated synthetic media has been deployed at industrial scale. Prime Minister Viktor Orbán’s ruling Fidesz party published an AI-generated video in February depicting a Hungarian soldier in uniform, kneeling blindfolded on a battlefield, being executed — captioned to suggest this is the future if opposition leader Péter Magyar wins and Hungary is “dragged into the Ukraine war.” Orbán’s chief of staff, Gergely Gulyás, did not deny the video was AI-generated when asked at a press briefing, and Reuters confirmed it was made using Google’s AI models.

That video is not isolated. The pro-Orbán National Resistance Movement has spent over €1.5 million on unlabelled AI-generated videos targeting Magyar on TikTok, Facebook and Instagram. A separate AI-generated video depicted a fake phone call between European Commission President Ursula von der Leyen and Magyar, ostensibly discussing financial aid to Ukraine. EU DisinfoLab has documented a coordinated foreign information manipulation campaign on TikTok linked to Russia’s “Matryoshka” operation, using synthetic news anchors and deepfake celebrity endorsements to amplify pro-Orbán narratives. Despite the campaign, Magyar’s Tisza party leads Fidesz by 8–12 points in most polls, and his social media posts receive twice the engagement of Orbán’s.

The AI Act’s Real-World Test

Hungary is now the clearest live test of whether the EU’s existing digital regulations — the Digital Services Act, the AI Act’s transparency requirements and the EU’s Rapid Alert System — can respond to AI-driven election interference in real time. The answer emerging from this election is: not yet. Enforcement is improving, but the volume and speed of AI-generated synthetic media are outpacing institutional response capacity. The AI Act’s watermarking obligations for AI-generated content — which require visible labelling — are not yet in force and would not have prevented unlabelled synthetic media from circulating. This is the regulatory gap that April’s Omnibus trilogue must address.

What Happens After April 12

If Fidesz wins, researchers at Political Capital predict the AI disinformation infrastructure will remain in place — normalised and expanded for use beyond the election. If Magyar wins, Hungary becomes a test case for whether AI propaganda capacity that a government spent two years building can be dismantled by its successor. Either way, the Matryoshka network will likely pivot to the next European electoral target. This briefing will track April 12 and its immediate aftermath.
Regulation · EU-wide · AI Act Compliance

04 · 78% of European enterprises have taken no meaningful steps toward AI Act compliance — with August 2026 enforcement approaching and the Omnibus clock running

Sources: Vision Compliance / National Law Review · Published 1 April 2026 · Covered 5 April 2026

Vision Compliance — a Zagreb-based European regulatory advisory firm — published its 2026 EU AI Act Readiness Analysis on 1 April, with full coverage circulating across European policy and compliance communities on Sunday. The headline finding, from assessments conducted across financial services, healthcare, technology, manufacturing, energy, retail, telecommunications and transport: 78 percent of organisations have not taken meaningful steps toward AI Act compliance. The most common failure mode is awareness without action — companies know the regulation exists, but very few understand what it actually requires at the operational level.

The compliance gaps Vision Compliance documents are structural, not superficial. Most organisations lack AI system inventories entirely: without knowing which AI systems they operate, risk classification is impossible. The AI Act introduces requirements that go significantly beyond data protection — including conformity assessment procedures, post-market monitoring obligations and technical documentation standards — that are new territory even for GDPR-mature compliance teams. The report notes that GDPR-compliant organisations are better positioned on data governance, but that advantage does not extend to the Act’s new requirements. One finding stands out: the 2 August 2026 enforcement deadline for high-risk AI systems is approaching whether or not the Digital Omnibus’s proposed extensions are agreed in trilogue. Organisations that assume the extension will materialise are taking regulatory risk.
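The inventory-first point can be made concrete. Below is a minimal sketch of what "inventory, then classify" looks like in practice: the four risk tiers are the AI Act's own categories, but the inventory fields, the subset of high-risk use areas, and the classification helper are purely illustrative simplifications, not legal guidance or any firm's actual methodology.

```python
from dataclasses import dataclass, field

# The AI Act's risk tiers; classify() returns one of these. The tier names
# are real, but the logic below is a deliberately simplified illustration.
TIERS = ("prohibited", "high-risk", "limited-risk", "minimal-risk")

# Annex III of the Act lists high-risk use areas; this subset is illustrative.
HIGH_RISK_AREAS = {"employment", "credit-scoring", "critical-infrastructure",
                   "law-enforcement", "education"}

@dataclass
class AISystem:
    name: str
    use_area: str                       # business function the system serves
    interacts_with_people: bool = False
    docs: list[str] = field(default_factory=list)  # technical documentation refs

def classify(system: AISystem) -> str:
    """Very rough first-pass risk tier; real classification needs legal review."""
    if system.use_area in HIGH_RISK_AREAS:
        return "high-risk"
    if system.interacts_with_people:
        return "limited-risk"           # transparency obligations, e.g. chatbots
    return "minimal-risk"

# Without an inventory like this, the classification step has nothing to run on.
inventory = [
    AISystem("cv-screening", "employment"),
    AISystem("support-chatbot", "customer-service", interacts_with_people=True),
    AISystem("log-anomaly-detector", "it-operations"),
]

for s in inventory:
    print(f"{s.name}: {classify(s)}")
```

The point of the sketch is the dependency order the report describes: conformity assessment, post-market monitoring and documentation obligations all attach to a system's risk tier, and the tier cannot be assigned until the system appears in an inventory.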
Week in Review · EU-wide · Top Funding · 30 March – 5 April 2026

05 · The most significant funding week in European AI this year: anchored by Mistral, closed by a UK unicorn and a German TrustTech round

TNW’s Editor-in-Chief framed the week as evidence of a single structural instinct running through European capital: “build the infrastructure layer first.” The week ran from Mistral’s $830M debt raise at one end to a €1.1M workpod pre-seed at the other — a useful reminder of how wide the band of European ambition now runs. The confirmed highlights:
Signal · Verified Voices — Sunday, 5 April

Credible accounts and publications driving Sunday’s European AI conversation. Filtered for genuine signal.
AI Agents Are Reading Your Docs. Are You Ready?
Last month, 48% of visitors to documentation sites across Mintlify were AI agents—not humans.
Claude Code, Cursor, and other coding agents are becoming the actual customers reading your docs. And they read everything.
This changes what good documentation means. Humans skim and forgive gaps. Agents methodically check every endpoint, read every guide, and compare you against alternatives with zero fatigue.
Your docs aren't just helping users anymore—they're your product's first interview with the machines deciding whether to recommend you.
That means:
→ Clear schema markup so agents can parse your content
→ Real benchmarks, not marketing fluff
→ Open endpoints agents can actually test
→ Honest comparisons that emphasize strengths without hype
In the agentic world, documentation becomes 10x more important. Companies that make their products machine-understandable will win distribution through AI.
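If roughly half of documentation traffic is already agents, the first practical step is measuring that share in your own logs. A minimal sketch, assuming access-log user-agent strings are available: the crawler identifiers below are examples (GPTBot and ClaudeBot are real crawler names, the rest of the list is illustrative and incomplete), and the matching logic is deliberately naive.

```python
# Rough user-agent triage for docs traffic: human browser vs. AI agent.
# These substrings are illustrative examples of agent identifiers, not a
# complete or authoritative list; a real deployment should keep it under review.
AGENT_MARKERS = ("gptbot", "claudebot", "claude-web", "perplexitybot", "ccbot")

def is_ai_agent(user_agent: str) -> bool:
    """True if the user-agent string contains a known agent marker."""
    ua = user_agent.lower()
    return any(marker in ua for marker in AGENT_MARKERS)

def agent_share(user_agents: list[str]) -> float:
    """Fraction of requests whose user-agent matches an agent marker."""
    if not user_agents:
        return 0.0
    hits = sum(is_ai_agent(ua) for ua in user_agents)
    return hits / len(user_agents)

# Hypothetical sample of four log entries: two browsers, two crawlers.
sample_log = [
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) Safari/605.1.15",
    "Mozilla/5.0 AppleWebKit/537.36 (compatible; ClaudeBot/1.0)",
    "Mozilla/5.0 (compatible; GPTBot/1.1; +https://openai.com/gptbot)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/124.0 Safari/537.36",
]
print(f"agent share: {agent_share(sample_log):.0%}")
```

Once the agent share is measured, the checklist above (schema markup, testable endpoints, honest comparisons) can be prioritised against the traffic that actually reads the docs.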

