Superintelligence Europe 036
● Vol. I / Issue 036 · Wed 13 May 2026
Superintelligence
Europe
Four signals from May 12 · Article 50 in legal review · Android DMA deadline today · $2.6B to three labs · Children online
● Published Wed 13 May 2026 · 06:00 CET · Covering events of Tue 12 May 2026
Architecture
Arriving fast.
Not yet European.
Article 50 transparency guidelines went into substantive legal review on Tuesday. The Android DMA consultation closes today. Three London-Paris frontier labs raised $2.6 billion in 2026 alone. The European AI architecture is being built. The capital, the IP, and the institutions are still arriving from outside.
—  The Signal · Editor’s Note
The week the architecture began arriving — and Europe still does not own a layer of it
Tuesday was a layering day. The Commission’s Article 50 transparency draft guidelines, published 8 May, entered serious legal review with Covington & Burling publishing the first major ten-point analysis on Tuesday morning — the operational rulebook for chatbots, deepfakes, and AI-generated public-interest content that becomes binding from 2 August. The Google Android DMA consultation closes today, with Teresa Ribera’s case file proposing that third-party AI services receive the same Android access Google reserves for Gemini. And Crunchbase data published on Tuesday confirmed what European AI watchers have suspected for weeks: half of European venture funding in 2026 is now flowing to AI, and three London-Paris frontier labs founded by ex-DeepMind and ex-Meta researchers have raised more than $2.6 billion in 2026 alone. The European AI architecture is finally being built in earnest. The institutions are European. The capital sources, the corporate parents, and the executive talent flow patterns are still substantially not. The architecture is arriving. The question is who finishes it.
Lead · Article 50 · Transparency
01
Brussels · Commission Article 50 draft guidelines · Legal review · 12 May
The Article 50 transparency rulebook hit substantive legal review on Tuesday — and what it says will shape how every chatbot in Europe is built
On 8 May the Commission published draft guidelines for the AI Act’s transparency obligations. On Tuesday, Covington & Burling published the first major legal analysis. The guidelines arrive less than twelve weeks before the 2 August deadline. Inform users when they interact with AI. Mark and detect generated content. Label deepfakes and AI-generated public-interest publications. Disclose biometric categorisation. The whole layer is now operational.

On 8 May 2026, the European Commission published draft guidelines on the implementation of the transparency obligations under Article 50 of the AI Act, opening a targeted consultation that runs until 3 June 2026. The guidelines are non-binding, but they are the first Commission instrument to provide interpretive guidance across the full scope of Article 50, prepared in parallel with the second draft of the Code of Practice on Transparency of AI-Generated Content published on 5 March. On Tuesday 12 May, the international law firm Covington & Burling published a detailed ten-takeaway analysis on its Inside Global Tech blog — the first substantive industry legal review of what the document actually means in operational practice.

The substance is significant. From 2 August 2026, providers of AI systems intended to interact with natural persons must inform users that they are interacting with AI — unless this is “obvious from the point of view of a reasonably well-informed natural person, taking into account the circumstances and the context of use.” Providers of generative AI must implement machine-readable marking of synthetic audio, image, video and text content. Deployers must disclose deepfakes, AI-generated public-interest publications, and the use of emotion recognition or biometric categorisation systems. The Omnibus deal struck on 7 May grants a transitional period until 2 December 2026 for systems already on the EU market before 2 August; systems placed on the market from 2 August onwards must comply from that date.
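Article 50(2)'s machine-readable marking requirement has no certified standard behind it yet, so any concrete schema is necessarily an assumption. As a minimal illustrative sketch of the kind of provenance record a provider could attach to generated text (the field names are ours, not drawn from the Guidelines):

```python
# Illustrative only: Article 50(2) mandates machine-readable marking, but no
# standard is yet certified, so this schema is an assumption, not a spec.
import hashlib
import json

def mark_generated_text(text: str, provider: str, model: str) -> dict:
    """Produce a machine-readable provenance record for AI-generated text."""
    return {
        "ai_generated": True,  # the core Article 50(2) disclosure
        "provider": provider,
        "model": model,
        # Hash binds the record to this exact content for integrity checking.
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }

record = mark_generated_text("Synthetic paragraph.", "ExampleCo", "example-model-1")
print(json.dumps(record, indent=2))
```

A real deployment would need to bind such a record to the content itself, for instance via embedded metadata or a signed manifest, so that it survives downstream editing.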

The Covington analysis identifies the practical pressure points. The Code of Practice addresses only the marking obligations under Article 50(2) and the deepfake-labelling obligations under Article 50(4) — the Guidelines cover everything else. Operators of conversational systems will need to determine what counts as “obvious” AI interaction in their context. Providers of generative AI will need to operationalise machine-readable marking that survives downstream editing. Companies will need to map their AI inventory against the four distinct transparency obligation streams (50(1), 50(2), 50(3), 50(4)). With under twelve weeks to the deadline, no certified standards, and a Code of Practice not finalised until June, the operational uncertainty is real. For European companies, the practical effect is that the transparency layer of the AI Act is now the layer arriving fastest. The high-risk Annex III deadline has moved to 2 December 2027. Transparency is operational this summer.
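The inventory-mapping exercise Covington describes can be sketched as a small rule table keyed to the four streams. The classification flags and helper below are illustrative assumptions, not legal advice; the deadlines reflect the transitional arrangement described above:

```python
# Illustrative sketch: mapping an AI system inventory onto the four
# Article 50 obligation streams. The flag names are assumptions.
from datetime import date

OBLIGATIONS = {
    "50(1)": "Inform users they are interacting with AI",
    "50(2)": "Machine-readable marking of synthetic content",
    "50(3)": "Disclose emotion recognition / biometric categorisation",
    "50(4)": "Label deepfakes and AI-generated public-interest content",
}

def applicable_streams(system: dict) -> list[str]:
    """Return the Article 50 streams a catalogued system falls under."""
    streams = []
    if system.get("interacts_with_users"):
        streams.append("50(1)")
    if system.get("generates_content"):
        streams.append("50(2)")
    if system.get("emotion_or_biometric"):
        streams.append("50(3)")
    if system.get("deepfake_or_public_interest"):
        streams.append("50(4)")
    return streams

def compliance_deadline(on_market_before_2_aug: bool) -> date:
    # Per the Omnibus deal: transitional period to 2 December 2026 for systems
    # already on the EU market before 2 August 2026; otherwise from that date.
    return date(2026, 12, 2) if on_market_before_2_aug else date(2026, 8, 2)

chatbot = {"interacts_with_users": True, "generates_content": True}
print(applicable_streams(chatbot))  # → ['50(1)', '50(2)']
print(compliance_deadline(True))    # → 2026-12-02
```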

Aug 2
2026
Article 50 transparency obligations operative · 82 days from today
Consultation closes 3 June. Code of Practice final in June. Omnibus transitional period to 2 December 2026 for existing systems if Omnibus is formally adopted before 2 August.
Covington & Burling · Inside Global Tech · 12 May 2026
“The Guidelines arrive less than three months before the Article 50 transparency obligations become applicable. They are the first Commission instrument to provide interpretive guidance across the full scope of Article 50.”
Brussels · DMA · Android
02
Brussels · Google Android DMA · Case DMA.100220 · Deadline today
Tonight is the deadline for the Android DMA consultation — on whether third-party AI assistants get the same Android access Google reserves for Gemini
Wake words. Contextual data. Background execution. System-wide access points. The Commission’s draft measures would force Google to grant rival AI services the same Android privileges Google grants itself. The consultation closes today. The final binding decision lands by 27 July 2026.
✓ VERIFIED  EC consultation page (primary) · EC press release · 27 April–13 May

Today, 13 May, is the deadline for interested parties to submit feedback on the European Commission’s draft measures in case DMA.100220 — the specification proceedings on Google Android interoperability with third-party AI services under the Digital Markets Act. The Commission opened proceedings on 27 January 2026. Preliminary findings were addressed to Alphabet on 27 April. The Commission must adopt its final binding decision by 27 July 2026 — within six months of opening the proceedings.

The substance is operationally significant. The Commission’s draft measures cover four main themes. Wake words: third-party AI apps could be invoked by a custom voice phrase at any time, even when the screen is locked, with minimal battery impact. System-wide access points: long-pressing the home button or navigation handle would invoke third-party AI services, with those services receiving contextual data to enable functions like translating on-screen text or searching for information on screen — capabilities currently reserved for Google’s Circle to Search. Effective interaction with apps: third-party AI services would be able to send emails using the user’s preferred email app, order food, or share photos — the agentic functions Google reserves for Gemini today. Hardware and software resources: access to background execution, memory, and other system resources to ensure reliability and responsiveness.

Executive Vice-President for a Clean, Just and Competitive Transition Teresa Ribera framed the case directly when the preliminary findings were published in April: “AI services are becoming more and more relevant for EU citizens’ daily interaction with their mobile devices.” The case is, in practice, the single most consequential AI-platform competition file in Europe right now. If adopted as drafted, the measures would materially restructure the competitive position of every non-Google AI assistant operating in the EU smartphone market — particularly Mistral’s Le Chat, OpenAI’s ChatGPT, and Anthropic’s Claude. The final binding decision is due by 27 July. The clock is now running in days, not months.

27 Jul
2026
Final binding Commission decision deadline · DMA.100220
Wake words, system access points, background execution, agentic interaction. Affects every non-Google AI assistant in EU mobile market.
Teresa Ribera · EVP, Clean, Just and Competitive Transition · 27 April
“AI services are becoming more and more relevant for EU citizens’ daily interaction with their mobile devices.”
London · Paris · Frontier Labs · $2.6B
03
London · Paris · European frontier labs · Crunchbase data · 12 May
Half of European VC in 2026 is going to AI — and three labs in London and Paris have raised $2.6 billion of it alone
Ineffable Intelligence (London, ex-DeepMind) raised $1.1B at a $5.1B valuation. Advanced Machine Intelligence Labs (Paris, ex-Meta) raised $1.03B. Recursive Superintelligence (London, ex-DeepMind) is closing $500M-$1B. The European frontier-lab generation is being capitalised at scale. The capital sources are still substantially American.
✓ VERIFIED  Crunchbase News (primary) · TechCrunch · CNBC · April–May

According to Crunchbase data published Tuesday, roughly half of European venture funding in 2026 to date has been in AI-related companies. Total European startup funding reached over $17 billion in Q4 2025 and again in Q1 2026 — roughly a third higher year-on-year. The most striking concentration is at the frontier-lab tier. Three new labs founded by senior researchers departing Big Tech have, together, raised more than $2.6 billion in 2026 alone, anchored in two European cities.

Ineffable Intelligence, founded in London by former Google DeepMind principal scientist David Silver, raised $1.1 billion in seed funding at a $5.1 billion valuation in late April. Backers include Sequoia, Lightspeed, Nvidia, and Google. Silver’s pitch is reinforcement learning at frontier scale — AI systems that learn from experience rather than human data, training a “superlearner” that develops knowledge through self-play and direct interaction with environments. UK Science and Technology Secretary Liz Kendall framed the round directly: it underlines the UK’s determination “to ensure that the UK isn’t just an AI taker but an AI maker.”

Advanced Machine Intelligence Labs, founded in Paris by former Meta Chief AI Scientist and ACM Turing Award winner Yann LeCun, raised $1.03 billion in March 2026 at a $3.5 billion pre-money valuation. Backers include Bezos Expeditions, Nvidia, and Samsung Electronics. The CEO is Alexandre LeBrun, founder of Paris health-tech Nabla and a former Facebook AI research engineer. AMI’s pitch: world models — AI systems that understand the physical world, maintain long-term memory, and strategise complicated tasks. LeCun’s framing, via Reuters: current LLM approaches based on predicting the next word or pixel will not produce broadly capable intelligent agents.

Recursive Superintelligence, founded in London by former DeepMind principal scientist Tim Rocktäschel, was reported in late April to be closing a round of approximately $500 million with capacity for up to $1 billion. The pattern across all three is consistent: founders from London and Paris, technical missions distinct from the dominant LLM paradigm, and investor bases dominated by US venture capital, Big Tech corporate investors, and Asian strategic backers. The labs are European in their geography and their founding teams. The capital tables are not. For European policymakers grappling with the AI Continent Action Plan and the gigafactory programme, the funding pattern of 2026 is a structural data point: Europe can incubate frontier-lab founders. The capital architecture that sits around them is still substantially being built from outside.

50%
VC
European venture capital flowing to AI in 2026 to date
Q4 2025 + Q1 2026: $17B each. Up ~33% YoY. Three labs raised $2.6B of it. Ineffable / AMI / Recursive — all London or Paris.
Liz Kendall · UK Science & Technology Secretary · on Ineffable funding
“This investment in Ineffable will support a company at the very frontier of AI, with the potential to transform entire sectors, underlining our determination to ensure that the UK isn’t just an AI taker but an AI maker.”
Child Safety · Article 5 · Eurobarometer
04
Brussels · Eurobarometer + Omnibus nudifier ban · 12 May
92% of Europeans want stronger child protection online — the only part of the Omnibus deal critics did not call a concession
The Commission’s digital strategy page on Tuesday surfaced an updated Eurobarometer signal: 92% of Europeans regard stronger online protection for children as a top policy priority. The Omnibus deal’s new Article 5 prohibition on AI-generated CSAM and non-consensual intimate imagery becomes operative on 2 December 2026. President von der Leyen’s Special Panel on Child Online Safety meets again next month.
✓ VERIFIED  EC digital strategy · EC Protect Our Children · 12 May

The Commission’s digital strategy update on Tuesday confirmed a continuing pattern in European public opinion: 92% of Europeans view the need to further strengthen children and young people’s protection online as a top policy priority, with 93% concerned about social media’s mental health impact and 92% supporting effective restrictions on access to age-inappropriate content. The 2025 Eurobarometer figures continue to anchor the political mandate behind the European Commission’s child-safety legislative pipeline through 2026.

The Omnibus deal reached on 7 May added a new Article 5 prohibition to the AI Act, banning AI systems used to generate child sexual abuse material or non-consensual intimate imagery — the so-called nudifier ban. The prohibition applies in three configurations: placing systems designed for this purpose on the EU market, placing general-purpose systems on the market without reasonable safety measures against such misuse, and use by deployers. Companies have until 2 December 2026 to bring affected systems into compliance. Co-rapporteur Michael McNamara (Renew Europe, Ireland), who carried the file through Parliament’s Civil Liberties Committee, described non-consensual intimate imagery as “a systemic harm being industrialised by AI” that falls overwhelmingly on women and girls.

The institutional pipeline beyond the nudifier ban is dense. President von der Leyen’s Special Panel on Child Online Safety, convened on 5 March 2026 and met again on 16 April, will hold its third meeting in June and deliver final findings to the President by summer 2026. The EU age-verification app, blueprinted on 14 July 2025 and feature-ready as of 15 April 2026, is being adopted by Member States ahead of legislative mandates. The Commission has adopted a recommendation urging Member States to accelerate rollout by the end of the year. France has approved a 15-year social media age limit. Spain, Austria, Greece, Ireland, Denmark, and the Netherlands are preparing parallel measures. The Omnibus nudifier ban is the only part of the deal that civil society organisations including AlgorithmWatch did not characterise as a concession. The political mandate behind it is also the only part of the AI policy agenda where 92% public alignment puts substantive constraint on legislative drift.

2 Dec
2026
Article 5 nudifier and CSAM prohibition operative · 204 days
Eurobarometer: 92% want stronger child protection online. Special Panel reports summer 2026. EU age-verification app feature-ready since 15 April.
Michael McNamara · LIBE co-rapporteur · 7 May 2026
“Non-consensual intimate imagery is a systemic harm being industrialised by AI, the overwhelming burden of which falls on women and girls.”
Quote of Record
“This investment in Ineffable will support a company at the very frontier of AI, with the potential to transform entire sectors, underlining our determination to ensure that the UK isn’t just an AI taker but an AI maker.”
Liz Kendall · UK Secretary of State for Science, Innovation and Technology
On Ineffable Intelligence’s $1.1bn seed round at $5.1bn valuation · April 2026
■  AI Tool of the Day · Daily · Mon–Fri
DeepL
deepl.com · AI translation & writing · Cologne, Germany
Editor’s pick
9.0/10
The Cologne-built AI translation and writing platform that has quietly become Europe’s most recognised AI consumer product
DeepL is the AI translation and writing assistant from Cologne-based DeepL SE, used by more than 200,000 businesses and government agencies including Deutsche Bahn, Mercedes-Benz, and the European Parliament. It now offers translation across 30+ languages with quality consistently rated above Google Translate in independent evaluations, plus a writing assistant (DeepL Write) and a fully integrated DeepL API for enterprise use. For European companies preparing for Article 50 transparency obligations, DeepL’s posture matters: GDPR-native data handling, transparent model architecture, and a clear “text generated by AI” disclosure pattern already built into its enterprise products. The most quietly successful European AI consumer product of the decade, DeepL has reached enterprise-scale revenue without competing in the frontier LLM race — building instead a category-defining product in a category that genuinely needed European leadership.
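The enterprise API mentioned above can be driven from the official `deepl` Python client. The sketch below is illustrative: it assumes a `DEEPL_AUTH_KEY` environment variable, and the 4,000-character chunking budget is our own assumption for keeping requests small, not a documented DeepL limit.

```python
# Hedged sketch of batch translation via the official `deepl` client
# (pip install deepl). Key handling and chunking are our assumptions.
import os

def chunk_text(text: str, budget: int = 4000) -> list[str]:
    """Split text on paragraph boundaries so each API request stays small.
    The 4000-character budget is an illustrative assumption."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = (current + "\n\n" + para).strip() if current else para.strip()
        if current and len(candidate) > budget:
            chunks.append(current)
            current = para.strip()
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

def translate_document(path: str, target_lang: str = "FR") -> list[str]:
    """Translate a file chunk-by-chunk with DeepL (requires a real API key)."""
    import deepl  # official client; Translator/translate_text are its entry points
    translator = deepl.Translator(os.environ["DEEPL_AUTH_KEY"])
    return [
        translator.translate_text(chunk, target_lang=target_lang).text
        for chunk in chunk_text(open(path, encoding="utf-8").read())
    ]
```

`deepl.Translator` and `translate_text` are the documented entry points of the official client; everything around them here is scaffolding for the sketch.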
Best for
Translation at scaleMultilingual contentGDPR workflowsEnterprise API
Editor’s verdict
“DeepL is the European AI product that proved a category-leading consumer experience can be built and scaled inside the EU regulatory perimeter. The Article 50 transparency framework is moving toward what DeepL has done by default for a decade. The German product is now the operational template for the legal regime.”
Free tier · Pro from €7.49/mo · API + Enterprise · No affiliation · Try it →
● Built an AI tool? Submit it for consideration
One tool featured each weekday. Editorially selected. No payment for placement.
[email protected]
● Now Running · Hard Ground · Sunday Interview Series
Building an AI company? We want to tell your story.
Every Sunday, Superintelligence Europe publishes Hard Ground — long-form interviews with founders and CEOs building in AI, anywhere in the world. The only requirement is that your story is interesting. These are not press release features. They are honest conversations about building.
For founders & CEOs
Reach out if you want your story told in an upcoming Sunday edition. Global scope, any stage, any geography.
Built an AI tool?
Submit for our daily AI Tool of the Day. Editorially selected. We do not charge for features.
● Editorial policy
Both Hard Ground and the AI Tool of the Day are 100% editorial — not sponsored, not paid, not affiliated.

One email for both: [email protected]
Watch · The Days Ahead

Claude is not just a chatbot anymore. Is your security team ready?

Claude.ai is one thing. Claude Cowork with MCP connections, running agentic workflows, taking actions across your data with ungoverned skills? That is a different conversation entirely, and most security teams are not equipped to govern it.

Harmonic Security is built to secure everything Claude offers. Full browser controls for Claude.ai, deep governance over agentic MCP workflows, and real-time visibility into what Claude is doing across your organization. So your CISO can say yes to the tools your business is already demanding.