Superintelligence Europe — No. 017

UK threatens tech bosses with prison over AI deepfake abuse. OpenAI faces DSA designation at 120 million EU users. EU NextGen advances AI cardiology. Stargate UK's IPO factor revealed. Hungary votes Sunday in Europe's most AI-intensive election campaign.

In partnership with

Superintelligence Europe — Briefing No. 017 — Saturday, 11 April 2026 — 06:00 CET
Everything that moved in European AI on Friday 10 April  ·  UK · EU · Hungary · 18 days to Omnibus · Election eve
Prison
UK tech bosses — Crime & Policing Bill — non-consensual intimate images
120M
ChatGPT monthly EU users — 2.7× DSA threshold — designation review open
48 hrs
To Hungary election — AI disinformation at peak saturation
€282B
CVD cost to EU annually — NextGen AI cardiology project advances

Friday was a five-story day. The UK government announced that senior technology executives could face prison sentences if their platforms fail to remove non-consensual intimate images after Ofcom enforcement — the sharpest escalation of platform accountability law in Britain since the Online Safety Act passed. From Brussels, Reuters and Handelsblatt reported that OpenAI is set to be designated as a Very Large Online Search Engine under the Digital Services Act — a classification that, if confirmed, would bring ChatGPT under the EU’s toughest regulatory tier. Both stories landed on the same Friday.

Elsewhere: the European Society of Cardiology published a confirmed April 10 press release on the EU NextGen project — a Horizon Europe-funded initiative integrating genomic and clinical data into AI cardiovascular models, with real-world pilots at five clinical sites. Hungary votes Sunday in a campaign documented by independent researchers as Europe’s most AI-intensive political disinformation exercise to date. And Stargate UK’s deeper story — the grid queue, the IPO factor, parliamentary reactions — crystallised in Friday’s analysis coverage.

Five stories. One Saturday morning. Your weekend briefing starts here.

Friday’s Briefing
01🇬🇧 UK tech bosses face prison — Crime & Policing Bill amendment
02🇪🇺 OpenAI faces DSA designation — 120M EU users, tighter obligations
03🇪🇺 ESC NextGen AI cardiology — genomics + clinical data at scale
04🇬🇧 Stargate UK — the IPO factor, grid queue, and Parliament responds
05🇭🇺 Hungary votes Sunday — AI deepfakes, bots, grey zone
Lead · United Kingdom · Platform Accountability · AI-Generated Abuse
01
The UK government announced Friday that senior technology executives could face prison sentences if their platforms fail to remove non-consensual intimate images following an Ofcom enforcement decision. Technology Secretary Liz Kendall tabled the amendment to the Crime and Policing Bill.
Sources: Reuters · LBC · CTV News · Morning Star · 10 April 2026

The UK government tabled an amendment to the Crime and Policing Bill on Friday that would make senior technology executives personally criminally liable if their platforms fail to comply with Ofcom’s enforcement decisions to remove non-consensual intimate images. Under the proposed law, executives without a reasonable excuse could face imprisonment, a fine, or both. The amendment will be debated in the House of Commons next week and represents the most direct escalation of individual executive accountability in British platform regulation since the Online Safety Act came into force.

Technology Secretary Liz Kendall framed the move explicitly in terms of AI-generated abuse. The amendment builds directly on the government’s January 2026 legislation criminalising the creation of non-consensual intimate images — a response to the surge of AI-generated deepfake sexual content on X’s Grok tool and across other platforms. In February, the government required platforms to remove reported non-consensual intimate images within 48 hours. Friday’s announcement goes further: it places the compliance obligation on named executives, not just the platform, with criminal consequences for failure.

Technology Secretary Liz Kendall · 10 April 2026

“Too many women have had their lives shattered by having their intimate images shared online without consent. This Government is uncompromising in our mission to protect women and girls online, and we have taken action to stop tech firms from publishing this abusive content. Protecting women and girls online is not optional, it is a responsibility that sits squarely with every tech company’s leadership.”

What The Amendment Does

Senior executives face personal criminal liability — imprisonment, a fine, or both — if their platform fails to comply with an Ofcom enforcement decision to remove non-consensual intimate images. The amendment also includes plans to ban pornography depicting illegal sexual conduct involving family members or adults roleplaying as children. Ministers described the content as “revolting” and normalising of child sexual abuse. Both measures have been tabled as amendments to the Crime and Policing Bill for Commons debate next week.

The AI Connection

This law is a direct product of AI-generated abuse. The January 2026 legislation criminalising creation of non-consensual intimate images was itself triggered by the Grok “put her in a bikini” trend on X. France has opened a separate investigation into X over Grok-generated deepfakes, calling the content “manifestly illegal.” The European Commission has also warned it is examining Grok’s “spicy mode.” Friday’s UK escalation — personal executive criminal liability — is the most aggressive regulatory response to AI-generated intimate abuse from any major European government to date.

Regulation · EU-wide · Digital Services Act · OpenAI / ChatGPT
02
OpenAI’s ChatGPT is set to be classified as a Very Large Online Search Engine under the EU’s Digital Services Act, Germany’s Handelsblatt reported Friday, citing sources. At 120.4 million monthly EU users, ChatGPT is 2.7 times above the DSA threshold. The European Commission confirmed it is reviewing the data.
Sources: Reuters · Handelsblatt · Published Friday 10 April 2026

Germany’s Handelsblatt reported Friday, citing sources, that OpenAI and its ChatGPT chatbot are set to be classified as a Very Large Online Search Engine (VLOSE) under the European Union’s Digital Services Act. Reuters carried the wire. OpenAI declined to comment when contacted by Handelsblatt. A spokesperson for the European Commission told Handelsblatt that the available user data was being reviewed. OpenAI itself had reported that ChatGPT’s search feature reached an average of 120.4 million monthly EU users over the previous six months — a figure 2.7 times above the 45 million threshold that triggers VLOSE designation obligations.
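The threshold arithmetic behind the designation review is simple to verify. A minimal sketch, using the 45 million average-monthly-user threshold the DSA sets for VLOP/VLOSE status and the 120.4 million figure OpenAI reported (the variable names here are illustrative, not from any official tooling):

```python
# VLOSE threshold check, using only the figures quoted above.
DSA_THRESHOLD = 45_000_000      # DSA: 45M average monthly active EU recipients
reported_users = 120_400_000    # ChatGPT search, six-month EU average

ratio = reported_users / DSA_THRESHOLD
print(f"{ratio:.1f}x the designation threshold")   # -> 2.7x the designation threshold
print("above threshold" if reported_users >= DSA_THRESHOLD else "below threshold")
```

The 2.7× figure cited in the reporting is just this ratio, rounded to one decimal place.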

VLOSE designation under the DSA would bring ChatGPT under the EU’s most demanding regulatory tier, a set of obligations that go significantly beyond what the AI Act alone requires. OpenAI would be required to conduct annual systemic risk assessments covering impacts on civic discourse, electoral processes, mental health, and the protection of minors. It would be required to provide transparency on how model outputs are moderated, give researchers access to data, and implement risk mitigation measures whenever it deploys a new functionality likely to have a critical impact on systemic risks. Unlike the AI Act, which focuses on the model itself, the DSA focuses on how ChatGPT operates as a platform intermediary — the systemic role it now plays in how hundreds of millions of people access and process information.

What VLOSE Designation Means in Practice

Annual risk assessments covering civic discourse, electoral processes, mental health, and minors

Output moderation transparency — how ChatGPT decisions are made and contested

Researcher data access — vetted academic access to platform data

Feature deployment triggers — systemic risk assessment required before launching new capabilities in EU

Fines up to 6% of global revenue for non-compliance — plus potential service suspension

AI Act + DSA overlap — dual regime compliance, more extensive than either law alone

Science · EU-wide · AI in Healthcare · Horizon Europe
03
The EU’s NextGen project — funded by Horizon Europe — published its latest progress report Friday on integrating genomic sequences, cardiac imaging, and clinical data into a single interoperable AI model for personalised cardiovascular medicine. Cardiovascular disease costs Europe €282 billion annually.
Source: European Society of Cardiology · Press release published 10 April 2026

The European Society of Cardiology published a press release on Friday confirming progress on the EU’s NextGen project — a Horizon Europe-funded initiative working to remove the data integration barriers that have prevented AI cardiovascular models from reaching their clinical potential. The core problem NextGen addresses is structural: health data across Europe exists in incompatible formats, governed by different national privacy frameworks, stored in systems that cannot communicate with each other. Genomic data is particularly complex — information-rich, individually identifiable, and held in formats that do not integrate with clinical records or imaging data. NextGen is building the “digital fabric” that allows all three data types to be combined securely and used to train the next generation of cardiovascular AI models.

The project involves a 21-member consortium including the European Society of Cardiology, Queen Mary University of London, the Earlham Institute, UMC Utrecht, and institutions from Germany, France, Finland, Italy, Switzerland, and the US. Its tools ensure health data remains meaningful and readable across different borders and hospital systems without losing clinical context, and allow researchers to discover relevant cardiovascular datasets without moving or exposing patient information. Governance is hard-coded into the data architecture itself. Five clinical pilot sites are running real-world demonstrations. The project is funded by €7.6 million from Horizon Europe plus additional grants from the Swiss State Secretariat for Education, Research and Innovation and UK Research and Innovation.

Prof. Steffen Petersen · Queen Mary University of London · ESC volunteer

“Clinicians rely on a wide range of clinical information to diagnose disease, predict risk, guide treatment and monitor outcomes. However, health data science has not yet fully captured the power of multimodal data such as symptoms, signs, electrocardiograms, blood tests, and imaging. Bringing these data together is crucial for advancing data-enabled innovation in healthcare, and NextGen represents a major step forward.”

Analysis · United Kingdom · AI Infrastructure · Follow-Day
04
Friday’s analysis of the Stargate UK pause added three dimensions Thursday’s announcement did not: the IPO factor, the 125GW grid queue that no planning policy fixes, and Parliament’s divided response. Bloomberg confirmed OpenAI is pre-IPO and tightening capital discipline.
Sources: TNW · Computer Weekly · Bloomberg · 10 April 2026

Bloomberg’s framing was the most important addition Friday brought: OpenAI is pausing Stargate UK as it reins in spending ahead of a public listing anticipated as early as Q4 2026. The company closed a $122 billion funding round at an $852 billion valuation in late March. Companies approaching an IPO tighten capital allocation, avoid open-ended international commitments, and reduce exposure to projects with uncertain timelines. The pause fits that pattern. The UK is not uniquely problematic — it is one of several locations where conditions do not justify a pre-IPO capital commitment.

TNW’s analysis surfaced the most concrete structural obstacle: UK grid connection requests surged from 41 gigawatts in November 2024 to 125 gigawatts by June 2025, with approximately 75 gigawatts attributable to data centre projects. Grid connections take three to eight years; data centres take 18 to 24 months. The Cobalt Park AI Growth Zone designation provides streamlined planning — but not a shorter grid queue. UK industrial electricity costs more than four times as much as equivalent US locations. Both barriers are structural and cannot be resolved by policy in the short term.
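The timeline mismatch TNW describes reduces to simple arithmetic. A minimal sketch using only the figures quoted above (the variable names are illustrative):

```python
# Timeline mismatch, in years, from the figures quoted in the text.
grid_connection_years = (3, 8)       # UK grid connection wait: 3 to 8 years
datacentre_build_years = (1.5, 2.0)  # data centre build: 18 to 24 months

# Even pairing the fastest grid connection with the slowest build,
# the facility is ready before the connection arrives:
best_case_gap = grid_connection_years[0] - datacentre_build_years[1]
print(f"best-case idle gap: {best_case_gap:.1f} years")  # -> best-case idle gap: 1.0 years
```

In the best case a completed data centre sits idle for a year waiting on the grid; in the worst case, for over six.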

Bill McCluggage · Former UK Deputy Government CIO · Computer Weekly · 10 April 2026

“The stated concern about uncertainty around UK copyright rules and high energy costs are real enough. But they are unlikely to be the whole story. With an IPO on the horizon, it is hardly surprising that OpenAI is tightening its risk profile, especially against a backdrop of rising infrastructure costs, supply chain fragility in advanced chips, and questions about the pace of AI commercial returns.”

Election Watch · Hungary · AI Disinformation · Vote Sunday 12 April
05
Hungary votes Sunday 12 April. The campaign is Europe’s most documented test case for AI-generated political disinformation. Deepfake executions, AI news anchors, 10 million bot-amplified views — all independently verified. The EU AI Act’s transparency obligations are not yet enforceable.

Hungary votes on Sunday 12 April. Viktor Orbán’s Fidesz faces Péter Magyar’s TISZA party, with polls consistently showing Magyar at 58% to Fidesz’s 35%. The campaign has deployed AI-generated political content at a scale not previously documented in an EU member state election. NewsGuard confirmed a 34-account TikTok bot network — 22 accounts created within two days in January 2026 — generating approximately 10 million views of AI-generated videos boosting Orbán and discrediting Magyar. TikTok confirmed the accounts constitute a covert influence operation. EDMO fact-checkers documented AI-generated news anchor reports about Magyar with 100% AI-detection certainty. A deepfake execution video and a fabricated von der Leyen phone call were distributed by Fidesz. The Russian Matryoshka disinformation network is confirmed active on X and Telegram.

The Regulatory Grey Zone

The EU AI Act applies to Hungary — but its AI content labelling and transparency obligations do not fully come into force until 2 August 2026. Hungary has not designated a national authority to enforce the EU’s political advertising transparency regulation (TTPA), confirmed by a January 2026 FOI response from the Hungarian Ministry of Justice. In practical terms: AI-generated political content can be distributed with limited domestic legal consequence until August. This election is the AI Act’s most visible real-world test — and it is happening before the Act is fully armed.

Why It Matters Across Europe

Orbán has been the EU’s most reliable Russian-aligned voice, repeatedly vetoing Ukraine aid since 2022. A Magyar win represents the most significant shift in Hungarian foreign policy in fifteen years and a direct change in the EU’s internal balance on Russia, Ukraine funding, and rule-of-law enforcement. If confirmed AI disinformation influenced the result, it will be the strongest argument for accelerated EU AI Act enforcement that Brussels has yet encountered. Superintelligence Europe will report the AI dimension of the result in Monday’s Issue 018.

Signal · Friday 10 April · The Week’s Connecting Thread

Friday brought five stories that appear separate but read together as one: Europe is accelerating the enforcement of AI accountability across multiple dimensions simultaneously. The UK is threatening prison for platform executives who allow AI-generated intimate abuse to persist. The EU is reviewing whether to bring ChatGPT — now at 120 million EU monthly users — under its toughest regulatory tier. Hungary is holding an election in which AI-generated disinformation is documented and confirmed, and which will test whether the EU AI Act’s grey zone is politically tolerable after the results are known. And the ESC published a progress update on a Horizon Europe project on Friday showing what genuine AI-for-public-good looks like: federated, privacy-by-design, clinically grounded, and built by a 21-member consortium.

The week that started with the EU AI Continent’s one-year milestones ended with the UK government criminalising executive inaction on AI-generated abuse, an EU regulator reviewing whether to classify the world’s most used AI chatbot as a systemic platform risk, and an election in which AI disinformation infrastructure operated without domestic enforcement consequence. This is what the deployment phase of the AI governance cycle looks like.

The ones showing up in LLMs convert 3× better than those found through Google

They optimized for LLMs, not just Google.

FAQs. Comparison pages. Transparent pricing. LinkedIn presence. These aren't vanity plays. They're what gets you cited in ChatGPT, Gemini, and Claude when your buyers are researching, your investors are looking, and your future hires are deciding where to work.

Download the free AEO Playbook for Startups from HubSpot and get the exact checklist. Five minutes to read.