EU officials call it the “Brussels effect”: the idea that the 27-nation bloc can become the de facto global rule setter in a given area by regulating before anyone else. EU regulation has made useful contributions in safeguarding data privacy and tech-sector competition. But with its 2024 AI Act it overreached itself. The rules provoked fierce lobbying by Big Tech companies and the US government. More importantly, they jeopardise the competitiveness of EU companies and start-ups, and risk leaving Europe a permanent also-ran behind the US and China in the race to develop and harness the transformative technology.
AI is too consequential to leave unregulated. But a proper balance must be struck between restrictive rules and the freedom to pursue innovative technologies. The EU’s AI Act overestimated some risks and focused too heavily on reining in the general-purpose technology that broke through only late in the legislative process.
The most powerful foundation models such as OpenAI’s GPT-5 or Google’s Gemini — large-scale systems trained on vast datasets that can be adapted for multiple applications — do merit proper external scrutiny given their systemic risks. The European legislation puts special obligations on models above a certain computing-power threshold. But it also places burdens on all providers of foundation or general-purpose AI (GPAI) models.