Europe’s AI moment: Between regulation and global relevance
Tuesday, 26 August 2025

The European Union has become the first major global power to establish a comprehensive legal framework for artificial intelligence. But can Europe move from being a regulatory pioneer to a global AI leader?
In the latest episode of Future is Blue, I spoke with Andrea Renda, Senior Research Fellow and Head of Global Governance, Regulation, Innovation and the Digital Economy at the Centre for European Policy Studies (CEPS) in Brussels. Our conversation explored whether the EU’s approach to AI can stand up to the challenges of rapid innovation, geopolitical fragmentation, and a widening global AI race.
[You can access the podcast conversation here.]
Building trust and excellence in AI
The AI Act may be a landmark, but Renda cautioned against overestimating its immediate impact. “It’s a very difficult piece of legislation that still hasn’t been fully implemented,” he explained. “While the EU was pursuing this goal, many other countries were giving up on regulating AI.”
The Act’s long gestation period, spanning six to seven years during which AI evolved significantly, forced lawmakers to rewrite entire sections. The inclusion of general-purpose AI models like ChatGPT came only after intense debate. “Replicating [the AI Act] in other parts of the world is difficult,” said Renda.
Much has been made of Europe’s intent to lead with values, but Renda was quick to clarify that the AI Act’s goal is not technological independence from the U.S. “The AI Act tries to build an ecosystem of trust,” he said. “But for an ecosystem of excellence, you need much more: investment, infrastructure, access to data, cloud services, connectivity, and skills.”
For Renda, the idea of complete technological sovereignty is unachievable, both technically and economically. Instead, he argued, Europe should focus on the areas where it is critically dependent, such as cloud infrastructure and AI model development, where U.S. companies dominate the market.
“Over 80% of our cloud services come from U.S.-based companies, and around 95% of the AI models we use are developed in the U.S. That wouldn’t be a problem if the U.S. remained a friendly ally,” he warned, referencing Elon Musk’s recent comments about potentially disabling Starlink in Europe as a reminder of such vulnerabilities.
Does regulation kill innovation?
One of the most persistent critiques of the EU’s regulatory approach is its supposed dampening effect on innovation. Renda disagrees. “Regulation is not an innovation killer—when it’s properly written,” he argued. “Tech giants and startups alike aren’t asking for less regulation. They’re asking for better regulation.”
Still, the road ahead won’t be easy. Because AI affects virtually every sector—health, transport, education, democratic processes—regulators will need to work closely with domain-specific experts.
Renda sees one unexpected factor driving global interest in the EU model: changes in the U.S. regulatory approach. “The best hope we have for the Brussels Effect right now is Donald Trump... There’s so much deregulation in the U.S. that many companies and investors are looking to the EU as a more stable, trustworthy environment for AI development.”
A call for a new digital social contract
Beyond legislation, Renda believes Europe needs a broader societal response. “What we need is a renewed digital social contract,” he said. “We need more accountable corporations, smarter governments, an empowered civil society, and more digitally literate citizens.”
Civil society should have the tools to scrutinize algorithmic systems, and regulators should use AI themselves to enhance oversight.
When asked about the biggest societal risks AI poses, Renda didn’t hesitate. “The number one risk is disinformation,” he said. “We’ve already seen Elon Musk promote parties in Germany that want to dismantle the EU.”
But the labour market, too, is at risk. “There’s a danger that companies will automate entry-level tasks with generative AI, leaving a gap in the pipeline of experienced workers,” he warned. “We need to make sure AI is used in ways that complement—not replace—humans.”
He also took aim at what economists call “so-so automation”—when automation reduces both costs and quality. “Think of chatbots replacing human service agents, or automated court rulings processed in five minutes instead of five years. It’s fast, but at what cost to our rights and the quality of outcomes?”
The bright side: AI for science and sustainability
Renda nonetheless sees tremendous potential in AI, particularly in science. “AI has already helped us understand protein folding,” he said. “It can help us unlock mysteries in physics, neuroscience, and beyond.”
Yet current investment trends are worrying. “Less than 1% of venture capital in AI has gone into sectors like energy, where it’s badly needed,” he said. “Most of the money is chasing speculative AGI [artificial general intelligence] or defense applications. We need to redirect funding toward solving real problems.”
To shift course, Renda proposes a large-scale EU initiative focusing on trustworthy AI for government, industry, and science. “The idea was picked up by the European Commission in its Competitiveness Compass,” he noted. “We need a moonshot that delivers not just excellence, but leadership rooted in democratic values.”
As global AI governance fragments and tech giants race ahead, the EU faces the task of balancing regulation with innovation capacity. That means combining regulation with real investment, political will, and a coherent vision.
Carlos Carnicero Urabayen