February ’25 Fireside Chat
Event Summary
On 26 February 2025, we hosted the fourth session of our monthly Fireside Chat series, featuring Professor Lilian Edwards, Professor of Law, Innovation, and Society at Newcastle University and a renowned expert in internet law and AI regulation. The session was chaired by Professor Jon Crowcroft, Marconi Professor of Communications Systems at the University of Cambridge and researcher-at-large at the Alan Turing Institute. The discussion explored key challenges, including the implications of the newly enacted EU AI Act, geopolitical competition in AI regulation, the technical and societal impacts of increasingly affordable AI systems, and the tension between innovation, safety, and the public interest in AI-driven governance.
Discussion Highlights
- EU AI Act: Structure and Initial Impacts: The EU AI Act, effective as of February 2025, was introduced as a risk-based regulation with immediate requirements like AI literacy for providers and deployers, and bans on prohibited practices (e.g., manipulative AI systems). Prof. Edwards highlighted its “triangle” structure—prohibited, high-risk, and lower-risk AI—contrasting its vagueness with the more concrete GDPR and DMA, while noting its reliance on future technical standards.
- General-Purpose AI and Geopolitical Dynamics: The rise of general-purpose AI (e.g., ChatGPT) forced a rushed addition to the Act, with a code of practice due by November 2025. Prof. Crowcroft noted recent cost reductions in training large models (e.g., DeepSeek), shifting innovation from the US to China and others, while Prof. Edwards critiqued the UK’s post-Brexit divergence from EU alignment.
- High-Risk AI Regulation and Implementation: High-risk AI systems (e.g., automated hiring) face phased regulation through 2025–2026, requiring safeguards such as unbiased training data. Profs. Edwards and Crowcroft discussed the challenges of self-certification against yet-to-be-defined standards, emphasizing the Act’s exclusion of widely used systems like search and recommender algorithms.
- AI Safety, Security, and Hybrid Warfare: The UK’s AI Safety Institute has pivoted to national security, driven by hybrid warfare threats enabled by cheap AI (e.g., drones and misinformation campaigns). Prof. Crowcroft contrasted explainable AI for everyday use (e.g., medical imaging) with opaque large models, urging a focus on practical harms over existential risks.
- Public Interest and Global Perspectives: Responding to Dr. Zeynep Engin, Prof. Edwards questioned whether major states prioritize AI for the public interest, citing economic and control motives, though she acknowledged civil society efforts and successes in healthcare AI. The lack of European cloud infrastructure was flagged as a sovereignty concern.
- Uncertainty and Future Directions: The discussion closed by highlighting the chaotic interplay of technology, geopolitics (e.g., Trump’s influence), and regulation, and the resulting uncertainty over where AI governance is headed.
We look forward to the next session in our Fireside Chat series, where we will continue to explore important topics in our field.