April 2026 reflected a rapidly evolving landscape where AI, digital sovereignty, and infrastructure took centre stage across policy, industry, and research. From the European Commission’s major investments in sovereign cloud to the UK’s strengthening national AI capability agenda, the month underscored a growing convergence between public governance and private innovation — and the urgency of ensuring the two move in step.
In the UK, a series of significant developments signalled the government’s determination to build national AI capacity while managing the risks that come with it. The UK’s Sovereign AI Unit announced its first backing for homegrown AI firms in areas including drug discovery, supercomputing, and AI infrastructure — a concrete step toward the kind of sovereign capability that Rt Hon Liz Kendall MP argued is essential to protect the nation’s security and prosperity. At the same time, Rt Hon Dan Jarvis MP and Ms Kendall warned business leaders that frontier AI could accelerate cyber threats, urging boards to strengthen cyber governance and deepen engagement with the NCSC. The Smart Infrastructure Pilots Programme, meanwhile, demonstrated how multipurpose smart columns can support 4G/5G, Wi-Fi, sensors, and local connectivity services — a reminder that AI ambition rests on physical infrastructure as much as on policy frameworks.
Across Europe, regulatory and investment activity advanced on multiple fronts. The European Commission awarded a €180 million tender to four European providers to strengthen EU digital sovereignty and public-sector cloud resilience, and opened €63.2 million in Digital Europe calls supporting AI innovation in health, digital skills, online safety, and information integrity. The EU–Morocco Digital Dialogue launched a new strategic partnership on AI and digital infrastructure, and one year on from its publication, the EU’s AI Continent Action Plan reported progress on infrastructure, data, talent, adoption, and trustworthy AI. South Africa, too, entered this expanding global policy conversation, publishing its draft National AI Policy for public comment with submissions open until 10 June 2026.
Industry moves in April closely mirrored these geopolitical priorities. Anthropic announced a strategic collaboration with NEC to deploy Claude across 30,000 employees and develop secure, sector-specific AI tools for Japan, and launched Project Glasswing — a major industry initiative aimed at securing critical software and open-source infrastructure. The company also launched AnthroPAC, marking the growing role of AI firms in shaping policy debates ahead of the US midterms. OpenAI, meanwhile, released GPT-5.5, described as its most capable model yet for complex professional and scientific work, and introduced Privacy Filter, an open-weight model for detecting and redacting personally identifiable information in text. BT and Nscale announced plans for NVIDIA-powered sovereign AI data centres in the UK, and Canada’s Sovereign Compute Infrastructure Program set out to expand domestic AI compute capacity while strengthening sovereignty and ecosystem access.
Global debates around governance and accountability intensified across the month. The Stanford HAI AI Index Report 2026 highlighted a widening gap between rapidly advancing AI capabilities and society’s ability to govern and evaluate them — a finding echoed by the CAIDP Global AI Policy Index 2026, which assessed national AI policies against democratic values, human rights, and governance standards. The World Economic Forum’s Technology Convergence Report 2026 showed how integrating multiple emerging technologies is becoming key to competitiveness and innovation, while the World Health Organization cautioned that, although AI can strengthen evidence-based policymaking and healthcare, it raises serious risks around bias, inequality, privacy, and governance that demand a strong human-rights-based framework. Across these reports, a consistent message emerged: the biggest barrier to responsible AI adoption is not the technology itself, but organisational and institutional readiness.
For our community, April’s most significant milestone was the official launch of The Digital Statecraft Academy (DSA), which Data for Policy CIC is proud to support. From 19–25 April 2026, the DSA convened its inaugural Cambridge Fellowship Residency at Jesus College, Cambridge, bringing together 16 Fellows from around the world to explore how to govern effectively in an AI-driven world. The week combined practice-oriented work on digital public infrastructure, data governance, and AI with engagements at Microsoft Research and The Alan Turing Institute, culminating in a launch reception hosted by the British Academy. In her keynote, Rt Hon Liz Kendall MP described the DSA as “an important and timely initiative,” underscoring the need for new approaches to governance in a rapidly evolving technological landscape. Dr Stefaan Verhulst, Co-Editor-in-Chief of Data & Policy and Founding Advisor of the DSA, has since reflected on the Cambridge residency in the blog post “Signals from the Frontier of Digital Statecraft: Rethinking Governance in the Age of AI.”
April’s developments reflect the deepening maturity — and deepening complexity — with which governments, institutions, and industry are approaching AI. Concrete frameworks, funding commitments, infrastructure investments, and international partnerships continued to take shape across every major region. Yet the recurring theme across policy, research, and practice was the same: responsible AI adoption demands not just ambition, but institutional readiness, transparency, and sustained public trust. The work of the DSA, and of this community, sits squarely at the heart of that challenge.
To read the full April 2026 newsletter, click here.
To stay informed on the latest developments and insights in data and AI policy, subscribe to our newsletter here.