On Our Radar
- The European Commission awarded a €180 million tender to four European providers, strengthening EU digital sovereignty and public-sector cloud resilience.
- The UK’s Sovereign AI Unit announced its first backing for homegrown AI firms in areas including drug discovery, supercomputing and AI infrastructure.
- The launch of the EU–Morocco Digital Dialogue established a strategic partnership on AI, digital infrastructure and digital public services.
- The European Commission set out an action plan to strengthen the regional dimension of research and innovation across European cities and regions.
- South Africa published its draft National AI Policy for public comment, with submissions invited until 10 June 2026.
- The UK’s Smart Infrastructure Pilots Programme demonstrated how multipurpose “smart columns” can support 4G/5G, Wi-Fi, sensors, CCTV and local connectivity services.
- Anthropic and NEC announced a strategic collaboration to deploy Claude across 30,000 NEC employees and develop secure, sector-specific AI tools for Japan.
- China issued trial guidelines for AI ethics review, covering human wellbeing, fairness, trustworthiness, training data, bias and algorithmic exploitation.
- The Rt Hon Dan Jarvis MP and The Rt Hon Liz Kendall MP warned business leaders that frontier AI could accelerate cyber threats and urged boards to strengthen cyber governance, Cyber Essentials and NCSC engagement.
- The UK backed Ineffable Intelligence, a British frontier AI company developing self-learning systems aimed at generating new knowledge in science, medicine and engineering.
- The European Commission hosted a high-level study visit on secure connectivity and digital infrastructure with policymakers from Egypt, Indonesia, Jordan, Kenya, the Philippines and Vietnam.
- The EU Commission opened €63.2 million in Digital Europe calls to support AI innovation in health, digital skills, online safety and information integrity.
- One year on, the EU’s AI Continent Action Plan reported progress on AI infrastructure, data, talent, adoption and trustworthy AI.
- Scotland Women in Technology will fund AI leadership training for women in Scotland through two cohorts of The Data Lab’s programme.
- OpenAI released Privacy Filter, an open-weight model designed to detect and redact personally identifiable information in text.
- TechBuzz reported that Anthropic launched AnthroPAC, signalling the growing role of AI companies in shaping policy debates ahead of the US midterms.
- Reuters reported that the UK is seeking to attract further Anthropic expansion as part of its broader AI capability and investment agenda.
- The Rt Hon Liz Kendall MP argued that the UK must build greater national capability and leverage across AI, compute, chips and infrastructure to protect security and prosperity.
- GitHub updated its Copilot policy so interaction data from Free, Pro and Pro+ users may be used to improve AI models unless users opt out.
- A BBC-linked discussion highlighted growing scrutiny around AI data centres, infrastructure demand and the economic and environmental implications of rapid AI expansion.
- OpenAI introduced GPT-5.5, describing it as its smartest and most intuitive model yet for complex professional, scientific and computer-based work.
- Rest of World reported that AI optimism is significantly higher across parts of Asia than in the United States, with trust in government regulation also varying sharply.
- BT and Nscale announced plans for NVIDIA-powered sovereign AI data centres in the UK to expand secure, locally controlled AI compute capacity.
- Canada’s AI Sovereign Compute Infrastructure Program aims to expand domestic AI compute capacity while strengthening sovereignty, scalability and ecosystem access.
- Nature explored proposals to place AI data hubs in space as terrestrial data centres face rising controversy over energy, land use and infrastructure demands.
- “Stanford HAI – AI Index Report 2026” highlights a widening gap between rapidly advancing AI capabilities and society’s ability to govern and evaluate them.
- “Advancing AI adoption in EU public administrations: Future directions and opportunities under the Apply AI Strategy” sets out a framework for people-centred and trustworthy AI adoption in the public sector.
- “OpenAI – Industrial Policy for the Intelligence Age” argues for policies that keep human outcomes central in the transition to advanced AI systems.
- “CAIDP – Global AI Policy Index 2026” assesses national AI policies against democratic values, human rights and governance standards.
- “Making Agentic AI Work for Government: A Readiness Framework” by WEF outlines how governments can deploy autonomous AI systems responsibly and effectively.
- “An OECD Case Study: Scale AI Canada” highlights Canada’s AI cluster as a model for public–private collaboration and innovation scaling.
- “Beware of Geeks Bearing Gifts” from The Future Society argues that Europe’s AI sovereignty ambitions are still fragmented and need stronger, more coordinated strategy.
- “Infrastructure Foundations: From Current Assets to Future Growth” (World Bank) highlights growing global gaps in infrastructure, particularly in digital and data systems.
- “Building a Human Resilience Infrastructure for the AI Age” calls for stronger societal systems to help people adapt to AI-driven disruption.
- “Regulatory Sandboxes for Net-Zero Innovation” explores how experimental regulation can accelerate clean technology innovation and deployment.
- “World Economic Forum – Technology Convergence Report 2026” shows how integrating multiple emerging technologies is becoming key to competitiveness and innovation.
- OECD – “Forging New Frontiers in Mission-Oriented Innovation Policies” examines how governments can modernise mission-driven innovation to tackle complex societal challenges.
- “The European approach to artificial intelligence policymaking” traces the evolution of EU AI governance combining regulation, investment and adoption.
- “Generating Impact” (Accenture report) finds that the biggest barrier to AI value today is organisational readiness, not the technology itself.
- “The UK’s Digital Sovereignty Opportunity” highlights how sovereign digital infrastructure could drive economic growth and resilience in the UK.
- The report “The Impact of Artificial Intelligence on Privacy” explains that AI’s use of personal data creates risks like opacity, profiling and bias, requiring GDPR-based safeguards and responsible design.
- OECD – “Anticipating Skill Needs and Adapting Higher Education” explores how countries can align education systems with future labour market demands.
- WEF – “Growth in the New Economy: Towards a Blueprint” identifies practical “no-regret” strategies for economic growth in an AI- and geopolitics-driven world.
- The World Health Organization report “Artificial intelligence and evidence-informed policy: emerging challenges and opportunities” highlights that while AI can strengthen evidence-based policymaking and healthcare, it raises risks around bias, inequality, privacy and governance, requiring a strong human-rights-based and ethical framework.
Articles
- “A hypothesis-driven responsible AI framework for interpretable ESG forecasting with RuleFit” proposes a model that combines explainable AI and statistical validation to generate transparent, fair and reliable ESG predictions for corporate decision-making.
- “Technological capability and innovation network resilience: evidence from the AI industry in China” shows that firms’ underlying technological capabilities—not just their network position—are key to identifying core innovators and understanding the resilience of AI innovation ecosystems.
- “Mapping AI startup investment and innovation in healthcare using a five-tier AI systems complexity framework” shows that AI investment is concentrated in high-complexity areas like diagnostics and drug discovery, while gaps remain in domains such as public and mental health, reflecting structural inequalities in data, funding and expertise.
- “The significance of ethical awareness in human–AI interaction: the effects of two types of algorithmic literacy on users’ trust and sense of agency” finds that ethical awareness strengthens users’ trust and sense of control over AI, while purely technical knowledge can reduce both unless balanced by ethical understanding.
- “A Framework for Integrating Data Governance and Predictive Analytics to Mitigate Legacy System Inefficiencies in Public Administration: The Case of Pension Bond Processing in Colombia” shows how combining data governance with machine learning can significantly improve efficiency in legacy public systems, reducing processing delays and identifying key administrative bottlenecks without requiring costly infrastructure replacement.
- “Human-AI Governance (HAIG): A Trust-Utility Approach” introduces a framework that rethinks AI governance as a dynamic relationship between humans and AI, using trust and utility to guide how decision authority, autonomy and accountability should adapt across contexts.
- “World Data Organization: Filling the Institutional Gap in Cross-Border Data Governance” argues that global data governance remains fragmented and proposes a World Data Organization to coordinate cross-border data flows as a shared, quasi-public good.
- “Digital twin-based integrated modeling and maintenance framework in the power grid data center scenario” demonstrates how digital twin architectures can enable real-time monitoring, anomaly detection and root-cause analysis to improve maintenance and reliability in complex energy infrastructure systems.
Blogs
- AI News coverage of “GPT-5.5 is OpenAI’s most capable agentic AI model yet” reports major gains in autonomy and performance, alongside significantly higher API costs.
- “The Agentic State: Why Europe Must Act Now” argues that Europe must urgently strengthen its governance and infrastructure strategy to remain competitive in the AI era.
- “A board-level playbook for governing agentic AI” (WEF) provides practical guidance for leadership teams overseeing autonomous AI systems.
- An OECD.AI blog, “Designing transparency for government AI”, highlights how structured reporting improves accountability and public trust in government algorithms.
- “Maintaining epistemic integrity in the era of answer engines” (ODI blog) warns that AI-generated responses risk undermining knowledge reliability without stronger governance.
- A World Economic Forum article, “What technology convergence looks like in practice,” shows how combining AI with other emerging technologies is transforming real-world systems.
- In a GovInsider article, “From fragmentation to trust: why Singapore’s next phase of digital government depends on clarity and consistency”, clarity and consistency emerge as key to building trust in digital public services.
- “Nothing is 100% human-authored,” published on the LSE Impact Blog, examines how AI is reshaping authorship and challenging traditional notions of originality.
- Another LSE Impact Blog piece, “Before AI agents act for us, we need to know how AI searches for us,” stresses the importance of understanding AI information retrieval before delegating decisions.
- “Agentic AI’s governance challenges under the EU AI Act in 2026” highlights regulatory gaps as autonomous systems test the limits of current frameworks.
- Anthropic’s research blog post “Assessing Claude Mythos Preview’s cybersecurity capabilities” evaluates the model’s cybersecurity capabilities.
- “The trust dividend: Why connected data makes AI decision-ready for sustainability” emphasises the importance of high-quality, interoperable data for AI-driven decisions.
- “Building public legitimacy for digital ID in the UK”, by the Ada Lovelace Institute, emphasises that trust, transparency and public engagement are essential for digital identity systems.
- A Rest of World article, “Why AI alone cannot fix social problems,” shows that human expertise remains central to meaningful AI-driven impact.
- A WEF article, “Saudi Arabia’s new sustainability AI-powered platform”, shows national AI systems supporting environmental monitoring and sustainability goals.
- New open call for proposals under the Digital Europe Programme, European Commission
- Cyber Security Researcher, AISI, UK Department for Science, Innovation & Technology
- Sovereign AI Strategic Assets Grants Programme, The Sovereign AI Fund
- Research Scientist, AISI, UK Department for Science, Innovation & Technology
- British Academy/Leverhulme Small Research Grants, The British Academy
- OpenAI Safety Fellowship, OpenAI
- Wellcome Career Development Awards, The Wellcome Trust
