On Our Radar
- UKRI has published its first AI Research and Innovation Strategic Framework, outlining how it will back excellent research, build skills, and mobilise partnerships to ensure AI development is safe, responsible, and impactful for society and the economy.
- The India AI Impact Summit 2026 united 100+ countries to advance ethical, inclusive, and trusted AI, launching global initiatives, workforce development programmes, and India’s M.A.N.A.V. AI vision for responsible innovation.
- Google’s America‑India Connect initiative will expand digital infrastructure by establishing fibre‑optic routes and connectivity across four continents, enhancing reliability and resilience for AI and cloud services.
- India has joined the Pax Silica initiative and signed a Joint Statement on the India‑U.S. AI Opportunity Partnership, deepening collaboration on secure technology supply chains, AI infrastructure, and critical technology cooperation with the United States and partners.
- OpenAI and Microsoft have joined the UK’s international coalition on AI safety, pledging substantial funding to the AI Security Institute’s Alignment Project to ensure advanced AI systems are safe, secure, and reliable.
- The European Centre for Democratic Resilience has started work to safeguard democracy, combat disinformation, and coordinate EU-wide efforts against emerging threats.
- The UK government announced it will champion how AI can drive growth, create jobs, and improve public services at the AI Impact Summit in India, reinforcing its commitment to AI’s societal and economic benefits.
- Barnsley has been designated the UK’s first government‑backed tech town, highlighting support for innovation, digital skills, and economic growth through technology.
- Google DeepMind CEO Sir Demis Hassabis urged urgent research on AI threats and smart regulation at the India AI Impact Summit 2026, while the US rejected global AI governance in favour of innovation-first approaches.
- The UK and Japan have strengthened science and technology ties, reinforcing collaboration on research, innovation, and technology development between the two nations.
- The EU invested €700 million in NanoIC, Europe’s largest Chips Act pilot line at IMEC Leuven, to advance sub-two-nanometre semiconductor technology for AI and 6G applications.
- California’s state Senate passed a bill regulating lawyers’ use of AI to ensure ethical and responsible practices in legal services.
- Mexico’s SECIHTI unveiled 10 principles for ethical AI deployment to guide responsible AI use across sectors in the country.
- King’s College London highlighted its AI research and education at the India AI Impact Summit, showcasing ethical and inclusive AI solutions to global leaders.
- The Tallinn Mechanism launched its online platform to connect private sector cybersecurity companies with Ukraine’s civilian cyber defence projects, supported by 14 countries and ESTDEV funding.
- The UK is investing £150 million in research to accelerate cancer diagnosis, expand tidal energy testing, and develop advanced materials for healthcare and clean energy applications.
- Singapore has announced the creation of an AI Council to guide the nation’s AI strategy and ecosystem growth.
- Anthropic’s Responsible Scaling Policy v3 strengthens safeguards for advanced AI, introducing risk-based safety levels, public roadmaps, and externally reviewed reports to promote transparency and responsible development.
- An AI model from the University of Hertfordshire helps NHS managers forecast and optimise healthcare resources.
- The UK government and industry are expanding free AI training, aiming to equip 10 million workers with key AI skills by 2030 to support workforce readiness for emerging technologies.
- The U.S. Pentagon has threatened action in a dispute with Anthropic over AI safeguards, underlining tensions on safety standards between government and AI developers.
- Canada’s DGSI‑118 standard was reaffirmed as a national benchmark for cybersecurity and cyber resiliency in healthcare, emphasising robust protection for health data and systems.
- The UK government is speeding up cyber fixes, cutting critical backlogs, and creating a Cyber Profession to protect public services.
- OpenAI announced initiatives to advance independent AI alignment research, supporting robust and trustworthy AI development.
- International AI Safety Report 2026, highlights global developments in AI safety and strategies for mitigating risks in advanced AI systems.
- UK AI Opportunities Action Plan: One Year On, reviews the progress of the UK’s national AI initiatives, policy updates, and sector engagement.
- OECD Due Diligence Guidance for Responsible AI, provides guidelines for ethical AI deployment and corporate due diligence practices.
- WEF white paper explores how AI and geopolitics are reshaping telecoms, highlighting three strategic paths – modernising networks, developing AI capabilities, or partnering with governments – to unlock value and leadership in the digital economy.
- USCM AI Roadmap Report 2026, outlines U.S. municipal strategies for integrating AI in city governance and public services.
- OECD Agentic AI Landscape Report, explores the conceptual foundations and ecosystem of agentic AI systems.
- Anthropic Education Report: The AI Fluency Index measures how people use AI by tracking observable behaviours in thousands of real Claude.ai conversations, showing that fluency is highest when users refine and iterate with the model, and laying a baseline for how humans collaborate with AI effectively, ethically, and safely.
- Anthropic Economic Index: India Brief, analyses the economic impact of AI adoption in India and emerging opportunities.
- OECD AI Observatory Index 2026, benchmarks national AI capabilities, policies, and regulatory readiness worldwide.
- This CETaS report investigates AI information threats during crises and provides recommendations to ensure AI strengthens democratic resilience rather than fuelling instability.
- World Bank AI in Development Report, examines AI applications in global development and implications for policy and governance.
- Progress in Implementing the EU Coordinated Plan on AI (Volume 2), tracks the implementation of EU member state AI strategies and collaborative initiatives.
- Anthropic Measuring Agent Autonomy, explores frameworks for assessing the autonomy and decision-making capabilities of AI agents.
- OECD National Strategies for Immersive Technologies, reviews global approaches to AR/VR adoption and policy frameworks for immersive tech.
Articles
- “GPT as a Measurement Tool” (OpenAI Research), explores how GPT models can be deployed as general‑purpose measurement tools for complex tasks, demonstrating their potential to quantify nuanced phenomena and assess performance across varied real‑world scenarios.
- “The CitizenQuery Benchmark: A Novel Dataset and Evaluation Pipeline for Measuring LLM Performance in Citizen Query Tasks”, introduces a large benchmark of real‑world citizen questions to evaluate how well large language models provide context‑aware, accurate responses on public info tasks.
- “Synthesizing Scientific Literature with Retrieval‑Augmented Language Models”, shows that retrieval‑augmented models like OpenScholar can synthesise and cite scientific research with accuracy approaching human experts, helping automate literature review tasks at scale.
- “Rule-of-law Digital Government and CSR Governance in Digital Firms: Evidence from China”, finds that government–firm collaboration in digital governance can create a “Digital Leviathan” that undermines corporate social responsibility, while rule-of-law digital governance effectively mitigates these risks, highlighting the importance of institutional oversight and process-oriented regulation in China’s digital economy.
- “Agentic AI for Autonomous Preventive Maintenance Policy Governance”, demonstrates how Agentic AI can autonomously manage preventive maintenance policies in industrial environments, using a multi-agent framework that optimises maintenance schedules while providing explainable, transparent, and auditable decisions for stakeholders.
- “‘Things Fall Apart’: The Unravelling of Global Health Governance and the Imperative for Action Preserving Infectious Disease Prevention and Control”, warns that decades of progress in global health are at risk due to fragmented governance, underfunding, declining trust, and geopolitical tensions, emphasising that sustaining infectious disease prevention and control requires long-term investment, multilateral cooperation, and rebuilding public confidence in science and institutions.
- “The Adoption and Efficacy of Large Language Models in US Consumer Financial Complaints”, finds that LLM assistance significantly increases the likelihood of obtaining favourable outcomes in financial complaint resolutions by making submissions clearer and more persuasive.
- Anthropic’s study “Measuring AI agent autonomy in practice”, shows that AI agents are increasingly acting autonomously in real-world tasks, highlighting the need for careful oversight and monitoring.
Blogs
- A Carnegie Europe analysis finds that Europe can manage AI-driven labor market transitions by combining workforce re-skilling, policy innovation, and economic resilience planning.
- A World Economic Forum piece highlights how designing AI systems for trust, transparency, and accountability is critical as autonomous agents increasingly influence business and governance.
- This LSE Impact blog piece argues that the UK’s AI training ambitions require strategic investment beyond simple course listings to build deep expertise across sectors.
- Artificial Intelligence News reports that over 175,000 unprotected Chinese AI systems pose growing global cybersecurity risks, underscoring the need for international oversight and safeguards.
- A GovInsider analysis shows that Canada is advancing a whole-of-government AI strategy to modernise legacy systems and improve public service efficiency.
- The World Economic Forum blog post warns that quantum technologies pose emerging security risks that policymakers must address to safeguard critical infrastructure.
- A Rest of World investigation examines how global governments are regulating AI tech giants, revealing tensions between innovation, accountability, and public interest.
- The OECD highlights how the Global South can shape AI development through practical policy, local innovation, and inclusive governance frameworks.
- This blog piece reports that Ukraine is building AI-enabled government services using low-code platforms, enabling rapid experimentation and improved public service delivery.
- An LSE blog finds that AI tools can support the assessment of research environments, enhancing evaluation efficiency and fairness in higher education.
- GovInsider notes that Asia-Pacific nations can leverage AI strategies to strengthen sovereignty and strategic influence while balancing innovation and regulation.
- The World Economic Forum also notes that cybersecurity is evolving toward proactive resilience, emphasising anticipation, adaptation, and operational continuity in the digital age.
Opportunities
- Research Scientist, Open Source Technical Safeguards, AI Security Institute
- Director General for Emerging Technology and Artificial Intelligence, Department for Science, Innovation & Technology
- Agentic AI Risk Modelling and Mitigations, AI Security Institute
- Wellcome Career Development Awards, The Wellcome Trust
- Head of AI (Regulatory Policy and Supervision), Information Commissioner’s Office
- Establishing infrastructure hubs to power evidence synthesis in low- and middle-income countries, The Wellcome Trust
