PULSE – February 2026 Edition Live

Mar 4, 2026

February 2026 underscored AI’s expanding role in shaping national strategy, economic competitiveness, and global influence. Across the UK, India, Europe, and beyond, governments advanced new frameworks to ensure AI development is safe, inclusive, and aligned with broader societal goals, while industry moved swiftly to match geopolitical priorities with concrete investments and coalitions.

In the UK, the month marked a series of significant policy milestones. UKRI published its first AI Research and Innovation Strategic Framework, setting out how it will support excellent research, build skills, and mobilise partnerships to ensure AI development is safe, responsible, and impactful. The government also announced ambitious plans to expand free AI training, with a target of equipping ten million workers with key AI skills by 2030 — a move framed as essential to workforce readiness in an era of rapid technological change. Complementing these efforts, the UK accelerated its cyber defences by speeding up critical fixes, cutting existing backlogs, and establishing a dedicated Cyber Profession to protect public services. In a signal of its commitment to regional economic development through technology, the government also designated Barnsley as the UK’s first government-backed tech town, supporting innovation, digital skills, and growth beyond major urban centres.

At the India AI Impact Summit 2026, the global dimension of AI governance came into sharp focus. More than 100 countries gathered to advance a shared vision of ethical, inclusive, and trusted AI, launching a range of global initiatives centred on workforce development and equitable access to emerging technologies. India presented its M.A.N.A.V. AI vision, articulating an ambitious national agenda for responsible innovation and international collaboration. The Summit also served as a diplomatic platform: India deepened its strategic ties with the United States through the AI Opportunity Partnership and joined the Pax Silica initiative, coordinating secure AI infrastructure and critical technology supply chains with the US and partner nations. Google’s America–India Connect initiative further reinforced this infrastructure agenda, with plans to establish fibre-optic routes spanning four continents to enhance reliability and resilience for AI and cloud services.

Industry moves in February closely mirrored these geopolitical priorities. OpenAI and Microsoft joined the UK’s international coalition on AI safety, pledging substantial funding to the AI Security Institute’s Alignment Project to ensure advanced AI systems are safe, secure, and reliable. Anthropic released its Responsible Scaling Policy v3, strengthening safeguards for advanced AI through risk-based safety levels, public roadmaps, and externally reviewed reports designed to promote transparency and accountability. Meanwhile, an AI model developed at the University of Hertfordshire began helping NHS managers forecast and optimise healthcare resources — a grounded example of AI generating measurable public value.

In Europe, regulatory and investment activity advanced on several fronts. The EU committed €700 million to NanoIC, Europe’s largest Chips Act pilot line at IMEC Leuven, to develop sub-two-nanometre semiconductor technology with applications in AI and 6G. The European Centre for Democratic Resilience began operations to safeguard democracy, combat disinformation, and coordinate EU-wide efforts against emerging threats. Further afield, California’s state Senate passed legislation regulating lawyers’ use of AI to ensure ethical and responsible practices in legal services, while Mexico’s SECIHTI unveiled ten principles for ethical AI deployment to guide responsible use across sectors.

Global debates around safety and transparency also intensified. The International AI Safety Report 2026 highlighted key developments in AI safety and strategies for mitigating risks from advanced systems. The OECD published its Due Diligence Guidance for Responsible AI, offering authoritative frameworks for ethical deployment, corporate oversight, and responsible adoption. Google DeepMind CEO Sir Demis Hassabis used his platform at the India AI Impact Summit to call for urgent research into AI threats and smart regulation — even as the US signalled a preference for innovation-first approaches over binding global governance. Tensions between government and AI developers were further illustrated when the US Pentagon entered into a public dispute with Anthropic over AI safeguards.

Innovation in education and research continued alongside these policy developments. Dr Zeynep Engin, Founding Director and Chair of Data for Policy CIC, participated in the Horizon Hub Workshop Series at Hamad Bin Khalifa University in Doha, co-leading a workshop on bridging research and policy practice. The first Data for Policy Fireside Chat of 2026 explored the safe deployment of frontier AI in financial services, with researchers from the University of Warwick addressing risks including hallucinations, prompt sensitivity, and regulatory alignment, and underscoring the importance of human oversight and evidence-based approaches in critical infrastructure.

February’s developments reflect a deepening maturity in how governments and institutions approach AI. The month moved well beyond high-level principles, with concrete frameworks, funding commitments, workforce programmes, and international partnerships taking shape across every major region. Whether through national AI strategies, sovereign infrastructure investment, or multilateral safety coalitions, the message was consistent: responsible AI adoption demands not just ambition, but institutional readiness, transparency, and sustained public trust.

To read the full February 2026 newsletter, click here.

To stay informed on the latest developments and insights in data and AI policy, subscribe to our newsletter here.