From Trust to Co-Governance: New Frameworks Emerge for AI in the Public Sector

Jul 22, 2025

Over the past two weeks, Dr Zeynep Engin, Founding Director of Data for Policy and Editor-in-Chief of the Data & Policy journal, delivered two thought-provoking presentations at key international events: CampDigital 2025 in Manchester, UK, and the International Summer School on Digital Inclusion and Generative AI in Donostia / San Sebastián, Spain. Her contributions highlighted urgent challenges in the governance of artificial intelligence and offered pioneering frameworks for ensuring democratic oversight and accountability in an increasingly automated world.

Reframing AI Oversight at CampDigital 2025: “The Confidence Game”

Held at the Royal Northern College of Music, CampDigital 2025 brought together technology leaders, researchers, and policymakers to explore ethical and inclusive uses of digital technologies. Dr Engin’s keynote talk, “The Confidence Game: Designing Trustworthy Human-AI Collaborations,” challenged conventional notions of accountability and governance in the age of advanced AI systems.

As AI becomes deeply embedded in public decision-making processes, traditional oversight mechanisms, designed for static tools and clear lines of responsibility, no longer suffice. Dr Engin explained that AI systems today operate with increasing autonomy, blurring the line between decision support and decision-making. This shift results in governance gaps, especially in public sector contexts, where transparency, trust, and legitimacy are non-negotiable.

To address this, she presented the Human-AI Governance (HAIG) framework, which introduces three critical governance paradoxes:

  1. The Accountability-Capability Paradox: Humans remain responsible for AI-driven outcomes they may not fully understand.
  2. The Recursion Trap: AI systems are increasingly managing or interacting with other AI systems, creating oversight blind spots.
  3. The Democratic Deficit: Decisions with societal impact are being made without public input or accountability.

Her talk proposed a new governance architecture that includes institutional innovations such as AI Audit Courts, Hybrid Oversight Bodies, and Algorithmic Juries: mechanisms designed to ensure both speed and scrutiny in AI implementation. You can view the conference slides here.

Symbiotic Public Systems at UIK 2025: Beyond Human vs. Machine

From Manchester to the Basque coast, Dr Engin continued her thought leadership at the UIK 2025 International Summer School on Digital Inclusion and Generative AI, held at the historic Miramar Palace in Donostia / San Sebastián on 15–16 July. The hybrid-format event brought together over 125 participants from academia, civil society, and government to explore inclusive and ethical digital transformation.

Her presentation, titled “Symbiotic Public Systems: When Neither Humans nor AI Can Govern Alone,” built on her HAIG model and introduced a more comprehensive framework for AI-human co-governance. The concept of Symbiotic Public Systems (SPS) reflects the reality that modern governance cannot rely solely on either human judgment or artificial automation. Instead, it requires collaborative, dynamic, and mutually reinforcing systems.

The SPS framework integrates:

  • HAIG Infrastructure: Mapping the evolving relationship between humans and AI through trust thresholds and relational accountability.
  • Algorithmic State Architecture (ASA): A technical blueprint that connects digital public infrastructure, data-for-policy systems, algorithmic governance protocols, and GovTech service delivery.

Using ChatGPT and similar generative AI systems as case studies, Dr Engin illustrated how billions of interactions currently take place without robust oversight. She emphasised that democratic legitimacy must be preserved as AI capabilities continue to scale. Curious to dive deeper? Check out the presentation slides here.

Toward Human-Centred, Democratically Aligned AI

Both events underscored a growing consensus: the future of public sector AI depends not only on technological capacity but also on institutional innovation. Frameworks like HAIG and SPS provide concrete tools for navigating the governance paradoxes that AI introduces.

As governments, researchers, and civic institutions grapple with integrating AI into public life, these models offer a vision of co-governance where trust, inclusion, and accountability are not afterthoughts, but foundational design principles.