We are pleased to share two recent preprints from the leadership of Data for Policy CIC, offering timely and forward-thinking contributions to the evolving field of AI governance.
The first is a co-authored perspective article by Dr Zeynep Engin, Founding Director of Data for Policy CIC (University College London), and Professor David Hand, Director at Data for Policy CIC and Emeritus Professor of Mathematics at Imperial College London. The second is a solo-authored research paper by Dr Engin, expanding the theoretical foundations for trust-based, adaptive oversight of AI systems.
Together, these two papers present a cohesive vision for governing increasingly autonomous and context-sensitive AI systems. By introducing novel frameworks rooted in responsiveness, trust dynamics, and continuous negotiation between human and machine roles, they offer practical guidance for developing future-ready regulatory strategies.
Toward Adaptive Categories: Dimensional Governance for Agentic AI
In this perspective article, Dr Engin and Prof Hand propose a new approach to AI oversight called dimensional governance. This model addresses the limitations of traditional regulatory categories, such as fixed risk tiers or automation levels, which are increasingly insufficient for overseeing emerging forms of agentic AI that exhibit goal-setting, reasoning, and adaptive behaviour.
The authors introduce a three-dimensional framework based on Decision Authority, Process Autonomy, and Accountability (referred to as the “3As”). By continuously tracking how these elements shift within a human-AI system, regulators can move beyond static classifications and instead define adaptive categories that evolve with system behaviour.
This approach also introduces the concept of critical trust thresholds: points at which changes in a system’s behaviour require a corresponding shift in oversight. The dimensional governance framework offers a scalable and flexible solution for managing the challenges posed by foundation models, multi-agent systems, and other advanced AI technologies, ensuring that innovation is supported by effective and proportionate regulation. The full article is available as a preprint via arXiv.
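The article itself does not prescribe an implementation, but the tracking-and-threshold idea can be pictured in a few lines of code. The sketch below is purely illustrative: the dimension scores, threshold values, and names used are hypothetical choices for exposition, not taken from the paper.

```python
from dataclasses import dataclass

# Illustrative sketch only: the scores and threshold values below are
# hypothetical, not drawn from the Engin & Hand article.

@dataclass
class SystemState:
    decision_authority: float  # 0.0 = fully human-held, 1.0 = fully AI-held
    process_autonomy: float    # 0.0 = fully scripted, 1.0 = self-directed
    accountability: float      # 0.0 = diffuse/unclear, 1.0 = clearly assigned

# Hypothetical critical trust thresholds: crossing one signals that the
# current oversight regime may no longer match the system's behaviour.
THRESHOLDS = {"decision_authority": 0.7, "process_autonomy": 0.8}

def oversight_review_needed(state: SystemState) -> list[str]:
    """Return the dimensions whose drift past a threshold would warrant
    a corresponding shift in oversight."""
    return [dim for dim, limit in THRESHOLDS.items()
            if getattr(state, dim) >= limit]

# Example: a system whose process autonomy has drifted upward over time.
state = SystemState(decision_authority=0.4, process_autonomy=0.85,
                    accountability=0.9)
print(oversight_review_needed(state))  # ['process_autonomy']
```

In this toy reading, oversight categories are not fixed labels but functions of where a system currently sits along the three continua, which is the intuition behind the paper’s adaptive categories.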
Human-AI Governance (HAIG): A Trust-Utility Approach
In her solo-authored paper, Dr Engin lays the conceptual foundation for dimensional governance through the development of the Human-AI Governance (HAIG) framework. This work focuses on the evolving trust dynamics between humans and AI systems and argues that effective governance must not only mitigate risk but also calibrate trust in ways that maximise both societal benefit and institutional legitimacy.
The HAIG framework uses the same three dimensions—Decision Authority, Process Autonomy, and Accountability—but extends them across continua and identifies thresholds where changes in system capabilities or context demand new governance responses. It frames governance as a dynamic process and introduces a trust-utility orientation, which focuses on building appropriate and resilient trust relationships rather than applying fixed rules.
Drawing on case studies in healthcare and European regulation, the paper demonstrates how the HAIG model complements existing policy instruments, such as the EU AI Act, while also offering an anticipatory and context-sensitive alternative that adapts to the pace and direction of technological change. This article is also available as a preprint via arXiv.
Together, these publications represent a significant step forward in shaping how we understand and regulate the new generation of AI technologies. They reflect Data for Policy CIC’s commitment to building inclusive, evidence-based frameworks for technology governance, and they invite continued engagement from policymakers, scholars, and practitioners across disciplines and sectors.