July’25 Fireside Chat

Event Summary

How can we better understand and govern the information flows within AI systems?

In this thought-provoking Fireside Chat, Dr. Zeynep Engin (Director of Data for Policy CIC and Editor-in-Chief of Data & Policy) speaks with Dr. Ilan Strauss (Program Director of the AI Disclosures Project at the Social Science Research Council) and Sruly Rosenblat (Researcher for the AI Disclosures Project, SSRC) about the evolving challenges and opportunities in AI transparency.

Together, they explore how AI systems, particularly large language models, use, process, and produce information, and how making these processes more visible could shift power dynamics in digital markets.

Discussion Highlights

The session introduced key concepts and frameworks that can support greater transparency and accountability in AI systems:

Chain-of-Thought Reasoning: What is it, how do models use it, and how is it being trained and moderated? The speakers explore how chain-of-thought logs offer insight into a model’s reasoning—yet these logs are often hidden or summarised in ways that obscure the model’s real process.

Attribution Gaps in Web-Enabled Models: Web-connected LLMs increasingly draw on online sources to answer user queries, but they often fail to cite the full range of sources they consult. Dr. Strauss and Sruly Rosenblat present new findings from their research on the “attribution gap”: the mismatch between the number of sources consumed and those cited.
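As a rough illustration of the idea (this is a sketch, not the speakers' actual methodology or data), an attribution gap for a single response could be measured as the fraction of consulted sources that go uncited:

```python
# Hypothetical sketch: quantifying the "attribution gap" for one
# web-enabled LLM response. The function, URLs, and numbers below are
# illustrative assumptions, not the researchers' dataset or metric.

def attribution_gap(consulted: set[str], cited: set[str]) -> float:
    """Fraction of consulted sources that are never cited in the answer."""
    if not consulted:
        return 0.0
    uncited = consulted - cited
    return len(uncited) / len(consulted)

# Example: the model fetched five pages but cited only two of them.
consulted = {"a.example", "b.example", "c.example", "d.example", "e.example"}
cited = {"a.example", "c.example"}

print(attribution_gap(consulted, cited))  # → 0.6
```

Aggregating such a measure across many queries and models would give the kind of comparative picture the speakers discuss.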

Citation Efficiency & Model Design: Using data from the LMArena dataset, the speakers show how different models vary in their citation efficiency and content attribution practices based on factors like model architecture, local relevance, and moderation constraints.

Observability Layers & Traces: Why are traces, logs, and metrics vital for AI transparency? The conversation highlights how observability tools, such as backend traces showing which websites were visited and which sources were deemed most “relevant”, can serve as building blocks for accountability, monetization, and ethical oversight.
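To make the idea of a backend trace concrete, here is a minimal sketch of what one trace record for a web-enabled query might contain. The field names and relevance scale are assumptions for illustration, not a real provider's schema:

```python
# Hypothetical sketch of a per-query "trace" record of the kind the
# speakers suggest could underpin accountability and monetization.
# All field names and values are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class SourceVisit:
    url: str
    relevance: float  # assumed model-assigned relevance score in [0, 1]
    cited: bool       # whether the final answer attributed this source


@dataclass
class QueryTrace:
    query: str
    visits: list[SourceVisit] = field(default_factory=list)

    def cited_share(self) -> float:
        """Share of visited sources that were actually cited."""
        if not self.visits:
            return 0.0
        return sum(v.cited for v in self.visits) / len(self.visits)


trace = QueryTrace(
    query="What is the attribution gap?",
    visits=[
        SourceVisit("https://a.example/post", relevance=0.9, cited=True),
        SourceVisit("https://b.example/page", relevance=0.7, cited=False),
    ],
)
print(trace.cited_share())  # → 0.5
```

A log of such records is the kind of observability layer that could let outside parties audit which sources a model relied on and which went unacknowledged.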

Protocols as “Rules of the Road”: Referencing Tim O’Reilly’s (Co-Director of the AI Disclosures Project at the SSRC) work, the speakers describe how technical protocols help structure the digital environment, shape interoperability, and establish norms of behavior, laying the groundwork for transparency and market fairness.

Countervailing Power in the Age of AI: Drawing on the work of economist John Kenneth Galbraith, the chat proposes that transparency tools and information flows could help form institutional counterweights to the dominance of major AI providers. Examples include ad blockers, bot-blocking systems, and content negotiation platforms.

This conversation blends economic insight, technical depth, and policy relevance, making it essential viewing for researchers, technologists, and policymakers working at the intersection of AI, information governance, and market design.

We look forward to the next session in our Fireside Chat series, where we will continue to explore critical issues shaping the future of data governance, AI regulation, and policy innovation.

About the Speakers:

Dr. Ilan Strauss is Program Director of the AI Disclosures Project at the SSRC. He is an Honorary Senior Fellow at the UCL Institute for Innovation and Public Purpose (London), where he was head of digital economy research on a multi-year Omidyar Network funded research project. He is a Visiting Associate Professor at the University of Johannesburg. Ilan was the joint recipient of an Economic Security Project grant investigating Big Tech’s acquisitions of technological capabilities. He previously taught macroeconomics at New York University (Division of Applied Undergraduate Studies) and at Rice University (Jones Graduate School of Business), and has consulted widely for United Nations bodies.

Sruly Rosenblat is a researcher for the AI Disclosures Project. He recently graduated from Hunter College, where he received a bachelor’s degree in computer science. He has long enjoyed experimenting with language models and is always looking forward to the opportunity to dive further.