The Data for Policy 2016 conference began with strong support from major institutions such as the University of Cambridge and the European Commission. Building on the success of last year's inaugural event, six major themes formed the basis for discussion throughout the conference: the revolution in government service provision using ‘Big Data’; data collection and its ethical implications; computational methods and technical challenges; methods of data collection and modelling; new and emerging sources of data; and, finally, the application of data to policy areas both new and old. Now in its second year, the conference provided a space for ideas and people to mix, allowing academics, policy makers and technical experts to engage in healthy discussion with people from beyond their own disciplines.
In the words of Professor Anthony Finkelstein, Chief Scientific Advisor for National Security to the UK government:
“Data for Policy provides a unique opportunity to have a dialogue across many disciplinary boundaries; it’s a forum in which we can ask questions, move collectively towards solutions and share methods. I think it’s invaluable.”
Professor Enrico Giovannini from the University of Rome ‘Tor Vergata’ added that conferences like Data for Policy “are very important to share good practices and identify the latest developments.” Meanwhile, David Mair, a Head of Unit at the Joint Research Centre of the European Commission, appreciated the collaborative aspects of the conference:
“It’s fantastic to come here and take the temperature of what’s going on in this field and hopefully out of this we can begin to look ahead and see where the new connections are going to be made between ideas and data science.”
Attendees came from a wide range of disciplines and institutions, from Professor Andrew Blake of the Alan Turing Institute to Maive Rute, Deputy Director-General of the Joint Research Centre at the European Commission. This close co-operation between policy makers and data scientists is unique in the UK and, indeed, the world. Prestigious institutions such as the United Nations, the Office for National Statistics and the World Bank were represented among the attendees. Special sessions were also organised throughout the conference by teams from Technopolis Group, GovLab, Leiden University and the Bank of England. Technopolis Group’s session, chaired by Cristina Rosemberg, discussed the use of ‘Big Data for Science and Innovation Policy’, with presenters from the National Physical Laboratory, Nesta and Innovate UK. The Bank of England’s session covered several topics, including the use of big data for systemic risk assessment and ‘History Dependence in the Housing Market’.
The main sessions which took place throughout the conference were formed through a rigorous peer-review process after a Call for Proposals was issued in March; of the large number of submissions received, only 50% were selected for presentation. The opening plenary session featured talks from a number of contributors. In her opening lecture, Helen Margetts, Director of the Oxford Internet Institute, discussed ‘Platform Government and a Data Science for Policy-making’, while Philip Treleaven, Professor of Computing and Director of the Financial Computing Centre at UCL, gave a talk entitled ‘Algorithmic Regulation: Using Blockchain smart contract technology to revolutionise Financial Regulation’. The main body of the conference was formed by eighteen regular panels discussing a variety of topics. Dr Rayid Ghani, Director of Data Science and Public Policy at the University of Chicago, gave a lecture entitled ‘Doing practical data science for social good and public policy’. Quentin Palfrey, Executive Director of J-PAL at the Massachusetts Institute of Technology and a former senior advisor at the White House, highlighted the benefits of ‘Big Data’ in reducing poverty and inequality in his talk entitled ‘Translating Rigorous Evidence into Policies that Benefit the Poor’.
“One of the things that’s difficult in determining whether a social programme works is knowing what would have happened if you hadn’t had the intervention,” said Mr Palfrey, “so what we do is take the methods of scientific inquiry that have transformed medical science, such as randomised controlled trials, and we apply them to large-scale social programmes. Conferences like Data for Policy are very helpful tools for increasing the dialogue between the scholarly community and policy makers about what works, about what resources exist to help inform policymaking and also to teach policy makers how to be effective producers and consumers of evidence.”
The conference ended with a closing plenary session which involved a number of different speakers. One such speaker was Barbara Ubaldi from the Organisation for Economic Co-operation and Development, whose lecture was entitled ‘A Data-driven Public Sector for Sustainable and Inclusive Governance’.
“I think the biggest challenge in the next five years…is moving from concepts to reality,” said Ms Ubaldi, “this means focusing on implementation – we need to implement these ‘nice’ ideas. This requires greater connectivity between the public and private sectors, as well as academia. Conferences like Data for Policy are good because on the one hand they raise awareness and on the other they bring together stakeholders that normally would not interact with one another, and they bridge the interactions – like how to make concrete use of data and technology to inform policy issues.”
A number of interesting points emerged over the course of the conference. Maive Rute from the European Commission drew attention to a “growing scepticism among the public towards scientific experts and the reliability of data and science in general in a ‘post fact’ climate” – a broad and cogent argument, especially given recent political developments around the world that have shown a growing unwillingness to heed the advice of experts. Similarly, in his talk entitled ‘Data for Policy: A Myth or a Must?’, Enrico Giovannini claimed that “the value added by statistics depends on their relevance, trust in the media and the consumers’ literacy” and went on to say, very powerfully, that “data is the lifeblood of democracy.” In his popular and witty presentation, Professor Jim Waldo of Harvard University highlighted the problem more succinctly when he said that “in data science, the science is only as good as the data”. Building on this theme and focusing more on praxis, other speakers such as Sarah Giest of Leiden University posed the question: “why don’t computer scientists know how policy works?”, and Dr Rayid Ghani from the University of Chicago spoke for many when he said that “training is needed on both sides – computer scientists and government officials – to use data effectively.”
Alongside the conference a poster competition was also run featuring presentations by students from University College London, Carnegie Mellon University, Birmingham University and many other institutions. The overall winner of the competition, which relied on votes from attendees over the course of the conference, was Maria De-Arteaga from Carnegie Mellon University for her poster entitled ‘Discovery of Complex Anomalous Patterns of Sexual Violence in El Salvador’.
The Data for Policy 2016 conference ran from 15 to 16 September. It took place in the pleasant surroundings of the University of Cambridge’s Computer Laboratory, making use of its modern lecture halls and superb catering facilities – attendees were well taken care of and had many opportunities to mingle informally and share ideas.
The conference also offered extensive multimedia services, including video platforms and social media such as Twitter. Videos of all the keynote and plenary sessions can be found on the Data for Policy YouTube channel alongside individual interviews with key participants; further information and real-time commentary are available on the Data for Policy Twitter account.
Data for Policy Twitter: https://twitter.com/dataforpolicy
Data for Policy website: https://dataforpolicy.org/
Data for Policy YouTube: https://www.youtube.com/channel/UCsJUrj-FJ4qT9eNuTOPyYNQ
(Report by Nathaniel Hayward)
Data for Policy 2016 – Frontiers of Data Science for Government: Ideas, Practices, and Projections
15-16 September 2016, Cambridge
Data Science is emerging as a key interdisciplinary research field for addressing major contemporary challenges across sectors. A particular focus on the government sector offers huge potential to advance citizen services and collective decision-making processes. To reflect the diversity of skills and knowledge required to tackle challenges in this domain, the conference offered an open discussion forum for all stakeholders. Data for Policy 2016 invited individual and group submissions from all relevant disciplines and application domains. Topics covered included, but were not limited to, the following:
- Government & Policy: Digital era governance and citizen services, public demand vs. government response, using data in the policy process, open source and open data movements, policy laboratories, citizen expertise for government, public opinion and participation in democratic processes, distributed data bases and data streams, information and evidence in policy context, case studies and best practices.
- Policy for Data & Management: Data collection, storage, and access; psychology/behaviour of decision-making; privacy, trust, public rights, free speech, ethics and law; data security/ownership/linkage; provenance, curation, expiration; private/public sector/non-profit collaboration and partnership, etc.
- Data Analysis: Computational procedures for data collection, storage, and access; large-scale data processing, dealing with biased/imperfect/uncertain data, human interaction with data, statistical/computational models, technical challenges, communicating results, visualisation, etc.
- Methodologies: Qualitative/quantitative/mixed methods, gaps in theory and practice, secondary data analysis, web scraping, randomised controlled trials, sentiment analysis, Bayesian approaches and graphical models, biologically inspired models, real-time and historical data processing, simulation and modeling, small area estimation, correlation & causality based models, and other relevant methods.
- Data Sources: Government administrative data, official statistics, commercial and non-profit data, user-generated web content (blogs, wikis, discussion forums, posts, chats, tweets, podcasting, pins, digital images, video, audio files, advertisements, etc.), search engine data, data gathered by connected people and devices (e.g. wearable technology, mobile devices, Internet of Things), tracking data (including GPS/geolocation data, traffic and other transport sensor data, CCTV images, etc.), satellite and aerial imagery, and other relevant data sources.
- Policy/Application Domains: Security, health, cities, public administration, economy, science and innovation, finance, energy, environment, social policy areas (education, migration, etc.) and other relevant domains.
Conference partners:
- University of Cambridge – Computer Laboratory, Centre for Science and Policy, Cambridge Big Data Strategic Research Initiative, Digital Humanities Network, Cambridge Public Policy Initiative
- European Commission • Alan Turing Institute • Imperial College London – Data Science Institute
- London School of Economics and Political Science – Department of Methodology
- University College London – Department of Computer Science, UCL Public Policy, The Bartlett – UCL Faculty of the Built Environment
- University of Oxford – Oxford Internet Institute
- Office for National Statistics
- Royal Statistical Society • New York University – The GovLab, Open Governance Research Exchange
- Leiden University – Centre for Innovation
- Technopolis Group