Special Track 3
Security and Justice Policy in the Age of Algorithms and AI
Special Track Chairs:
- Brecht Weerheijm, Leiden University
- Ritten Roothaert, VU Amsterdam
- Sarah van Gerwen, VU Amsterdam
Description
The use of algorithms, artificial intelligence (AI) and big data analytics in the domain of security and justice has garnered much attention from the general public, policymakers and researchers in recent years (see e.g. Perry et al., 2013). On the one hand, algorithmic systems are promoted as offering a less biased approach to law enforcement and as possibilities to increase the effectiveness and efficiency of security agencies (Brayne, 2017, pp. 981–982). On the other hand, there are ample worries that the use of ‘black box’ algorithmic systems may result in unfair, unaccountable and unexplainable decision-making (e.g. Mehozay & Fisher, 2019). In short, different values collide when algorithmic- and AI-supported decision-making is implemented in the security sector (e.g. Meijer et al., 2021). Ensuring a good fit between technology and the nature of the work in the security domain is thus essential not only for establishing trustworthy public organizations, but also for establishing sustainable cooperation between human decision-makers and automated tools (Akata et al., 2020). This goes beyond the domain of policing: many other organizations in the broader security domain (with varying levels of visibility to the public) are engaged in algorithmic- and AI-supported decision-making, such as intelligence agencies (e.g. Dorton & Harper, 2022; Vogel et al., 2021) and actors in the justice domain (e.g. Christin, 2017), warranting more cross-domain research.
With uncertainty about the implications of the EU’s AI Act for matters of security and law enforcement (Levano, 2024), the questions of how security policies can safeguard public values, establish meaningful collaboration between humans and machines, and put effective accountability mechanisms in place become all the more relevant. At the same time, policies should be designed so that the potential of AI and algorithmic systems for security is not wasted.
This special track aims to address these questions by inviting contributions from various perspectives, such as public administration, computer science, legal studies and ethics. The track will provide an opportunity for different disciplines to connect and to study how effective policies can be designed for fair and technically sound algorithmic- and AI-supported decision-making.