Special Track 4
Title: Ethical Technology Adoption in Public Administration Services
Francesco Mureddu, The Lisbon Council (corresponding author)
Giovanna Galasso, PwC IT Consulting Public Sector
Francesco Paolo Schiavo, Ministry of Economics and Finance
The so-called Disruptive Technologies (DTs) now offer the possibility of dramatically increasing the efficiency of the business processes typically operated by Public Administrations (PAs) and public service providers. These technologies enhance the data and information used for decision making and enable the automation of processes. At the same time, Disruptive Technologies are often interaction-based solutions developed through extensive use of Learning Functions, which makes them particularly prone to ethics-related issues.
Available studies on the responsible adoption of Disruptive Technologies – produced both by academic and research organisations and by institutions – are still mostly theoretical. However, scientific advancements and practical solutions in this domain are fundamental to unlocking the use of DTs in line with the principles recommended at EU level (and often shared at national level), including, among others, accountability, privacy, sustainability, trustworthiness, security, transparency and interpretability, openness, fairness, safety, and respect for human agency. Attending to these aspects significantly reduces the exposure to unnecessary risks of malicious exploitation, manipulation, and unforeseen consequences, thus allowing the full potential of DTs to be realised. Such an assessment requires identifying the specific risks that can arise from violating ethical principles (such as performance risks due to model biases, security risks arising from adversarial attacks, or control risks deriving from unexpected behaviours), and defining a way to measure the resulting social impacts. Another compelling characteristic of DT-based applications/systems (DTAs) is their capability to interact directly with humans. On the one hand, this yields immediate and easy engagement, similar to what users are increasingly accustomed to in everyday life. On the other hand, it raises critical concerns about the potential physical and psychological harm such systems can cause. The social impacts of DTs – positive or negative – and the ethical risks related to their usage are manifold and require a thorough, multidisciplinary analysis to maximise the benefits while minimising the risks.
The legal landscape related to DTAs is still neither clear nor well defined. Although many regulations – e.g. the GDPR, the General Product Safety Directive, and the Product Liability Directive – provide a general legal framework for DTAs, they show limitations and do not completely cover the emerging technology-driven risks.
This raises the following question:
- How can we correctly evaluate and govern the ethical implications of DTs adoption by Public Administrations and Public Service providers, taking into account the impact that these technologies can have on citizens’ lives? How can we securely manage DTs and the data they process?
The question will be addressed by the recently started (November 2020) project ETAPAS – Ethical Technology Adoption in Public Administration Services Analytics (H2020-DT-TRANSFORMATIONS-02-2020) – by introducing three main tools:
- A Responsible Disruptive Technology Framework (RDT Framework) including a Code of Conduct (CoC) setting out ethical principles, an overview of ethical risks and social impacts of DTs, RDT indicators to practically measure those risks and impacts, and a European legal framework;
- A Governance model providing PAs with guidelines for identifying the relevant indicators and metrics for DT assessment, defining an accountability model, measuring, monitoring, and analysing DTAs, and undertaking risk mitigation actions;
- A prototypical software platform for ethical assessment that, based on the defined conceptual framework, will support the assessment of DT-based applications for public services by measuring their respective ethical risk levels and evaluating the effectiveness of the corresponding mitigation actions.
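To make the third tool concrete, the following Python fragment sketches how such an assessment platform might quantify risk: a set of hypothetical RDT indicators is scored per ethical principle and aggregated into a coarse risk level. All names, weights, and thresholds here are illustrative assumptions, not the project's actual framework.

```python
from dataclasses import dataclass

# Hypothetical RDT indicators: each scores one ethical principle on [0, 1],
# where higher values mean higher risk. Names and thresholds are illustrative
# assumptions, not the actual ETAPAS framework.
@dataclass
class RDTIndicators:
    fairness_risk: float        # e.g. disparity in error rates across groups
    transparency_risk: float    # e.g. lack of interpretable decision traces
    security_risk: float        # e.g. exposure to adversarial inputs
    control_risk: float         # e.g. likelihood of unexpected behaviours

def aggregate_risk(ind: RDTIndicators, weights=None) -> float:
    """Weighted average of the indicator scores, in [0, 1]."""
    scores = [ind.fairness_risk, ind.transparency_risk,
              ind.security_risk, ind.control_risk]
    weights = weights or [1.0] * len(scores)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

def risk_level(score: float) -> str:
    """Map an aggregate score to a coarse risk level."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"

# Illustrative assessment of a DTA, e.g. a municipal chatbot.
chatbot = RDTIndicators(fairness_risk=0.4, transparency_risk=0.7,
                        security_risk=0.2, control_risk=0.3)
print(risk_level(aggregate_risk(chatbot)))  # "medium" for these scores
```

In practice, each indicator would itself be derived from measurements of the deployed system, and the mapping from scores to risk levels would be calibrated per application domain; the fragment only illustrates the aggregation step.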
Our approach aims to make these tools as practical as possible, supporting PAs in the responsible adoption of DTs through a careful assessment of the trustworthiness of their development and use. This includes the analysis of a large variety of multidisciplinary aspects, spanning technical ones – e.g. security and accountability of the employed algorithms – ethical and legal implications – e.g. value-aligned actions, discrimination of groups, and data privacy protection – and social and governance ones – e.g. transparent and interpretable decision-making.
The project will focus on those DTs that will be tested in the selected use cases:
- Ethically Responsible Big Open Data: the ETAPAS framework will identify possible ethical issues/risks and impacts, and propose mitigation actions and recommendations for the big data analysis and the publication of anonymised open datasets in line with the legal framework, including the GDPR;
- Municipality chatbot: the chatbot Kari is an AI-based virtual agent available to the citizens of 80 Norwegian local municipalities – covering about 30% of the Norwegian population – to answer citizens' questions, such as requests for general information on municipal services. Kari has substantial natural language understanding capabilities based on underlying AI models; the ETAPAS framework will assess – among other things – the risks involved in discriminating between several thousand intents from users' free-text input;
- Public Organizations Multi-factor Misinformation Handling: the deployment of AI for fake news detection and the prioritisation of emerging issues in the municipality of Katerini will be assessed by the ETAPAS framework to identify all relevant ethical and social risks and propose mitigation actions that eliminate relevant risks and unforeseen consequences;
- Robot-mediated rehabilitation: the use case will focus on robots used for the assessment of patients' walking abilities. Human-machine interaction poses various ethical, social and legal challenges in terms of verbal and physical interaction, psychological relationship, and data management. These will be assessed by the ETAPAS framework.
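One concrete fairness indicator relevant to use cases like the chatbot is the gap in recognition accuracy across user groups. The sketch below computes that gap from logged (group, predicted intent, true intent) records; the group names and sample data are synthetic illustrations, not ETAPAS measurements.

```python
from collections import defaultdict

# Minimal sketch of one fairness indicator: the largest gap in
# intent-recognition accuracy across user groups. The data below is
# synthetic; a real assessment would use logged interaction records.
def accuracy_gap(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns the largest difference in per-group accuracy, in [0, 1]."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    accuracies = [hits[g] / totals[g] for g in totals]
    return max(accuracies) - min(accuracies)

# Hypothetical groups and intents for illustration only.
sample = [
    ("native_speakers", "waste_pickup", "waste_pickup"),
    ("native_speakers", "school_enrolment", "school_enrolment"),
    ("non_native", "waste_pickup", "waste_pickup"),
    ("non_native", "school_enrolment", "opening_hours"),
]
print(accuracy_gap(sample))  # 0.5: the two groups differ by 50 points
```

A large gap would flag a potential discrimination risk and trigger mitigation actions such as retraining on more representative data; the acceptable threshold is a policy choice, not something the metric itself provides.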
The work carried out in the project is clearly relevant to the conference, as the use cases display the use of data for decision making in public administration.
The choice is in line with the new EC’s “Shaping Europe’s Digital Future” strategy launched in February 2020, namely:
- The European Strategy for Data highlights that the success of Europe’s digital transformation in the public and private sectors over the next five years will depend on establishing effective frameworks to ensure trustworthy technologies;
- The same strategy also supports data sharing, in compliance with ethical principles and in full respect of the privacy and security of citizens, stating that data generated by the public sector, and the value created from them, should be made available for the common good by ensuring that they can be used by researchers, other public institutions, SMEs or start-ups. This is also coherent with the recommendations of the Report “Towards a European Strategy on business-to-government data sharing for the public interest”;
- Finally, the EC White Paper on Artificial Intelligence – A European approach to excellence and trust also supports “the development and uptake of ethical and trustworthy AI across the EU economy. AI should work for people and be a force for good in society”, and stresses the importance that public administrations, hospitals, …, and other areas of public interest rapidly begin to deploy products and services that rely on AI in their activities.
 Class of algorithms and models that allows a machine to infer how to perform a task from a given set of examples, without executing explicit instructions. Examples of Learning Functions are Decision Trees, Neural Networks, Support Vector Machines, etc.
 Patil, D.J., Mason, H., Loukides, M., Ethics and Data Science, O’Reilly Media Inc. 2018
 Princeton, Dialogue on AI and Ethics. Workshop Series on AI and Ethics, 2017-18
 Siau, K., & Wang, W., Building Trust in Artificial Intelligence, Machine Learning, and Robotics. Cutter Business Technology Journal, 31(2), 47-53. 2018
 European Commission, The European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG), “A definition of AI: Main capabilities and scientific disciplines”, 2019.
 European Commission, The European Group on Ethics in Science and New Technologies, “Statement on Artificial Intelligence, Robotics, and ‘Autonomous’ Systems”, 2018
 The ability of a system to explain its decision-making processes to humans in a clear and comprehensible manner.
 The ability of a system to treat individuals within similar groups in a fair manner, without favouritism or discrimination, and without causing or resulting in harm, whilst maintaining respect for the individuals behind the data and refraining from using datasets that contain discriminatory biases.
 The ability of a system to operate without compromising the physical safety or mental integrity of humans throughout its operational lifetime.
 European Commission, White Paper on Artificial Intelligence – A European approach to excellence and trust, 2020
 Eisenberger, I., “Das Gesetz der Technik: Recht und Innovation” (“The Law of Technics: Law and Innovation”), 2019.
 Barcevičius, E., Cibaitė, G., Codagnone, C., Gineikytė, V., Klimavičiūtė, L., Liva, G., Matulevič, L., Misuraca, G., Vanini, I., Editor: Misuraca, G., Exploring Digital Government transformation in the EU – Analysis of the state of the art and review of literature, EUR 29987 EN, Publications Office of the European Union, Luxembourg, 2019, ISBN 978-92-76-13299-8, doi:10.2760/17207, JRC118857.