Area 4: Ethics, Equity, and Trustworthiness
The rapid adoption of large language models (LLMs) and other AI technologies has sparked significant debate among experts and the public. While some caution that AI poses existential risks on a global scale, others emphasize more immediate concerns, such as the concentration of power, systemic inequalities, and the potential harm to the information ecosystem and the environment.
The opacity and complexity of data-driven insights, coupled with the increasing role of artificial intelligence in social and policy contexts, distinguish the current data science and AI revolution from earlier technological shifts. Unlike previous disruptions, which primarily transformed economic and social structures, the impact of AI extends to human intelligence, knowledge generation, and cognitive capabilities. As a result, the public and policymakers are increasingly concerned about unintended consequences, the potential for malicious use, and the possibility of losing human control over these technologies.
There is growing awareness that data-driven technologies may replicate, and even exacerbate, existing flaws in human decision-making. These include systemic inequalities, power imbalances, exploitation, and the reinforcement of societal divisions. Addressing these challenges has become a computational task that intersects with areas traditionally explored by the social sciences and humanities.
Emerging fields within AI, such as algorithmic fairness, aim to ensure that data-driven systems behave in a "fair" manner by analyzing and embedding ethical considerations in machine learning processes. Other important areas of focus include algorithmic transparency, explainability, and interpretability (Weller 2017), which seek to make AI systems more understandable and trustworthy. Interoperability (Brown and Korff 2022), accountability (Binns 2018), and contestability (Lyons, Velloso, and Miller 2021) are also gaining attention as key factors in ensuring that AI technologies align with societal values and can be effectively governed.
These developments highlight the need for ongoing dialogue and collaboration between policymakers, technologists, and the public to navigate the ethical and social implications of AI and data science. Understanding public attitudes towards data privacy and trust, especially in the wake of global events like the COVID-19 pandemic, is crucial for shaping future policies that protect and empower all demographic groups.
Area 4 Committee Members:
- Tristan Henderson, University of St Andrews, UK
- Alexander Monea, George Mason University, USA
- Mustafa Ozbilgin, Brunel University, UK
- Jeannie Paterson, University of Melbourne, Australia
- Nydia Remolina Leon, Singapore Management University, Singapore
- Adrian Weller, The Alan Turing Institute, UK
Copyright © 2024 – All Rights Reserved.