10.10 Keynote & Q&A, Dr Abeba Birhane, UCD / Mozilla Foundation
Talk Title: Automating Injustice
Complex adaptive systems (e.g., human behaviour and social systems) are inherently dynamic, messy, ambiguous, incompressible, non-determinable, and non-predictable. Due to their incompressibility, neither datasets nor models can capture complex systems in their entirety. Instead, large-scale datasets and predictive models pick up societal and historical stereotypes and injustices and are marred by various failures. Subsequently, individuals and groups at the margins of society pay the highest price when AI systems fail, while the most privileged and powerful corporations benefit. Yet, discussions of AI ethics tend to be abstract and based on visions of alternative realities that are far-fetched, sci-fi-based, and devoid of current concrete realities. In this talk, I: i) emphasize the challenges of modelling complex behaviour, ii) argue that building equitable algorithmic systems requires looking beyond technical solutions towards broader structural rethinking, and iii) highlight that visions of alternative realities need to be informed by and grounded in current realities.