Little more than half a century after its initial development, computer code is extensively and intimately woven into the fabric of our everyday lives: from the electric toothbrush packed with microcontrollers to the systems controlling and guiding entire urban infrastructures. Algorithms, code and digital data are shaping our understanding of the world. They enable us to reach a broader network of information and to get in touch with other individuals and cultures, and through automation we can speed up existing practices and create new, previously unknown forms of interacting with each other and with technology (see Kitchin and Dodge 2011). Computer code is deeply anchored in how modern society is designed; we often do not even see the programmed architectures surrounding us (see Stalder 2018). Furthermore, it plays a crucial role in ongoing societal challenges in general and in the implications of algorithmic decision-making in particular (see Eubanks 2018).
The problematic field that arises can be described as the relation between the poles of disruption and synthesis: while algorithms can have a disruptive impact, they can also bring together very heterogeneous data, cultures and social contexts. This complexity stems from the somewhat paradoxical structure of algorithms, code, and social as well as automated interaction.
Following the outlined problem, this proposal discusses how automated mechanisms can lead to exclusion and how to confront them, focusing on bias in algorithms and on developing practices against bias in algorithmic systems. Bias in computational systems takes many forms: boarding software at the airport that offers no options for passengers with disabilities, voice recognition that performs worse for female voices than for male ones, or health care coverage accidentally revoked by algorithmic mechanisms. It can be observed that algorithms can lead to a silent exclusion of various social groups in different contextual settings.
The research question addresses this highly topical issue by asking for methods and practices to foster diversity through tinkering with and hacking algorithmic systems in non-formal settings. How can tinkering with algorithms help raise awareness of diversity and thereby identify bias in a specific computational system, tool or service? How can media pedagogy engage with such a distinctly technical field and foster empowerment in a broader sense of code literacy?
The talk will then focus on specific manifestations of algorithmic decision-making and their implications for social exclusion, arguing that addressing diversity issues in algorithmic systems and architectures helps us to gauge the complexity of these systems and to strengthen our understanding of them in general.
One of the key arguments is that we cannot rely on formal educational processes and efforts alone, since they tend to reproduce common, traditionally established patterns (see Allert and Richter 2016), while the societal context has already shifted towards a digital culture. It is therefore important to bring in new perspectives on how to discuss the complexity of algorithms and their impact on social structures. Tinkering with computer code, but also with hardware, can be understood as a creative learning practice that, thanks to mass production, is affordable and accessible to a wide audience.
Diversity becomes relevant in two ways: on the one hand, it can be seen as an effort to deconstruct existing systems; on the other hand, it might bring in new perspectives and generate innovative, engaging ideas that ultimately foster diversity, work against social exclusion and regain at least a measure of individual autonomy.