Critiquing and Rethinking Accountability, Fairness, and Transparency

By Seda Gürses, Seeta Peña Gangadharan, and Suresh Venkatasubramanian

In this post, Seeta Peña Gangadharan joins forces with Seda Gürses and Suresh Venkatasubramanian to briefly canvass research and works that challenge studies of fairness, accountability, and transparency in statistical and automated decision systems.

FAT* stands for fairness, accountability, and transparency, and it refers to a growing field of work on automated and statistical decision-making algorithms that can be deployed in information systems. Such systems

filter, sort, score, recommend, personalize, and otherwise shape human experience, increasingly making or informing decisions with major impact on access to, e.g. credit, insurance, healthcare, parole, social security, and immigration.  

ACM FAT* conference website

The field aims to recognize and mitigate potential unfair outcomes of such systems, often by using mathematical and legal mechanisms that affect the design of algorithms and the use of underlying datasets. This work mostly comes out of academia and research institutions, and it has received much public attention. Proposed mechanisms are increasingly leveraged by private or public entities, often in collaboration with civil society initiatives.

The success of FAT* has been remarkable, and especially of interest to industries that build such systems. At the same time, the field’s success has attracted much critique and renewed attention to the limitations of achieving fairness, accountability, and transparency in these data-driven systems.

A number of works acknowledge that addressing societal problems embedded in such computing systems may require more holistic approaches. These works draw on a variety of methods, not just academic research, and they appeal to diverse theories, frameworks, and histories that challenge and expand the scope of FAT* studies.

For example, publishing seminal work in the 1990s, Gandy paved the way for subsequent generations to think about computer decision systems in relation to social control, money, and path dependencies. More recently, Barocas has described industry’s adoption of fairness in machine learning as “whitewashing,” while Nissenbaum and Powles argue that when companies say they’ll fix biased A.I., they still accumulate wealth and power by deploying the “fixed” systems. Overdorf et al. argue that companies deploying such systems lack the motivation, and at times the ability, to address the harmful, unintended consequences of automated systems that “optimize” results toward profit interests (think of the side effects on local residents when a navigation app reroutes traffic through their neighborhood, or of systems that under-serve already marginalized neighborhoods).

In other instances, critics draw from alternate theories of social justice to call out tech solutionism. For instance, Hoffman criticizes FAT* frameworks for “mirroring some of anti-discrimination discourse’s most problematic tendencies.” Gangadharan, Keyes, and Benjamin have drawn “red lines” around the development of automated systems, calling for refusal, moratoriums, and abolition of systems deployed in contexts that deepen oppression and domination against marginalized groups.

Reflecting on the history and context of FAT*, some critics explicitly question assumptions in technical approaches by drawing insights from historical examples (e.g., educational testing; Hutchinson and Mitchell) and by incorporating messy social contexts into development processes (Selbst et al.). Others underline the importance of lived experience (Abdurahman, Eubanks) and the problems of tech “solutionism” (Morozov) and technocentric elitism (Costanza-Chock, Gangadharan), or propose taking a longer view on the “algorithmic reiteration” of systemic forms of subordination (Ali, Chun, Lentin). Theoretical reflections on alternatives grounded in data justice (Dencik et al., Heeks and Renken, Taylor) and human rights (McGregor et al.) frameworks have also found echoes in events organizing strategies of resistance (see Bandung du Nord, AsiaFAT, Data4BlackLives, Our Data Bodies).

In the coming months, as more people engage in this line of thinking (including through CRAFT, a program we’re co-organizing that aspires to bring these different voices together to craft further reflection and critique), we plan to publish a fuller bibliography of essential reading from this growing body of knowledge and practice.