Algorithmic Justice League (“AJL”)

The Algorithmic Justice League (AJL) emerged in 2016 as a response to the unchecked proliferation of biased artificial intelligence. A decentralized, international initiative, AJL merges art, policy-making, and data analysis to demystify algorithmic oppression. Through projects combining computer science and art, AJL raises awareness of the potential for AI systems to perpetuate harmful biases and discrimination.

Founded by AI researcher Dr. Joy Buolamwini at the MIT Media Lab, AJL combines art and research to illuminate the social implications and harms of artificial intelligence. Buolamwini coined the term “the coded gaze” after commercial facial analysis software failed to detect her face until she covered it with a white mask, an experience she recounted in her 2016 TED talk. AI systems frequently perpetuate racism, sexism, ableism, and other harmful forms of discrimination, presenting significant threats to society in domains ranging from healthcare to economic opportunity to the criminal justice system. Through audits, storytelling, and advocacy, AJL works to expose the racism and sexism baked into facial recognition systems, large language models, and algorithms of all sorts.

In 2020, AJL won the Award of Distinction in the Digital Communities category at Prix Ars Electronica; in 2021, Fast Company named AJL one of the “10 most innovative AI companies in the world”; and in 2024, AJL received the Archewell Foundation Digital Civil Rights Award from the NAACP. As of 2025, AJL’s work has been covered by more than 45 major press outlets, including The New York Times, TIME, the BBC, Forbes, and Bloomberg.

City

N/A

Country

Global

Region

Global

Year of Creation

2016

Featured Project

The Gender Shades Project
The Gender Shades Project began in 2016 as the focus of Dr. Buolamwini’s MIT master’s thesis, inspired by her own struggles with facial detection systems. In 2018, she and Dr. Timnit Gebru published the widely cited “Gender Shades” paper, which audited commercial gender classification systems (software that predicts a person’s gender from a headshot image) from IBM, Microsoft, and Face++. By evaluating each system across intersectional subgroups defined by skin type and gender, the study showed that all three performed markedly worse on darker-skinned women than on lighter-skinned men, providing powerful evidence of algorithmic bias. A follow-up study, “Actionable Auditing,” conducted by Deborah Raji and Dr. Buolamwini, demonstrated similar disparities in Amazon’s Rekognition system.

With over 8,500 citations as of July 2025, the Gender Shades paper has shaped industry practice, policy agendas, and academic research on facial recognition technologies, algorithmic auditing, and the harms AI systems can cause. As part of the project’s 5th Anniversary Celebration, the Algorithmic Justice League is releasing a commemorative visualization and making the original API responses available to the public for the first time.

AJL selected Robert Williams as the first “Gender Shades Justice Award” recipient. Williams was wrongfully arrested in front of his family after a facial recognition system misidentified him, and he has courageously spoken out about his experience. As researchers and practitioners explore pathways to equitable and accountable AI, we must keep in mind that it is possible to create a world where data is not destiny and your hue is not a cue to dismiss your humanity.
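The core computation behind an audit of this kind is simple enough to sketch. The example below is a minimal illustration, not the study’s actual pipeline: the records, subgroup labels, and function names are hypothetical, standing in for a real annotated benchmark such as the Pilot Parliaments Benchmark used in the paper. It shows the essential step of disaggregated evaluation: computing accuracy per intersectional subgroup rather than overall, then reporting the gap between the best- and worst-served groups.

```python
# Minimal sketch of an intersectional accuracy audit in the spirit of
# Gender Shades. All data below is illustrative, not from the study.
from collections import defaultdict

# Hypothetical audit records: (subgroup, true_label, predicted_label).
# Subgroups cross binary gender with a lighter/darker skin-type split,
# a simplification of the Fitzpatrick-scale groupings used in the paper.
records = [
    ("darker_female",  "female", "male"),
    ("darker_female",  "female", "female"),
    ("darker_male",    "male",   "male"),
    ("lighter_female", "female", "female"),
    ("lighter_male",   "male",   "male"),
    ("lighter_male",   "male",   "male"),
]

def subgroup_accuracy(records):
    """Return classification accuracy broken out by subgroup."""
    correct, total = defaultdict(int), defaultdict(int)
    for subgroup, truth, prediction in records:
        total[subgroup] += 1
        correct[subgroup] += (truth == prediction)
    return {g: correct[g] / total[g] for g in total}

accuracies = subgroup_accuracy(records)
for subgroup, acc in sorted(accuracies.items(), key=lambda kv: kv[1]):
    print(f"{subgroup:>15}: {acc:.0%}")

# The headline audit metric: the gap between best- and worst-served groups.
gap = max(accuracies.values()) - min(accuracies.values())
print(f"accuracy gap between best and worst subgroup: {gap:.0%}")
```

This disaggregated view is what made the original findings so striking: Gender Shades reported error rates as high as 34.7% for darker-skinned women on systems whose error rates for lighter-skinned men stayed below 1%, a disparity an aggregate accuracy number would have hidden.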

Resources

“2024 Forbes Power Women Summit: Preserving Humanity in AI.” Forbes, 16 Sep. 2024, https://www.forbes.com/video/3d3470d2-b532-486e-814e-70f7db448672/2024-forbes-power-women-summit-preserving-humanity-in-ai/.

Buolamwini, Joy Adowaa, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of the 1st Conference on Fairness, Accountability and Transparency, edited by Sorelle A. Friedler and Christo Wilson, Proceedings of Machine Learning Research, vol. 81, 2018, pp. 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html.

Buolamwini, Joy. “How I’m fighting bias in algorithms.” TED, Nov. 2016, https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms.

LoJacono, Matt. “Dr. Joy Buolamwini on Algorithmic Bias and AI Justice.” Sanford School of Public Policy, Duke University, 19 Feb. 2025, https://sanford.duke.edu/story/dr-joy-buolamwini-algorithmic-bias-and-ai-justice/.

Mosley, Tonya. “‘If you have a face, you have a place in the conversation about AI,’ expert says.” NPR, 28 Nov. 2023, https://www.npr.org/2023/11/28/1215529902/unmasking-ai-facial-recognition-technology-joy-buolamwini.

Samuel, Sigal. “Joy Buolamwini saw first-hand the harm of AI bias. Now she’s challenging tech to do better.” Vox, 20 Oct. 2022, https://www.vox.com/future-perfect/23365558/future-perfect-50-ai-joy-buolamwini-founder-algorithmic-justice-league.

Simonite, Tom. “It’s Justice League vs. Algorithmic Justice League in Court.” Wired, 9 Aug. 2019, https://www.wired.com/story/its-justice-league-vs-algorithmic-justice-league/.

Wang, Selina, et al. “The Bloomberg 50: The People Who Defined Global Business in 2018.” Bloomberg Businessweek, 10 Dec. 2018, https://www.bloomberg.com/features/2018-bloomberg-50/?embedded-checkout=true#the-ai-investigators.

“Why MIT researcher is calling for ‘algorithmic justice’ against AI biases.” CBC Radio, 2 May 2025, https://www.cbc.ca/radio/ideas/unmasking-ai-bias-algorithmic-justice-1.7531391.

Zielonka, Barbara Anna. “What is The Algorithmic Justice League (AJL)?” AI Advisory Boards, 30 Apr. 2024, https://aiadvisoryboards.wordpress.com/2024/04/30/what-is-the-algorithmic-justice-league-ajl/.

More Information

IMPORTANT: Profile pages for all collectives are in permanent development and have been built using information in the public domain. They will be updated progressively, in dialogue with the organizations, by the end of 2024. New features and sections, such as featured videos and additional featured projects, will be added in 2025. Please contact us if you discover errors. For more information on mapping criteria, and to submit your organization’s information for potential inclusion in the database, visit this page.
