AI, Anti-Discrimination Law, and Your (Artificial) Immutability

How could a personal characteristic like eye movement affect, say, whether you get a loan?


Host Kirsten Martin is joined by Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute (OII) at the University of Oxford. She founded and leads OII’s Governance of Emerging Technologies (GET) Research Programme that investigates legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies.


Sandra came on the show to talk about her paper “The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law,” which is forthcoming in the Tulane Law Review.


Most people are familiar with the idea of anti-discrimination law and its focus on protected-class attributes—e.g., race, national origin, age, etc.—that represent something immutable about who we are as individuals and that, as Sandra explains, have been criteria humans have historically used to hold each other back.


She says that with algorithms, we're now being placed in other groups that are also largely beyond our control but that can nevertheless affect our access to goods and services and things like whether we get hired for a job. These groups fall into two main categories: people who share non-protected attributes (say, what type of internet browser they use, how their retinas move, or whether they own a dog) and people who share characteristics that are significant to computers (e.g., clicking behavior) but for which we as humans have no social concept.


This leads to what Sandra calls "artificial immutability" in the attributes used to describe us: the idea that there are things about ourselves we cannot change, not because we were born with them but because we are unaware an algorithm has assigned them to us. She offers a definition of what constitutes an immutable trait and notes that there can be legitimate uses of such traits in decision-making, but that in those cases organizations need to be able to explain why the traits are relevant.

Listen to the Episode

Presented by Notre Dame Technology Ethics Center

Additional Resources

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics whose work our guest is particularly excited about. Sandra highlighted University of Cambridge psychologist Amy Orben and her research on online harms, particularly in the context of young people’s use of social media.
