Technology Ethics Conference 2020: Keynote

Featured Speakers: 

  • Mark P. McKenna, John P. Murphy Foundation Professor of Law at the Notre Dame Law School and the Director of the Notre Dame Technology Ethics Center, University of Notre Dame
  • Cathy O’Neil, Author of the New York Times bestselling book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy

The first virtual event in the Technology Ethics Conference, and the last in the Numbers Can Lie series, was the keynote presentation given by Cathy O’Neil, author of the New York Times bestselling “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.” The event was moderated by Mark McKenna, the John P. Murphy Foundation Professor of Law at the Notre Dame Law School and the Director of the Notre Dame Technology Ethics Center. The keynote highlighted O’Neil’s research and work in data science and algorithmic bias. Her address explained what algorithms are and how to understand them, examined how they are used in the real world, and closed with what needs to be done to address the ethical issues of algorithmic bias and what the future might hold for this technology.

In the first part of her keynote address, O’Neil clarified what she means when she talks about algorithmic bias. The definition she provided is that algorithms are opinions embedded in code rather than scientific fact. Because they encode opinions, they can misdirect and become destructive. O’Neil asserted that people use predictive algorithms every day: when choosing what to wear in the morning, a person brings a bias and an agenda to the choice, knowing from previous experience what looks good on them for a given occasion. That agenda differs from person to person, and each person has a different definition of success once they are dressed.
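
To make that idea concrete, here is a minimal sketch, not from the talk, of how a “definition of success” becomes an opinion embedded in code. The outfits, features, and weights are all invented for illustration; the point is that the weights are the opinion, and changing them changes the answer.

```python
# A minimal sketch of "opinions embedded in code": the feature weights
# below ARE the opinion. All names and numbers are invented.

outfits = [
    {"name": "suit",     "comfort": 2, "formality": 9, "warmth": 4},
    {"name": "jeans",    "comfort": 8, "formality": 3, "warmth": 6},
    {"name": "raincoat", "comfort": 5, "formality": 2, "warmth": 8},
]

# Two definitions of success, encoded as weights. Neither is a
# scientific fact; each reflects its author's agenda.
interview_agenda = {"comfort": 0.1, "formality": 0.8, "warmth": 0.1}
commute_agenda   = {"comfort": 0.4, "formality": 0.1, "warmth": 0.5}

def best_outfit(weights):
    """Return the outfit name that maximizes the weighted score."""
    def score(outfit):
        return sum(weights[k] * outfit[k] for k in weights)
    return max(outfits, key=score)["name"]

print(best_outfit(interview_agenda))  # suit
print(best_outfit(commute_agenda))    # jeans
```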

O’Neil believes that algorithmic biases are typically widespread, mysterious, and destructive: they affect many people, those people often do not know how the algorithms were created or what they measure, and the outcomes can be unfair to certain groups. A major problem with algorithmic bias is that the data scientists who build an algorithm often do not know the bias is there at the time of creation. One real-world example is job hiring, where some companies use an algorithm to filter applications. Amazon, for example, developed a resume-scoring algorithm that was later discovered to be sexist: a resume using a word like “executed” received a higher score, while a resume containing the word “women’s” was downgraded. This was an obviously destructive and unfair algorithm. O’Neil offered further examples of algorithmic bias as well, such as systems used to score schoolteachers.
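
Amazon’s actual model was never made public, so the following is only an illustrative toy: a keyword-weighted scorer with hypothetical weights chosen to mirror the behavior reported above. It shows how mechanically such a penalty gets applied to every application once it is baked into the weights.

```python
# Illustrative toy only -- Amazon's real system was never published.
# The keywords and weights are hypothetical, chosen to mirror the
# reported behavior: "executed" rewarded, "women's" penalized.

keyword_weights = {"executed": 1.5, "managed": 1.0, "women's": -2.0}

def score_resume(text: str) -> float:
    """Sum the weights of any scored keywords appearing in the text."""
    words = set(text.lower().split())
    return sum(w for kw, w in keyword_weights.items() if kw in words)

print(score_resume("executed product launches and managed a team"))
# 2.5 -- rewarded
print(score_resume("captain of the women's chess club and treasurer"))
# -2.0 -- downgraded for a word that says nothing about ability
```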

The keynote finished by discussing the ethical dilemmas surrounding algorithmic bias and how to prevent them. O’Neil first argued that if individuals are scored by an algorithm, they should have access to that score and an explanation of why they received it. She also suggested regulating algorithms before they are deployed, or requiring human oversight of these systems to catch errors and biases. O’Neil asserted that an ethical framework should be standard practice when creating new algorithms: people need to ask hard questions like “for whom does this work?” and “for whom does this fail?” A conversation among multiple stakeholders about these issues is a critical part of getting algorithms right before they are deployed.
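
One way to operationalize “for whom does this fail” before deployment is a simple disaggregated error audit. The sketch below uses fabricated decision records (nothing here is from the talk) to compare an algorithm’s error rate across two groups:

```python
# A minimal audit sketch with fabricated records: each tuple is
# (group, what the algorithm decided, what the right call was).
from collections import defaultdict

decisions = [
    ("group_a", "hire", "hire"), ("group_a", "reject", "reject"),
    ("group_a", "hire", "hire"), ("group_a", "reject", "hire"),
    ("group_b", "reject", "hire"), ("group_b", "reject", "hire"),
    ("group_b", "hire", "hire"), ("group_b", "reject", "reject"),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [mistakes, total]
for group, predicted, actual in decisions:
    tallies[group][0] += predicted != actual
    tallies[group][1] += 1

for group, (wrong, total) in sorted(tallies.items()):
    print(f"{group}: error rate {wrong / total:.0%}")
# group_a: error rate 25%
# group_b: error rate 50% -- the system fails group_b twice as often
```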

Visit the event page for more.

Key Takeaways:

  • Algorithms are opinions in code rather than factual information (5:20)
  • Algorithms are widespread, mysterious, and destructive (10:41)
  • Algorithms are often scoring systems used in hiring processes and employee ratings (12:20)
  • There aren’t many regulations on algorithms before they are deployed (34:00)
  • A cross-disciplinary conversation between different stakeholders will help lead us to fairer algorithms and less bias (45:55)

Quotes:

  • “Algorithms are opinions embedded in code” (Cathy O’Neil; 5:20)
  • “Most of the data scientists who build these algorithms don’t even realize that there is biased data in there” (Cathy O’Neil; 20:00)
  • “When we have a score of ourselves that has a high impact on us, like our jobs or our mortgages, we should have access to that score” (Cathy O’Neil; 33:05)
  • “Technology does not exist in a vacuum. It’s socially embedded and used for certain purposes. It’s not always the technologists who are building it who are best situated to evaluate how that is going to be used or what its effects will be. A cross-disciplinary conversation is critical” (Mark McKenna; 50:55)