Technology Ethics Conference 2020: Keynote

Featured Speakers: 

  • Mark P. McKenna, John P. Murphy Foundation Professor of Law at the Notre Dame Law School and the Director of the Notre Dame Technology Ethics Center, University of Notre Dame
  • Cathy O’Neil, Author of the New York Times bestselling book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy

The first virtual event in the Technology Ethics Conference, and the last in the Numbers Can Lie series, was a keynote presentation by Cathy O’Neil, author of the New York Times bestseller “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.” The event was moderated by Mark McKenna, the John P. Murphy Foundation Professor of Law at the Notre Dame Law School and the Director of the Notre Dame Technology Ethics Center. The keynote highlighted O’Neil’s research on data science and algorithmic bias. Her address explained what algorithms are and how to understand them, showed how algorithms are used in the real world, and closed with what needs to be done to address algorithmic bias and what the future might hold for the technology.

In the first part of her keynote address, O’Neil clarified what she means when she talks about algorithmic bias. Algorithms, in her definition, are opinions embedded in code rather than statements of scientific fact. Because they encode opinions, they can misdirect and become disruptive. O’Neil asserted that people run predictive algorithms every day. For example, when choosing what to wear in the morning, a person brings a bias and an agenda to the choice, knowing from previous experience what looks good on them for a given occasion. That agenda differs from person to person, and so does each person’s definition of success once they are dressed.
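O’Neil’s point that the opinion lives in the definition of success can be made concrete in a few lines of code. The following is a minimal sketch, not from the talk (the outfits, attributes, and scoring functions are invented for illustration): the data and the selection procedure are identical, yet the “best” answer changes with whichever opinion about success is embedded.

```python
# Invented wardrobe data for illustration only.
outfits = [
    {"name": "suit", "comfort": 2, "formality": 9},
    {"name": "jeans and t-shirt", "comfort": 9, "formality": 2},
    {"name": "business casual", "comfort": 6, "formality": 6},
]

def pick(outfits, success):
    """Return the outfit that maximizes the given definition of success."""
    return max(outfits, key=success)

# Two different opinions about what "success" means after getting dressed:
comfort_first = lambda outfit: outfit["comfort"]
impressions_first = lambda outfit: outfit["formality"]

print(pick(outfits, comfort_first)["name"])      # jeans and t-shirt
print(pick(outfits, impressions_first)["name"])  # suit
```

Everything in the sketch is mechanical except the success function, which is pure opinion.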

O’Neil believes that algorithmic biases are typically widespread, mysterious, and destructive: the algorithms are important and affect many people, people often do not know how they were created or what they measure, and they can be unfair to certain groups. The major problem with algorithmic bias is that the data scientists who build an algorithm often do not know the bias is there. One real-world example is job hiring, where some companies use an algorithm to filter applications. Amazon, for example, developed a resume-scoring algorithm that was later discovered to be sexist: a resume using the word “executed” received a higher score, while a resume containing the word “women’s” was downgraded. This was plainly a destructive and unfair algorithm, as the sketch below illustrates. O’Neil offered further examples of algorithmic bias in hiring and in systems used to score schoolteachers.
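To see how this failure mode works, here is a hypothetical reconstruction of the kind of keyword scoring O’Neil described. Amazon’s actual model was a far more complex system trained on historical hiring data; the hand-set weights below merely stand in for weights such a model might learn from biased data.

```python
# Hypothetical stand-ins for weights a model might learn from biased
# historical hiring data; these values are invented for illustration.
WORD_WEIGHTS = {
    "executed": +2.0,   # language historically common on male applicants' resumes
    "women's": -2.0,    # e.g., "women's chess club captain" was penalized
}

def score_resume(text: str) -> float:
    """Sum the learned weight of every known word in the resume."""
    return sum(WORD_WEIGHTS.get(word, 0.0) for word in text.lower().split())

resume_a = "Executed migration of billing systems"
resume_b = "Captain of the women's chess club who executed an outreach program"

print(score_resume(resume_a))  # 2.0
print(score_resume(resume_b))  # 0.0 -- the same skill word, silently penalized
```

The scoring function looks neutral, which is exactly why, as O’Eil noted, its builders may not realize the bias is there.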

The keynote finished by discussing the ethical dilemmas surrounding algorithmic bias and how to prevent it. O’Neil first argued that anyone being scored by an algorithm should have access to that score and an explanation of why they scored as they did. She also called for more regulation before an algorithm is deployed, and for human oversight of algorithmic systems to catch errors and biases. O’Neil asserted that an ethical framework should be standard practice when creating new algorithms: people need to ask the hard questions, such as “for whom does this work?” and “for whom does this fail?” A conversation among multiple stakeholders is a critical part of reforming algorithms before they are deployed.
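One way the pre-deployment audit O’Neil calls for might look in practice is a simple check of outcome rates across groups. This sketch is illustrative only: the decision data is invented, and the 80% threshold borrows the EEOC “four-fifths” rule of thumb rather than anything prescribed in the talk.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, hired) pairs -> hire rate per group."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

def audit(decisions, threshold=0.8):
    """Flag any group whose hire rate falls below threshold * the best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: {"rate": r, "fails_check": r / best < threshold}
            for g, r in rates.items()}

# Invented decisions from a hypothetical screening algorithm:
decisions = ([("men", True)] * 60 + [("men", False)] * 40
             + [("women", True)] * 35 + [("women", False)] * 65)

print(audit(decisions))
# women's rate (0.35) is ~58% of men's (0.60), below 80% -> flagged
```

A check like this answers “for whom does this fail?” with numbers rather than intuition, which is why O’Neil argues it belongs before deployment, not after.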

Visit the event page for more.


Takeaways:

  • Algorithms are opinions in code rather than factual information (5:20)
  • Algorithms are widespread, mysterious, and destructive (10:41)
  • Algorithms are often scoring systems used in hiring processes and employee ratings (12:20)
  • There aren’t many regulations on algorithms before they are deployed (34:00)
  • A cross-disciplinary conversation among different stakeholders will help lead us to fairer algorithms and less bias (45:55)

Quotes:

  • “Algorithms are opinions embedded in code” (Cathy O’Neil; 5:20)
  • “Most of the data scientists who build these algorithms don’t even realize that there is biased data in there” (Cathy O’Neil; 20:00)
  • “When we have a score of ourselves that has a high impact on us, like our jobs or our mortgages, we should have access to that score” (Cathy O’Neil; 33:05)
  • “Technology does not exist in a vacuum. It’s socially embedded and used for certain purposes. It’s not always the technologists who are building it who are best situated to evaluate how that is going to be used or what its effects will be. A cross-disciplinary conversation is critical” (Mark McKenna; 50:55)