Technology Ethics Conference 2020: Keynote
Featured Speakers:
- Mark P. McKenna, John P. Murphy Foundation Professor of Law at the Notre Dame Law School and the Director of the Notre Dame Technology Ethics Center, University of Notre Dame
- Cathy O'Neil, Author of the New York Times bestselling book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
The first virtual event in the Technology Ethics Conference, and the last in the Numbers Can Lie series, was a keynote presentation by Cathy O'Neil, author of the New York Times bestselling "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy." The event was moderated by Mark McKenna, the John P. Murphy Foundation Professor of Law at the Notre Dame Law School and the Director of the Notre Dame Technology Ethics Center. The keynote highlighted O'Neil's research on data science and algorithmic bias. Her address explained what algorithms are and how to understand them, examined how algorithms are used in the real world, and closed with what needs to be done to address the ethical problems of algorithmic bias and what the future might hold for this technology.
In the first part of her keynote address, O'Neil clarified what she means by algorithmic bias. Her definition is that algorithms are opinions embedded in code rather than objective scientific facts. Because they are opinionated, they can misdirect and become disruptive. O'Neil asserted that people use predictive algorithms every day. For example, when people choose what to wear in the morning, they bring a bias and an agenda to what they choose and why: they know from previous experience what looks good on them for a given occasion. That agenda differs from person to person, and each person has a different definition of success after getting dressed.
O'Neil believes that algorithmic biases are typically widespread, mysterious, and destructive: they affect many people, people often do not know how the algorithms were created or what they measure, and they can be unfair to certain groups. A major problem with algorithmic bias is that the data scientists building an algorithm often do not realize the bias is there when it is created. One real-world example is job hiring, where some companies use algorithms to filter applications. Amazon, for example, developed a resume-scoring algorithm that was later discovered to be sexist. If a resume used a word like "executed," it received a higher score; if it contained the word "women's," the score was downgraded. This was plainly a destructive and unfair algorithm. O'Neil offered further examples of algorithmic bias, including systems used to score school teachers.
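The keyword-scoring mechanism described above can be sketched in a few lines of code. This is a hypothetical illustration only: the keywords, weights, and scoring rule below are assumptions chosen to show how opinions get embedded in code, not a reconstruction of Amazon's actual (never published) model.

```python
# Hypothetical sketch of how keyword-weighted resume scoring can encode bias.
# All keywords and weights here are illustrative assumptions.

def score_resume(text: str) -> int:
    """Score a resume by summing weights for each keyword occurrence."""
    weights = {
        "executed": +2,   # assertive verbs treated as positive signals
        "captured": +2,
        "women's": -2,    # e.g. "women's chess club" penalized by the model
    }
    text = text.lower()
    return sum(w * text.count(kw) for kw, w in weights.items())

resume_a = "Executed product launch; executed migration plan."
resume_b = "Captain of the women's chess club; executed outreach program."
print(score_resume(resume_a))  # -> 4
print(score_resume(resume_b))  # -> 0
```

Nothing in the code mentions gender explicitly, yet the learned weight on "women's" penalizes one applicant relative to the other; this is the sense in which the bias is mysterious to its own builders.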
The keynote finished by discussing the ethical dilemmas surrounding algorithmic bias and how to prevent them. O'Neil first argued that if individuals are being scored by an algorithm, they should have access to that score and an explanation of how it was calculated. She also called for more regulation before an algorithm is deployed, or for human oversight of algorithmic systems to catch errors and biases. O'Neil asserted that an ethical framework should be standard practice when creating new algorithms: people need to ask hard questions like "for whom does this work" and "for whom does this fail." A conversation among multiple stakeholders about these issues is a critical part of improving algorithms before they are deployed.
- Algorithms are opinions in code rather than factual information (5:20)
- Algorithms are widespread, mysterious, and destructive (10:41)
- Algorithms are often scoring systems used in hiring processes and employee ratings (12:20)
- There aren't many regulations on algorithms before they are deployed (34:00)
- A cross-disciplinary conversation between different stakeholders will help lead us to fairer algorithms and less bias (45:55)
- "Algorithms are opinions embedded in code" (Cathy O'Neil; 5:20)
- "Most of the data scientists who build these algorithms don't even realize that there is bias data in there" (Cathy O'Neil; 20:00)
- "When we have a score of ourselves that has a high impact on us, like our jobs or our mortgages, we should have access to that score" (Cathy O'Neil; 33:05)
- "Technology does not exist in a vacuum. It's socially embedded and used for certain purposes. It's not always the technologists who are building it who are best situated to evaluate how that is going to be used or what its effects will be. A cross-disciplinary conversation is critical" (Mark McKenna; 50:55)