Technology Ethics Conference 2020: Panel 1

Subscribe to the ThinkND podcast on Apple, Spotify, or Google.

Featured Speakers: 

  • Solon Barocas, Principal Researcher in the New York City lab of Microsoft Research and an Assistant Professor in the Department of Information Science, Cornell University
  • Shaun Barry, Global Leader in the Fraud & Security Intelligence practice, SAS
  • Kevin Bowyer, Shubmehl-Prein Professor of Computer Science and Engineering, University of Notre Dame
  • Genevieve Fried, Researcher, Office of Senator Chris Coons (D-DE) 
  • Sara R. Jordan, Policy Counsel, Artificial Intelligence and Ethics at the Future of Privacy Forum
  • Ronald Metoyer, Associate Professor of Computer Science and Engineering, University of Notre Dame

The first virtual panel of the Technology Ethics Conference featured a discussion of algorithmic bias among five experts in the data science field. The discussion was moderated by Ronald Metoyer, Associate Professor of Computer Science and Engineering at the University of Notre Dame. The five panelists were Solon Barocas, Principal Researcher in the New York City lab of Microsoft Research and an Assistant Professor in the Department of Information Science at Cornell University; Shaun Barry ’91, Global Leader in the Fraud & Security Intelligence practice at SAS; Kevin Bowyer, the Shubmehl-Prein Professor of Computer Science and Engineering at the University of Notre Dame; Sara R. Jordan, Policy Counsel, Artificial Intelligence and Ethics at the Future of Privacy Forum; and Genevieve Fried, a fellow serving with Senator Chris Coons (D-DE) and supporting issues of algorithmic accountability and justice. The conference keynote speaker, Cathy O’Neil, also offered her insights during the event. The panelists’ discussion centered on three key questions: How does their work fit into algorithmic bias? What does algorithmic bias mean? And where, or in what capacity, do algorithms create problems?

Metoyer asked each panelist to introduce themselves by describing how their work fits into data science and algorithmic bias. Barry kicked off the question by talking about his work at SAS, a data science software company. Barocas and Bowyer both come from academic backgrounds in data science research and teaching. Jordan, on the other hand, works in artificial intelligence and policy, while Fried is involved in policy and justice around algorithmic bias. Each panelist brought a unique perspective on data science and algorithmic bias, but they shared some commonalities in their thinking on what algorithmic bias means, what the risks associated with using algorithms are, and how those risks can be addressed.

Metoyer then dove deeper into what each panelist thinks algorithmic bias means. A common thread among the panelists’ answers was that algorithmic bias refers to unequal accuracy across groups. The conversation then explored why this happens. Barocas suggested that algorithms can be biased because they are not answering the question the data scientist intended to answer. Barry offered that there can be bias in the question to begin with, even before the algorithm is built. Another possibility is that an algorithm is trained on missing or incomplete data. Jordan added that even historical training data carries bias. If scientists can identify biased data before an algorithm is built and deployed, algorithms will be fairer and more accurate. One way to accomplish this might be having multiple people review an algorithm before it is put into use.

The last question the panel addressed was how algorithms manifest or create problems, especially biased ones. Barry argued that algorithms are most problematic when there is no human oversight or judgment: rather than treating algorithms as deterministic, humans should always make the final decision based on the data. Several of the panelists touched on the importance of understanding and catching the risks associated with using algorithms. Because algorithms are largely unregulated, an industry-wide conversation about deploying ethical algorithms is necessary to combat these problems.

Visit the event page for more.


  • Algorithms may range from simple to sophisticated systems (5:20)
  • Algorithmic bias means that algorithms have unequal accuracy (27:30)
  • A bias in training data or in the question one wishes to answer will often lead to biased results (31:37)
  • Algorithms are problematic when there is no human oversight on the results of the algorithm (44:20)

  • “Algorithms should not be deterministic, but should be a part of a decision making process that a human being makes” (Shaun Barry; 7:16)
  • “People are coming around to understanding that algorithmic systems are not objective decision makers” (Genevieve Fried; 18:58)
  • “I don’t think algorithms are inherently evil or that they don’t have a place in society” (Genevieve Fried; 19:14)