TEC Talks: Machine Learning and Power

Subscribe to the ThinkND podcast on Apple, Spotify, or Google.

Featured Speakers: 

  • Abeba Birhane, PhD Candidate in Cognitive Science, University College Dublin
  • Noopur Raval, Postdoctoral Researcher, AI Now Institute

On October 11, 2021, the Notre Dame-IBM Technology Ethics Lab and the Notre Dame Technology Ethics Center (ND-TEC) hosted the third session of the fall TEC Talks series, which this semester focuses on the intersection of technology and power. The guests were Abeba Birhane, a PhD candidate in cognitive science at University College Dublin, and Noopur Raval, a postdoctoral researcher at the AI Now Institute at New York University. The discussion focused on machine learning and its relationship to efficiency, human rights, and international perceptions.

The session began with a broad question directed to Birhane: what is the difference between artificial intelligence (AI) and machine learning? Birhane explained that AI is almost impossible to define, but that machine learning can be thought of as a subfield of AI. She also noted a common misconception about both: because they operate through sophisticated mathematical models, many people assume they are objective, neutral decision-making tools, which is not the case.

Raval noted that many datasets used to train AI models come from crowd-work platforms such as Amazon’s Mechanical Turk or from offshore third-party vendors – the so-called “ghost workers.” She highlighted the “back-and-forth” between, and the contrast among, the ghost workers who clean and label datasets, the computer scientists who develop machine learning models and run research labs, and the students and junior staff who collaborate with them. Birhane added that another misconception about AI is that it is fully autonomous; Raval’s point about ghost workers shows instead that AI is “just people, through and through.” She added that these ghost workers are rarely treated as stakeholders in the AI development process, despite their critical importance.

The conversation turned to the term, or trope, “ghost work” itself. Raval noted that it carries a connotation of “low tech,” undignified, or exploitative work, tied closely to Global North notions of who performs it – for example, third-party contractors in India. She also noted that the “pure” technologies, such as facial recognition software, are often cloaked in different terms depending on whom they are being presented and sold to, especially when they are translated into a local language such as Tamil. For example, the deployment of facial recognition technology in schools is described with terms that highlight attendance and positive performance in school environments, while its use by law enforcement agencies is described with terms that emphasize surveillance.

Because the terms used for technologies are closely tied to the values that underpin them, the conversation turned to a study by Birhane and her team that evaluated over 100 research papers submitted to top AI conferences and analyzed the values the papers highlighted. Birhane noted that the values highlighted most often were accuracy, efficiency, and performance, while values such as privacy, fairness, and mitigating bias were rarely mentioned.

Birhane also noted that prioritizing scale and growth over values such as fairness and privacy can be damaging. Another study by her team examined a dataset of over 400 million images scraped from the internet and used to train machine learning models; it found that the dataset included heavily stereotyped, offensive, racist, and misogynistic images for terms such as “beautiful,” but did not include offensive images for terms such as “CEO” or “boss.” Birhane and Raval agreed that there is a perception that everything will “balance out” or trend toward being more democratic as scale increases, but that is not the case: the internet is built on economies of leisure and time, in environments where internet access is steady, which excludes much of the world.

The conversation closed with a brief discussion of regulation, and how developing governance models may impact machine learning in the future.

Visit the event page for more.


Key Takeaways:

  • Artificial intelligence and machine learning are not neutral decision-making tools – they reflect the values and cultures of the people who develop them.
  • Artificial intelligence and machine learning are far from fully autonomous – they rely on workers to find, clean, and label the data used to train them.
  • Prioritizing scale and growth over values such as fairness and privacy can lead to negative outcomes, like machine learning datasets that include offensive and hateful imagery and labels.


Quotes:

“People often associate AI with stats or mathematics, and people often think whatever your model does or whatever your data set is, it just represents reality in a neutral and objective way.” Abeba Birhane, 5:09

“Democratizing AI has, in itself, a huge number of challenges, because the kind of computing resources needed to compete in the commercial AI space are available to very few corporations or research labs, and those corporations and research labs are also engaged in a sort of intellectual property race and war.” Noopur Raval, 10:56

“The Internet is really not just reality, it’s just a portion of reality, as seen by a really specific perspective, from a certain political, cultural and personal background and view.” Abeba Birhane, 36:58

“It definitely also ties back to an enduring problem with the Internet in terms of how traditionally the knowledge layers on which the cultural memory of the Internet that we seem to be drawing on and building on is already so flawed and has relied on or has sort of been representative of where Internet access has been more prevalent, who’s had also economies of leisure and economies of time. … We talk about how sexist and white and privileged the internet is, in itself, but I feel like it’s not just because of a certain kind of participation or bodies that participate, but it’s a historical kind of economy of time and leisure.” Noopur Raval, 39:30

