TEC Talks: Artificial Intelligence and Power


Featured Speakers: 

  • Kirsten Martin, Director, Notre Dame Technology Ethics Center; William P. and Hazel B. White Center Professor of Technology Ethics; Professor of IT, Analytics, and Operations
  • Elizabeth M. Renieris, Professor of the Practice; Founding Director, Notre Dame-IBM Technology Ethics Lab
  • Luke Stark, Assistant Professor, Faculty of Information and Media Studies, University of Western Ontario

On September 13, 2021, the Notre Dame-IBM Technology Ethics Lab and Notre Dame Technology Ethics Center (ND-TEC) hosted the first session of the fall TEC Talks series. This semester, the series focuses on the intersection of technology and power. The guest for the first session was Luke Stark, an Assistant Professor in the Faculty of Information and Media Studies at the University of Western Ontario, in conversation with Kirsten Martin, Director of the Notre Dame Technology Ethics Center.

In a 2019 article, Stark compared facial recognition technology to plutonium, a toxic radioactive element, and called for an outright ban, noting that minorities have “suffered the brunt of facial recognition technologies.” Exploring the connections between digital technologies, classification, and other social issues, he argued that facial recognition technologies are intrinsically designed to classify and group human likenesses, making them by extension tools of “racial categorization and oppression” that call back to antiquated theories about racial difference. Whatever the intentions of their developers, Stark noted, these technologies will always produce “racialized hierarchies and all the problems that come along with it.” He added that an understanding of the history of technology and science can help frame the contemporary questions and problems raised by these emerging tools.
The conversation then moved to emotion recognition technologies, another attempt by AI and facial recognition researchers to use emerging technologies to resolve some of the complexities of human emotion. Stark explained that emotions involve many different biological and physiological processes, and that scientists have sought for over a century to quantify or predict them. However, he noted a growing gap between what contemporary researchers actually aim to do and how their work is presented to the public, particularly regarding what emotion recognition technology can accomplish. Because normative and ethical judgments are central to how we think about emotion, reliance on these technologies risks extending their use in troubling ways, for example, inferring guilt from a crime suspect whom emotion recognition software identifies as wearing a guilty expression.
Martin briefly discussed the responsibility of businesses to acknowledge the limitations of such technologies, and ways to hold them accountable for embellished claims of effectiveness. Public trust that corporations will abide by their stated limits on facial recognition applications has so far been enforced mainly through market mechanisms, particularly internal pressure from employees. Because many automated systems were originally built on the assumption of a weak labor force, large shifts in labor-market power have forced AI firms to alter their business models. Public pressure has also recently proven effective at holding these corporations to their stated commitments on facial recognition. Even so, regulators appear to lack a working knowledge of how these developing facial recognition technologies function and of what academics and researchers already know about them.
Martin noted that many of these technologies, while not sold as standalone products, have increasingly been incorporated into other offerings (for example, hiring software that purports to infer an interviewee's personality from their facial expressions). As a result, the subjects of these AI systems, not the companies deploying them, bear all of the costs and burdens of participating in the system, contrary to typical economic and business rationale.
Stark and Martin noted that this issue is compounded by how research funding has traditionally been allocated in computer science. Researchers' dependency on corporate funders has constrained their ability to fully study the consequences and limitations of these new technologies, in favor of examining their commercial applications.

Visit the event page for more.


Key Takeaways:

  • Facial recognition necessarily seeks to classify the human body and the human face, and will always produce racialized hierarchies that disadvantage indigenous people and people of color. (Luke Stark, 6:27)
  • Public pressure and negative reputational impact may influence companies to stop investing in technologies such as facial recognition. (Kirsten Martin, 9:15)
  • Robust testing of emerging technologies that could cause harm or perpetuate discrimination, such as facial recognition or emotion recognition technologies, could be a way to hold companies accountable for the impact of these technologies. (Kirsten Martin, 24:12)
  • The impact of emerging technologies on the community will always be more important than the intention behind developing the technology. A positive intention won’t outweigh a negative impact. (Luke Stark, 44:07)

Quotes:

  • “The way we understand what emotions are, what evidence we use to claim we’ve detected an emotion really matters because as anybody on this call will realize, emotions are kind of complicated and multifaceted. … Sometimes they involve expressions of the face, they involve feelings, they involve internal cognition, they involve reflecting on how we’re feeling, our heart rate, all sorts of things. And so I think that, at a broad level, there’s been a strong desire on the part of not just computer scientists but also physiologists, you know, medical people before emotion, before computing was even developed, to try to find that one proxy that explains emotion, that can stand in for emotion in a scientific context.” (Luke Stark, 15:30)
  • “There just needs to be a better understanding of some sort of baseline below which, if you don’t do that type of testing of your programs then you can’t sell it. … And we do this all the time with other products. We don’t normally build bridges and say ‘I’m not sure if this works, let’s just figure this out.’ We don’t build cars and say, ‘I think it will be OK for a while, you know, test it on the highway and see what happens.’” (Kirsten Martin, 24:12)
  • “The ‘moral crumple zone’ is pushed onto workers; it’s always pushed onto the people, not onto the firms who are using these systems, and it’s on the developers. And this, I think, ties into a broader question about labor, the broader conditions of the economy have to be such that workers feel like they have little choice, and so what all of these automated systems actually rely on is a weak labor force.” (Luke Stark, 31:42)
  • “I think we just need to normalize the last part of every paper that talks about limitations and critiques of my own work, like in other areas, that’s normal where you talk about your limitations and you kind of own them. There just needs to be more of a kind of self-critiquing, like ‘I just put forward this model, how could it be damaging and what are my limitations.’” (Kirsten Martin, 42:36)
