Finding Virtue in the Generative Revolution

Generative AI offers incredible power, but how does it shape our human character? Tom Stapleford, associate professor in Notre Dame’s Program of Liberal Studies, applies the timeless wisdom of virtue ethics to the generative revolution, exploring the moral consequences of a technology that is not just a tool, but a powerful, habit-forming force.


The New AI is sponsored on ThinkND by the Technology and Digital Studies Program in the College of Arts & Letters. This program collaborates with the Computer Science and Engineering Department and other departments around the University to offer the Bachelor of Arts in Computer Science, the Minor in Data Science, and the Idzik Computing & Digital Technologies Minor.

The central argument of the conversation, as framed by Dr. Tom Stapleford, is that we must approach the ethics of artificial intelligence not as a set of rules to follow, but through the lens of virtue ethics. This classical framework defines ethics (ethos) as the study of character—the habitual ways of being, thinking, and acting that define us. Technology, Dr. Stapleford argues, is not a neutral tool; our engagement with it actively shapes our dispositions, cognitive habits, and relationships over time. The most important ethical question, therefore, is not “What should AI be allowed to do?” but rather, “How is this technology forming us as people?”
Driving the urgency of this question are unique economic forces. Unlike previous AI cycles, the current moment is defined by a “radically different” scale of financial investment. This creates an intense corporate incentive to “maximize attention” and ensure constant user engagement to deliver a return on these massive capital expenditures. This corporate goal may not align with the best interests of users, raising the stakes for AI’s formative impact on human character.
This impact manifests as a double-edged sword for human virtues—our qualities of excellence. Dr. Stapleford distinguishes between intellectual virtues (like logical reasoning), character virtues (like courage), and physical virtues (like athletic skill). Interacting with AI can build new skills, such as prompt engineering. However, by outsourcing tasks we once performed ourselves, we risk the degradation of other fundamental abilities. Just as handwriting has atrophied in the age of the keyboard, constant reliance on AI can weaken our capacity for critical thinking, memory, and other cognitive skills that are developed and sustained only through active practice.
Perhaps the most profound challenge comes from the “language paradox” of Large Language Models (LLMs). Language, as Dr. Stapleford explains, is the mode of interaction among humans that we are “evolutionarily wired for.” Using our natural language to communicate with machines is both the power and the “great hazard” of generative AI. This creates a difficult double bind in everyday interactions. For example, consciously avoiding polite phrases like “please” and “thank you” can help remind us that we are commanding a machine, not conversing with a person. But this risks developing a terse, demanding style that could bleed into real-world relationships. On the other hand, being polite to a machine reinforces good human habits but also deepens the temptation to anthropomorphize it, warping our expectations for the normal friction of human relationships.
Ultimately, the conversation proposes a human-centric path forward, grounding abstract principles in concrete design practices. Dr. Stapleford illustrates this with the real-world example of South Bend exploring the use of an LLM for its 311 call center. A purely efficiency-driven approach would automate the service. But a virtue-based approach begins by asking citizens what they truly value. Is it just getting an answer quickly, or is it the feeling of being heard by a fellow human who works for the city? If that human connection is a key part of the “good” being sought, then the technology should be designed to facilitate it—perhaps by routing calls to the right person—rather than replacing it. This shows that ethics is a context-specific endeavor that requires human debate and judgment to define human excellence, a task that should never be outsourced to a machine.

• Unprecedented Financial Incentives Create New Risks: The massive financial investment in today’s AI is historically unique. This creates intense corporate pressure to maximize user engagement to ensure profitability, a goal that may not align with the long-term well-being and character development of the user.
• Virtue Ethics Is a Powerful Lens for AI: Rather than focusing on a simple list of dos and don’ts, this ethical framework examines how technology actively shapes our character. It forces us to ask what habits, dispositions, and cognitive skills are being strengthened or weakened through our daily interactions with AI.
• AI Interaction Is a Double-Edged Sword for Skills: While using AI can build new capabilities like prompt engineering, it can also lead to the degradation of fundamental human skills. Virtues, whether intellectual or physical, are maintained through practice, and outsourcing tasks to AI means we are no longer practicing them ourselves.
• Language-Based AI Poses a Unique Human Challenge: We are evolutionarily wired to associate language with personhood. Interacting with LLMs through language is a “great hazard” because it tempts us to treat machines like humans, which can distort our expectations for real-world relationships and blur critical distinctions.
• Defining “Good” Must Remain a Human Task: The core of ethics involves determining the standards for human excellence and virtue. This is a profound question for human debate, lived experience, and societal discussion. Outsourcing this fundamental task to an algorithm is, as Dr. Stapleford argues, a “huge mistake.”

  • “the sheer scale of investment in AI today… is not comparable to anything we’ve seen around AI in the past… the financial investment is just radically different this time…” — Dr. Tom Stapleford
  • “they have a major financial incentive to get people using AI as much as possible, in as many ways as possible… the incentives for those companies may or may not be aligned with what I would think of as the best interests of users at large.” — Dr. Tom Stapleford
  • “how does our use of technology begin to shape our dispositions, our patterns of thinking, our patterns of acting, the ways in which we relate to one another, the kind of habits that we build in ourselves.” — Dr. Tom Stapleford
  • “…what’s so striking about LLMs is that, exactly as you said, our mode of interaction with them is through language, and that’s something that we are evolutionarily wired for… now you’ve got this problem or puzzle or new challenge that is both the power of these generative AI systems and also their great hazard.” — Dr. Tom Stapleford
  • “…what that standard is, is a question for humans to be debating, discussing, deciding about. So to outsource that question to a machine… is a huge mistake. So even posing that question to an LLM shows that you sort of misunderstood the enterprise of ethics from the get-go…” — Dr. Tom Stapleford

