The central argument of the conversation, as framed by Dr. Tom Stapleford, is that we must approach the ethics of artificial intelligence not as a set of rules to follow, but through the lens of virtue ethics. This classical framework, drawing on the Greek root ethos, defines ethics as the study of character—the habitual ways of being, thinking, and acting that define us. Technology, Dr. Stapleford argues, is not a neutral tool; our engagement with it actively shapes our dispositions, cognitive habits, and relationships over time. The most important ethical question, therefore, is not “What should AI be allowed to do?” but rather, “How is this technology forming us as people?”
Driving the urgency of this question are unique economic forces. Unlike previous AI cycles, the current moment is defined by a “radically different” scale of financial investment. This creates an intense corporate incentive to “maximize attention” and ensure constant user engagement to deliver a return on these massive capital expenditures. This corporate goal may not align with the best interests of users, raising the stakes for AI’s formative impact on human character.
This formative impact is a double-edged sword for human virtues—our qualities of excellence. Dr. Stapleford distinguishes between intellectual virtues (like logical reasoning), character virtues (like courage), and physical virtues (like athletic skill). Interacting with AI can build new skills, such as prompt engineering. However, by outsourcing tasks we once performed ourselves, we risk degrading other fundamental abilities. Just as handwriting has atrophied in the age of the keyboard, constant reliance on AI can weaken our capacity for critical thinking, memory, and other cognitive skills that are developed and sustained only through active practice.
Perhaps the most profound challenge comes from the “language paradox” of Large Language Models (LLMs). Language, as Dr. Stapleford explains, is the mode of interaction with other humans that we are “evolutionarily wired for.” Using that natural language to communicate with machines is both the power and the “great hazard” of generative AI, and it creates a difficult double bind in everyday interactions. For example, consciously avoiding polite terms like “please” or “you” can help remind us that we are commanding a machine, not conversing with a person. But this risks developing a terse, demanding style that could bleed into real-world relationships. On the other hand, being polite to a machine reinforces good human habits but also deepens the temptation to anthropomorphize it, warping our expectations for the normal friction of human relationships.
Ultimately, the conversation proposes a human-centric path forward, grounding abstract principles in concrete design practices. Dr. Stapleford illustrates this with the real-world example of South Bend exploring the use of an LLM for its 311 call center. A purely efficiency-driven approach would automate the service. But a virtue-based approach begins by asking citizens what they truly value. Is it just getting an answer quickly, or is it the feeling of being heard by a fellow human who works for the city? If that human connection is a key part of the “good” being sought, then the technology should be designed to facilitate it—perhaps by routing calls to the right person—rather than replacing it. This shows that ethics is a context-specific endeavor that requires human debate and judgment to define human excellence, a task that should never be outsourced to a machine.