Trust, Talent, and the AI Colleague

In this episode of The New AI, Accenture Managing Director and Notre Dame alum Jen Hall ’98 joins student hosts for a conversation about how generative AI is reshaping the workplace. Hall discusses the shift from experimentation to scaled implementation, highlighting key themes like trust, transparency, and workforce readiness. Explore how AI is changing task structures, decision-making, and the expectations placed on new employees. With insights from both industry and academia, uncover how organizations and individuals are adapting to a rapidly evolving work landscape.

The New AI is sponsored on ThinkND by the Technology and Digital Studies Program in the College of Arts & Letters.  This program collaborates with the Computer Science and Engineering Department and other departments around the University to offer the Bachelor of Arts in Computer Science, the Minor in Data Science, and the Idzik Computing & Digital Technologies Minor.


In this timely episode of The New AI, host Graham Wolf led a compelling, multigenerational conversation about how generative AI is reshaping the workplace. Jen Hall ’98, Managing Director at Accenture, joined student voices Annie Z and Ella Turani to unpack how trust, transparency, and talent readiness are redefining what it means to thrive in an AI-augmented world.

Laying the Groundwork: From Experimentation to Execution

Hall began by reframing the moment—not as an AI revolution, but an evolution. After years of narrow AI operating quietly behind the scenes, generative AI has made its impact visible in daily workflows. But adoption still lags. Hall emphasized that scaling AI responsibly requires more than technical know-how—it demands ethical alignment, cultural buy-in, and deep organizational trust.

Trust Is the New Currency

Trust surfaced as the linchpin of responsible AI integration. Hall highlighted how clients now expect not just faster answers but verifiable, thoughtful ones. That means disclosing when AI is used, governing its output, and ensuring human oversight. Trust, she argued, isn’t a soft value—it’s a structural necessity.

Collaboration, Not Replacement

Ella Turani raised a key tension: Can AI undercut human creativity and judgment? Hall acknowledged the risk, but reframed AI as a jump-start, not a crutch. Used well, generative tools can accelerate progress—but only when paired with expertise, reflection, and an understanding of what the model can and cannot do.

The Shallow Apprenticeship Problem

Annie Z spotlighted a growing issue: early-career professionals often use AI to generate first drafts, leaving senior colleagues to fill in the intellectual depth. Hall warned that while AI boosts productivity, it can hollow out foundational skill-building. Without real-world practice and mentorship, tomorrow’s talent may lack the deep reasoning today’s work requires.

The AI Double Bind in Education

Students expressed ambivalence about AI in the classroom: fluency is rewarded at work, but penalized in school. Hall urged universities to shift toward outcome-based evaluation—training students not just to use AI, but to question, refine, and defend its outputs. This mirrors the ambiguity professionals navigate every day.

Reskilling Over Replacement

On fears of automation, Hall was blunt: the threat isn’t losing your job—it’s becoming irrelevant. While AI will displace repetitive tasks, demand for strategic thinking, creativity, and interpersonal fluency will grow. For students and mid-career workers alike, the mandate is clear: keep learning, or fall behind.

Governing the Ungoverned

In the absence of comprehensive AI regulation, Hall stressed the need for corporate guardrails. Organizations must define ethical boundaries, set disclosure norms, and build accountability into workflows. Far from stifling innovation, governance offers the stability needed for responsible scaling.

Conclusion: Rediscovering the Human Edge

The conversation closed with cautious optimism. Generative AI can democratize access, boost creativity, and push boundaries—but only when paired with integrity, context, and human discernment. As Hall put it: AI won’t replace human talent, but it will expose where our thinking is shallow. The future belongs to those who think deeply, adapt quickly, and collaborate wisely—with both humans and machines.


Key Takeaways

1. AI as a Teammate, Not a Taskmaster

The episode reframed generative AI not as a tool to blindly delegate to, but as a teammate that can spark better thinking. Rather than outsourcing entire tasks and switching off our judgment, the most successful users treat AI as a collaborator—one that helps break creative blocks, challenge assumptions, and accelerate ideation. The real innovation isn’t automation, it’s augmentation: using AI to stretch our minds, not replace them.

2. Trust Is the True Currency of the AI Era

One of the clearest insights was that in a world full of powerful AI, trust—not speed—is the real differentiator. Whether it’s disclosing AI use to clients or applying AI in sensitive internal processes, organizations must lead with transparency. Without clearly communicated boundaries, even high-performing AI can backfire. Trust, not just output, is now what clients, teams, and users value most.

3. Regulation as a Launch Pad—Not a Leash

Countering common fears, the discussion flipped the narrative on AI governance. Thoughtful regulation, the episode argued, isn't a constraint but a confidence booster. Clear standards reduce risk, enable responsible experimentation, and build the cultural readiness organizations need to innovate. Guardrails don't slow progress; they let it scale.

4. The Disappearing Task, Not the Disappearing Job

AI isn’t coming for your job—it’s coming for your routine. The panel emphasized that most roles are bundles of tasks, and AI will increasingly handle the repeatable ones (like data processing or first-draft generation). But that makes the human elements—storytelling, judgment, trust-building—more essential, not less. The key shift is from fearing obsolescence to curating a more valuable skillset.

5. Ethics and Authenticity: The New Interview Game

A sobering moment came in the discussion of hiring. With AI tools increasingly used to ace coding interviews and polish resumes, verifying skill and authenticity is becoming harder. The episode warned that both employers and educators must rethink assessment—possibly moving toward live, interactive formats that reveal how people think, not just what they can output.

6. Education Must Evolve with AI—Not Against It

Rather than banning AI in classrooms, the podcast advocated a smarter path: integration. Students must learn how to engage AI responsibly—critiquing its limitations, refining its outputs, and understanding when to lean in or step back. Instructors should reward process and reasoning, not just results. Education systems that adapt to AI won’t just survive—they’ll better prepare students for the world they’re entering.


Notable Quotes

  1. “Responsible AI is definitely a trend we’re seeing a lot of, and we talked about it less as a tool and more as a partner, something that you use in a way that helps to enhance and accelerate your productivity, but not necessarily something that you send something off to and expect it to come back perfect.”
    — Jen Hall [00:04:28 → 00:04:46] 
  2. “How can companies teach their employees to think of AI as a teammate as opposed to something that you send your work to and then get back and then turn in?”
    — Ella Turani [00:09:11 → 00:09:18] 
  3. “You have to be fully transparent on where you’re using generative AI in your process and where you’re not.”
    — Jen Hall [00:16:03 → 00:16:09] 
  4. “AI can only go so far. It’s our human judgment about when to use it and when not to, and how to use it responsibly, that really matters.”
    — Jen Hall [00:17:33 → 00:17:42] 
  5. “The most critical skill of the future is going to be maintaining relevancy and understanding and applying emerging technologies.”
    — Jen Hall [00:22:45 → 00:22:53] 
  6. “The real danger isn’t job loss—it’s relevance loss.”
    — Jen Hall [00:30:58 → 00:31:01]

