AI Ethics by Design

Gain a strategic edge in the evolving AI landscape. Join Vatican AI consultant Father Paolo Benanti and CMU machine learning expert Professor Aarti Singh for a high-level dialogue on innovation and human dignity. This fireside chat bridges ethics and engineering, offering essential insights for leaders committed to building responsible, human-centric technology.

The world is in a transformative era, with AI revolutionizing industries, reshaping innovation, and unlocking opportunities once thought impossible. Its vast potential inspires optimism for a future where technology drives progress across industries, governments, NGOs, and society as a whole. With this promise, however, comes significant responsibility. Concerns over bias, inequitable access, safety vulnerabilities, and ethical uncertainty highlight the urgent need for a guiding framework. RISE (Responsible, Inclusive, Safe, and Ethical) AI fulfills this role, ensuring that AI technologies are developed and applied responsibly, inclusively, safely, and ethically.

The RISE AI Conference provides a unique platform to explore how artificial intelligence can be harnessed to tackle complex contemporary societal challenges while upholding the principles of RISE. The inaugural RISE AI Conference took place October 6–8, 2025, at the University of Notre Dame, hosted by the Lucy Family Institute for Data and Society.

For more information, please visit the RISE AI Conference website.

The RISE AI Conference serves as a critical strategic junction, fusing Vatican-led ethical frameworks with Carnegie Mellon's world-leading engineering. In an era when efficiency often eclipses integrity, this dialogue addresses the imperative of centering human dignity. By bridging theological and technical perspectives, the conference moves beyond innovation for its own sake: it offers a roadmap for ensuring that AI deployments actively drive integral human development, transforming raw technological power into a tool that serves collective progress rather than mere mechanical optimization.
Father Benanti rejects the “medieval” dichotomy of technology as inherently good or evil, arguing instead that AI acts as a mechanism for social order and power displacement. Just as a railway station dictates access to opportunity, algorithmic “if-this-then-that” logic reshapes societal power dynamics. True “AI for Good” is the transformative element that evolves innovation into “integral human development.” This requires a recursive, ongoing process of questioning how technological choices impact human rights and dignity. Rather than a static label, ethics in the digital age is a constant negotiation of power. Organizations that recognize this transition—from viewing AI as a neutral object to a social architect—position themselves to lead with integrity in an increasingly complex global marketplace.
Professor Aarti Singh demonstrates the “Developer-Deployer Continuum” through her work in disaster management, such as assessing building damage from drone footage after a hurricane. She emphasizes that AI tools must be “co-discovered” with stakeholders like the Red Cross rather than simply “delivered.” This collaborative approach ensures that technology addresses specific emergency needs without replacing human judgment. In high-stakes environments where seconds matter, AI serves as an informational aid that allows responders to act quickly while maintaining oversight. By involving stakeholders throughout the entire development pipeline, researchers ensure that AI remains a decision-support tool tailored to real-world contexts. This mandate for co-discovery is essential for building trust and ensuring that technological solutions are effective, ethical, and operationally viable in crisis scenarios.
Reliability remains a central challenge, particularly regarding the “99% error-free” threshold. Father Benanti notes that while 99% sounds high, the cumulative risk across a 2,000-step process—such as 33 hours of continuous coding—makes human supervision mandatory within “human-compatible time.” This reality necessitates “constitutional guardrails,” especially for “Software-Defined Entities.” Using the example of cars receiving over-the-air updates, Benanti illustrates how software can fundamentally alter a product’s nature overnight, turning a vehicle into a potential security risk. This “software fluidity” creates a massive regulatory bottleneck and a strategic risk for product liability. Traditional legislative models are insufficient; we must develop frameworks that account for objects whose essence shifts with a single update.
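The compounding risk Benanti describes is easy to quantify. A minimal sketch (the 99% per-step figure and the 2,000-step count come from the talk; the calculation itself is an illustrative assumption, not his):

```python
import math

# Illustration of how a 99% per-step success rate compounds
# across a long autonomous process (figures from Benanti's example).
per_step_success = 0.99
steps = 2000

# Probability the entire 2,000-step run completes without a single error.
overall_success = per_step_success ** steps
print(f"End-to-end success probability: {overall_success:.2e}")

# How many 99%-reliable steps before a run is more likely to fail than not?
break_even = math.log(0.5) / math.log(per_step_success)
print(f"Odds drop below 50% after ~{break_even:.0f} steps")
```

With these numbers, the chance of an error-free run is on the order of 10⁻⁹, and the odds tip against success after only about 69 steps, which is why supervision within “human-compatible time” matters rather than a check at the end.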
Navigating these complexities requires a “virtuous cycle” of interdisciplinary research. As Moderator Nitesh Chawla observed, the university is the ideal laboratory for training future leaders. By scaling AI literacy through inclusive models that “train the trainers,” we can ensure technology remains a servant to human development and collective societal progress.

These insights represent the current frontier of AI policy and implementation, offering a strategic advantage for organizations aiming to lead in the responsible tech landscape.
  • AI as Social Order: Algorithmic choices act as power displacements, dictating societal access and defining what is prioritized or excluded.
  • The Co-Discovery Mandate: Stakeholder engagement throughout the entire pipeline is non-negotiable; AI must be developed alongside those who deploy it to address actual human needs.
  • Ethical Triage: Adapting bioethics frameworks to AI allows for systematic risk assessment, particularly when serving “non-competent” clients or operating in high-stakes environments.
  • Software Fluidity vs. Legislative Stability: The emergence of “Software-Defined Entities” requires new constitutional guardrails to regulate objects that change their nature via over-the-air updates.
  • Inclusive Data Pedagogy: The “Train the Trainer” model scales global AI literacy, allowing local educators to co-design content—such as the classification of Navajo jewelry—that reflects community-specific data and cultural values.
These takeaways redefine the “Responsible AI” landscape as a dynamic, stakeholder-driven process focused on human flourishing and long-term strategic resilience.

  • “AI for good is the element that transform innovation in human development.” — Father Paolo Benanti
  • “Developing AI for social good… really requires I would say developer and deployer together working with the stakeholders throughout the entire pipeline.” — Professor Aarti Singh
  • “What people say, what people do, and what people say they do are entirely different things… we all hallucinate.” — Moderator Nitesh Chawla
  • “In which way you will regulate an object that can change the nature with a software upgrade? And that is something that is struggling on society.” — Father Paolo Benanti
  • “What is a piece of data? It’s something from reality that you simple make a judgment distinguish from noise… Every time that we talk about data… we decided something else is noise.” — Father Paolo Benanti

Health and Society · Science and Technology · Lucy Family Institute for Data & Society · University of Notre Dame · Artificial Intelligence
