AI for Social Good: How Do We Get There?

Join us for an inspiring virtual event, “AI for Social Good: How Do We Get There?” featuring distinguished speaker Ahmed Abbasi, Joe and Jane Giovanini Professor of IT, Analytics, and Operations at Notre Dame’s Mendoza College of Business. Professor Abbasi will illuminate pathways to leverage AI for societal benefit, addressing ethical considerations, responsible deployment, and impactful applications in areas such as healthcare, education, and sustainability. Gain valuable insights into harnessing the transformative potential of AI to address pressing social challenges. Don’t miss this opportunity to explore how we can collectively advance towards a future where AI serves the greater good.***

*** This description was written by ChatGPT using the prompt: “Write a 100 word description of a virtual event titled “AI for Social Good: How Do We Get There?” featuring speaker Ahmed Abbasi, Joe and Jane Giovanini Professor of IT, Analytics, and Operations in the Mendoza College of Business at Notre Dame.”

In the third of eight lectures in the Ten Years Hence Speaker Series, esteemed guest Professor Ahmed Abbasi, Joe and Jane Giovanini Professor of IT, Analytics, and Operations at Notre Dame’s Mendoza College of Business, led a thoughtful foray into the intertwined worlds of artificial intelligence (AI) and social welfare. Introduced by Professor James S. O’Rourke, Abbasi presented a lecture titled “AI for Social Good – How Do We Get There?”, offering listeners a profound analysis of AI’s trajectory and its potential to serve the common good.

The dialogue commenced with a historical overview, highlighting the technological milestones of 2007–2008 that set the stage for the present-day data explosion. With the rise of smartphones and a growing reliance on digital platforms such as Wikipedia and social media, data generation reached unprecedented levels. This surge has been instrumental in the growth of large language models such as GPT-4, which reportedly leads the charge with over 1.75 trillion parameters. Such remarkable advancements carry a caveat, however; as Abbasi remarked, a disproportionate concentration of research focuses on optimization rather than on leveraging AI’s potential for social good. This observation underscored the urgency of a “common good framework,” a directive promoting accessible public datasets, shared success metrics, and safeguards against the inadvertent misuse of data science.

The event moved into a critical examination of inherent biases and the detrimental “publish or perish” culture pervading academia, which, as Abbasi and O’Rourke concurred, stymie progress toward research that serves the public interest. Initiatives like ClimateBench and AI Fairness 360 illustrated the alternative; both were cited as exemplary frameworks that could guide research toward more tangible benefits for societal welfare.

At the heart of the discussion was the what, the why, and the how of turning AI into a force for societal good, with Abbasi’s academic and professional background providing a sturdy platform for his insights. His address swept across four key areas: the pivotal moment for AI credibility; the guiding principles for AI application in society; the definition of both the destination and the starting point for this journey; and a proposed roadmap for beneficial AI evolution.

As the narrative unfolded, listeners were taken on a journey through the AI landscape, from its digital dominion over language and visuals to the remarkable adoption rate following the release of ChatGPT in 2022. The discussion took a light-hearted twist when Professor Abbasi humorously considered the prospect of admitting robots to MBA programs – an anecdote that brought a moment of levity to an otherwise earnest conversation.

The episode then delved into the contentious issues of AI and intellectual property ownership, as well as the moral case for internet access as a driver of education and equitable opportunity. As the corporate landscape vies for shareholder approval, the professors noted a palpable tension between profit motives and commitments to Environmental, Social, and Governance (ESG) endeavors.

Abbasi offered a real-world application of AI for social good: an app he is developing to address violence in South Bend. The solution aims to connect at-risk individuals with necessary resources and provide AI-driven therapeutic assistance. This marked a critical juncture in the discussion, juxtaposing the transformative potential of AI against the looming specter of ethical ambiguity, particularly in predictive applications.

The episode crystallized around an engaging exchange when Professor O’Rourke prompted a discourse on balancing AI’s convenience with ethical imperatives. This segued into the broader theme of innovating responsibly, and the inherent challenges of marrying these two objectives in a rapidly advancing technological domain.

Abbasi painted a statistical panorama of AI’s academic landscape, demonstrating its prolific growth counterbalanced by a concerning decline in the intersection with social good research. This graphical revelation underscored the stark reality of the current trajectory away from endeavors that might materially benefit society.

Further, the professors highlighted innovative uses of AI in social research, such as Yelp’s introduction in 2020 of features that enabled business owners to self-identify their race and gender. This opened the door to research on bias in economic contexts, marking a shift toward work that captures intersectional issues in society.

Mental health was underscored as a domain where AI research is dominated by diagnosis, while interventions and pragmatic solutions lag behind. This depicted a research landscape tilted toward identification over action – a gap the professors urged closing through non-industry-funded programmatic research.

Language models emerged as informational lighthouses, illuminating societal perspectives on poverty, gender, and ethnicity – but not without the shadow of widespread human biases influencing these digital constructs. Notre Dame’s Humanitarian Operations Management Lab (HOPE) serves as a beacon in this field, endeavoring to ameliorate water access issues in sub-Saharan Africa through the lens of AI.

Closing the discussion, Abbasi articulated a clear vision for the future of AI – one guided by an ethical compass and invested in alleviating societal disparities. A future where technology, ethics, and social justice converge to navigate an increasingly complex and interconnected world.


  1. Data Explosion’s Double-Edged Sword: The proliferation of data since the smartphone revolution has been a crucial inflection point. Yet, it’s a stark reminder that while we’re creating this wealth of information, we must steer its application towards societal benefit rather than mere technological refinement. [00:22:33] 
  2. Ownership and Ethics in AI’s Shadow: The questions about who owns the intellectual property created by AI touch on a broader theme of ethics in tech and serve as a call to remain vigilant about the ethical implications of our digital footprints, considering the fine line between creative assistance and creative ownership in the content we create or consume. [01:05:00] 
  3. Common Good Over Individual Success: The concept of shifting from a Common Task Framework to a Common Good Framework underscores the necessity to look beyond individual accolades in academia. [00:27:41] 
  4. Unintended Consequences of Innovation: Discussions around AI’s predictions of violence and ethical implications offer insights into the weight of accountability in technological advancements. As we harness the power of predictive analytics in healthcare, safety, or marketing, it becomes critical to approach these tools with caution, ensuring they serve to better humanity, not undermine it. [01:13:00]

  • Sentience and Artificial General Intelligence: “Whether or not you need to be sentient to have general intelligence, first of all, is probably a better debate to have; if you think about living objects, fish are generally not viewed as being sentient.”
    Professor Ahmed Abbasi [01:06:25→01:06:38] 
  • Artificial Intelligence Evolution: “What’s changed with AI is that because the pace of change is so fast, the innovation is moving at light speed.”
    Professor Ahmed Abbasi [01:14:50→01:14:57]
  • Positive Play in Gaming: “All the research shows gaming can be actually beneficial in certain quantities, but right now, for some people, it’s really hard to do it because the culture is not inclusive. For example, for female gamers, how do you create a more female-friendly gaming environment? It’s called positive play.”
    Professor Ahmed Abbasi [00:53:24→00:53:43]
  • Influence of Industry on Research: “So we’re very much at the mercy of industry in terms of what we study and how we study it. Why don’t we do field data collections and have public data sets on these topics, where we can vet them and make sure that they aren’t biased by the companies?”
    Professor Ahmed Abbasi [00:44:57→00:45:09]
  • The Future of AI and Privacy: “If you think about the principles of innovation and the principles of precaution, it’s essential that they go hand in hand.”
    Professor Ahmed Abbasi [01:14:09→01:14:15]
  • Investment Wisdom in a Saturated Market: “All the money in the market was chasing a dozen or fifteen or a thousand bad ideas. So he said, our task really is not to find the money. You will always have the money. Our task is to find good ideas.”
    Professor James S. O’Rourke [00:59:11→00:59:25]
  • Ownership of AI Creations: “The question was, who owns intellectual property written by a robot or an algorithm? And there were two views of this. The intellectual property and copyright lawyers all said, ‘Are you nuts? Inanimate objects can’t own anything; copyright and intellectual property ownership is for human beings, and that’s the way it’s always going to be.’ But some others, including both engineers and philosophers, said, ‘What about the moment AI becomes sentient and has a personality?’ And of course the theologians are asking if it has a soul. More appropriately, does it get the revenues from the sale of its own creation? The people who programmed it didn’t write the program that made the money. Something clever created by the AI algorithm did that on its own.”
    Professor James S. O’Rourke [01:05:00→01:05:47]
  • Access to AI as a Common Good: “Increasingly, access to the internet is seen as essential rather than a luxury.”
    Professor James S. O’Rourke [01:02:45→01:02:51]
  • The Social Good Paradox: “There is this social good paradox. We are all part of this behavioral modification engine machine. When I showed those technologies that have a lot of adoption, the TikToks and YouTubes, their model is to maximize engagement– not just get the users, but keep them engaged.”
    Professor Ahmed Abbasi [01:09:20→01:09:35]
