In the third of eight lectures in the Ten Years Hence Speaker Series, guest speaker Ahmed Abbasi, the Joe and Jane Giovanini Professor of IT, Analytics, and Operations at Notre Dame’s Mendoza College of Business, led a thoughtful foray into the intertwined worlds of artificial intelligence (AI) and social welfare. Introduced by Professor James S. O’Rourke, Abbasi presented a lecture titled “AI for Social Good – How Do We Get There?”, offering listeners a probing analysis of AI’s trajectory and its potential to serve the common good.
The dialogue commenced with a historical overview, highlighting the technological milestones of 2007-2008 that set the stage for the present-day data explosion. With the rise of smartphones and an increasing reliance on digital platforms such as Wikipedia and social media, data generation reached unprecedented levels. This surge has been instrumental in the growth of large language models such as GPT-4, reportedly running on more than 1.75 trillion parameters. Such remarkable advancement carries a caveat, however: as Abbasi remarked, research remains disproportionately concentrated on optimization rather than on leveraging AI’s potential for social good. This observation underscored the urgency of a “common good framework,” a directive promoting accessible public datasets, clear success metrics, and safeguards against the inadvertent misuse of data science.
The event then moved into a critical examination of inherent biases and the detrimental “publish or perish” culture pervading academia, both of which, as Abbasi and O’Rourke concurred, stymie progress toward research that serves the public interest. Initiatives such as ClimateBench and AI Fairness 360 were cited as exemplary frameworks that could guide research toward more tangible benefits for societal welfare.
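For readers curious what working with a toolkit like AI Fairness 360 looks like in practice, the following is a minimal sketch using the open-source aif360 Python package on toy data; the column names, group definitions, and values are illustrative assumptions, not anything presented in the lecture. It computes two standard group-fairness metrics over a small tabular dataset.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data (hypothetical): 1 in "outcome" is the favorable label,
# "gender" is the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "outcome": [1, 0, 1, 0, 1, 0, 0, 1],
    "gender":  [1, 1, 1, 1, 0, 0, 0, 0],
    "score":   [0.9, 0.4, 0.8, 0.3, 0.7, 0.2, 0.5, 0.6],
})

# Wrap the dataframe in an aif360 dataset object.
dataset = BinaryLabelDataset(
    df=df,
    label_names=["outcome"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

# Compare favorable-outcome rates between privileged and unprivileged groups.
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())
```

A statistical parity difference near zero, or a disparate impact ratio near one, suggests the favorable outcome is distributed similarly across groups; the toolkit also offers mitigation algorithms beyond the metrics shown here.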
At the heart of the discussion was the what, the why, and the how of propelling AI into a force for societal good, with Abbasi’s academic and professional background providing a sturdy platform for his insights. His address covered four key areas: the pivotal moment for AI credibility; the guiding principles for applying AI in society; the definition of both the destination and the starting point for this journey; and a proposed roadmap for beneficial AI evolution.
As the narrative unfolded, listeners were taken on a tour of the AI landscape, from the technology’s growing command of language and visuals to the remarkable adoption rate that followed the release of ChatGPT in 2022. The discussion took a light-hearted turn when Abbasi humorously considered the prospect of admitting robots to MBA programs, an anecdote that brought a moment of levity to an otherwise earnest conversation.
The episode then delved into the contentious issues of AI and intellectual property ownership, as well as the moral case for internet access as a driver of education and equitable opportunity. As the corporate landscape vies for shareholder approval, the professors noted a palpable tension between profit motives and commitments to Environmental, Social, and Governance (ESG) endeavors.
Abbasi offered a real-world application of AI for social good: an app he is developing to address violence in South Bend by connecting at-risk individuals with necessary resources and providing AI-driven therapeutic assistance. This marked a critical juncture in the discussion, juxtaposing the transformative potential of AI against the looming specter of ethical ambiguity, particularly in predictive applications.
The episode crystallized around an engaging exchange when O’Rourke prompted a discussion of how to balance AI’s convenience with ethical imperatives. This segued into the broader theme of innovating responsibly and the inherent challenge of marrying these two objectives in a rapidly advancing technological domain.
Abbasi then presented a statistical picture of AI’s academic landscape, showing prolific growth in AI research counterbalanced by a concerning decline in work at its intersection with social good. The data underscored the stark reality of a trajectory moving away from endeavors that might materially benefit society.
Further, the professors highlighted innovative uses of AI in social research, such as Yelp’s introduction in 2020 of features that enabled business owners to self-identify by race and gender. That change opened the door to research on bias in economic contexts, reflecting a shift toward work that captures intersectional issues in society.
Mental health was underscored as a domain dominated by diagnostic research within AI, while interventions and pragmatic solutions lag behind. The result is a research landscape tilted toward identification over action, a gap the professors urged be closed through programmatic research that is not industry funded.
Language models emerged as informational lighthouses, illuminating societal perspectives on poverty, gender, and ethnicity, though not without the shadow of widespread human biases shaping these digital constructs. Notre Dame’s Humanitarian Operations Management Lab (HOPE) serves as a beacon in this field, working to improve water access in sub-Saharan Africa through the lens of AI.
Closing the discussion, Abbasi articulated a clear vision for the future of AI: one guided by an ethical compass and invested in alleviating societal disparities, a future in which technology, ethics, and social justice converge to navigate an increasingly complex and interconnected world.