Adversarial Attacks on Large Language Models

Join us for a captivating virtual event, “Adversarial Attacks on Large Language Models,” featuring distinguished speaker Zico Kolter, Associate Professor of Computer Science and Chief Scientist of AI Research for the Bosch Center for AI at Carnegie Mellon University. Professor Kolter will delve into the intriguing world of adversarial attacks, exploring vulnerabilities and defenses in large language models. Gain valuable insights into the challenges of securing AI systems against malicious manipulation and deception. Don’t miss this opportunity to deepen your understanding of the complex dynamics at play in the realm of AI security with a leading expert in the field.***

Join us on Friday, April 26, 2024, here on ThinkND.

***This description was written by ChatGPT with the following prompt: “Write a 100 word description of a virtual event titled ‘Adversarial Attacks on Large Language Models’ featuring speaker Zico Kolter, Associate Professor of Computer Science and Chief Scientist of AI Research for the Bosch Center for AI at Carnegie-Mellon University in Pittsburgh.”

Register below to receive emails about future events in the series!

Artificial Intelligence, Mendoza College of Business, Ten Years Hence, University of Notre Dame

