Artificial Intelligence (AI) systems, especially those based on Machine Learning (ML) techniques, are becoming ubiquitous in our society. Many ML techniques are now embedded in critical infrastructure such as healthcare, energy, national security, transportation, and finance. While ML provides insights and efficiencies, it also introduces a new threat landscape with a greater number of attack vectors. Ensuring that AI systems are secure, private, safe, fair, explainable, and aligned is becoming increasingly difficult.
Sophisticated cyber-attacks are raising serious concerns: adversaries can manipulate training data (data poisoning), nudge models into producing incorrect (evasion) or undesirable (jailbreaking) outputs, exfiltrate models themselves (model extraction), or extract sensitive information about their training data (model inversion and membership inference). Real-world cases already demonstrate these risks: carefully crafted adversarial images have fooled autonomous driving systems into misreading traffic signs, and researchers have shown how subtle data poisoning can compromise fraud-detection models used by financial institutions.
These threats undermine the cybersecurity and privacy of AI systems, making them less trustworthy. It is crucial to protect these systems to ensure their integrity in decision-making processes and maintain public trust.
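To make the evasion threat above concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a classic adversarial-example attack, applied to a toy logistic classifier. The model, weights, and numbers are illustrative assumptions, not material from the webinar:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps=0.1):
    """FGSM on a toy logistic model p = sigmoid(w . x).
    The gradient of the cross-entropy loss w.r.t. the input x
    is (p - y) * w; step in its sign direction to raise the loss."""
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical example: a point the model classifies as positive.
w = np.array([1.0, -1.0])
x = np.array([0.6, 0.1])                    # w @ x = 0.5 > 0 -> "positive"
x_adv = fgsm_perturb(x, y=1.0, w=w, eps=0.6)
print(w @ x, w @ x_adv)                     # the perturbed score flips sign
```

A small, bounded perturbation (here eps = 0.6 per coordinate) is enough to flip the model's decision, which is exactly the failure mode behind misread traffic signs.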
Join our free webinar on October 24, 2025 from 12–1 p.m. ET, to deepen your understanding of trustworthy AI and safeguard your systems against emerging threats.
What you'll learn:
- Why trust in AI matters
  - Growth in AI adoption, real-world consequences, and regulatory trends
- Understanding the threat landscape in AI systems
  - Systematically identifying risks to AI systems from adversaries: evasion, jailbreaking, data leakage, model misuse and theft, etc.
- Defense design
  - ML defense techniques: adversarial training, alignment, differential privacy, watermarking and fingerprinting, etc.
  - Software-only defenses vs. hardware-assisted defenses, e.g., using trusted execution environments (TEEs)
  - Simultaneous protection against multiple risks: failures due to uncoordinated defenses
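As a taste of one defense technique named above, here is a minimal sketch of differential privacy via the Laplace mechanism for a counting query. The dataset and query are hypothetical assumptions for illustration only:

```python
import numpy as np

def laplace_count(records, predicate, epsilon=1.0, rng=None):
    """Answer a counting query with epsilon-differential privacy.
    Adding or removing one record changes a count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices."""
    rng = rng if rng is not None else np.random.default_rng()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(0.0, 1.0 / epsilon)

# Hypothetical dataset: ages of 100 patients; count those over 60.
ages = np.random.default_rng(7).integers(18, 95, size=100)
noisy = laplace_count(ages, lambda a: a > 60, epsilon=1.0)
print(noisy)  # close to the true count, but randomized on every release
```

Smaller epsilon means stronger privacy but noisier answers; choosing that trade-off (and composing it across many queries) is part of the defense-design discussion.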
Who should attend:
This free webinar is designed for professionals, researchers, and leaders who are responsible for building, deploying, or overseeing AI systems in environments where security, privacy, and trust are critical.
It will be particularly valuable for:
- Cybersecurity professionals seeking to understand the emerging threat landscape specific to AI systems.
- Data scientists, ML engineers, and AI practitioners who want to build models that are robust and resilient to adversarial behaviour.
- IT and infrastructure managers responsible for safeguarding critical systems in healthcare, finance, energy, transportation, and national security.
- Business and technology leaders who must understand the organizational risks associated with AI systems.
- Academics and researchers exploring the intersection of trustworthy AI, adversarial machine learning, cybersecurity, and privacy.
Whether you are securing real-world deployments, shaping governance policies, or preparing your teams for the future of AI, this session will provide the insights and practical strategies you need to strengthen trust in machine learning systems.
Meet your speakers
Prof. N. Asokan
Professor of Computer Science, University of Waterloo | David R. Cheriton Chair | Executive Director, Cybersecurity and Privacy Institute
N. Asokan is a Professor of Computer Science at the University of Waterloo where he holds a David R. Cheriton Chair and serves as the Executive Director of the Cybersecurity and Privacy Institute.
Asokan's primary research theme is systems security, broadly construed, with emphasis on hardware-assisted security and on the interplay between security/privacy and AI.
Asokan is an ACM Fellow, an IEEE Fellow, and a Fellow of the Royal Society of Canada.
For more information about Asokan's work, see his website or social networks.
Sign up to attend the webinar!
Don't miss this opportunity to hear from experts, ask questions, and learn more about how you can advance your IT career.
Questions? Let's chat!
Office hours: Monday to Friday, 8:30 a.m. - 4:30 p.m. ET