Large Language Models (LLMs) are transforming how we interact with technology, from powering chatbots and search engines to enabling advanced content generation and semantic understanding. This course provides a practical introduction to the architecture, functionality, and applications of LLMs, equipping learners with the skills to build, fine-tune, and deploy models for real-world tasks.
Building on foundational machine learning concepts and preparing you for advanced topics in generative AI and model deployment, the course explores the inner workings of LLMs, including tokenization, embeddings, attention mechanisms, and transformer architectures. You will gain practical experience by building a text classification system and exploring advanced techniques such as instruction tuning, semantic search, and multimodal model integration.
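As a taste of the material: the attention mechanism mentioned above can be sketched in plain Python. This is an illustrative, standard-library-only version of scaled dot-product attention; in the course you will work with tensor libraries such as PyTorch rather than hand-rolled lists.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V.

    Each argument is a list of equal-length vectors (lists of floats).
    """
    d = len(queries[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output is the weight-averaged mix of the value vectors.
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs
```

Because the weights sum to one, each output row is a convex combination of the value vectors, weighted toward the keys most similar to the query.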
By the end of this course, you will be equipped to apply LLMs in enterprise settings, enhance AI-driven products, and contribute to NLP and multimodal AI projects.
What you will learn
- Describe the architecture of transformer-based language models
- Prepare and embed training data
- Build and train your own small special-purpose language model
- Supplement your model’s knowledge with additional information via Retrieval-Augmented Generation (RAG)
- Evaluate your language model’s performance
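To preview the RAG topic above, here is a deliberately simplified sketch of the retrieve-then-prompt pattern. It uses toy bag-of-words vectors and cosine similarity in place of the neural embeddings and vector databases covered in the course; the function names are illustrative, not part of any library.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector (real RAG uses neural embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """The 'augmentation' step: stuff retrieved context into the model's prompt."""
    context = "\n".join(retrieve(query, documents, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The key idea is unchanged at scale: relevant documents are fetched by vector similarity and prepended to the prompt, so the model can answer with knowledge it was never trained on.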
Skills you’ll gain
- Applying pre-trained LLMs to classification and search tasks using Hugging Face Transformers
- Designing classification heads and prompt-based pipelines
- Using vector databases and RAG for semantic retrieval
- Understanding multimodal model architectures and their applications
- Understanding and implementing advanced tokenization methods
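As a glimpse of the tokenization skill listed above: byte-pair encoding (BPE), a subword method used by many LLM tokenizers, repeatedly merges the most frequent adjacent symbol pair. A minimal stdlib-only sketch of the merge-learning loop (production tokenizers, e.g. Hugging Face's, are far more elaborate):

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across all words; return the most common."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

def learn_bpe(corpus, num_merges):
    """Learn BPE merge rules from a whitespace-split corpus."""
    words = Counter(tuple(w) for w in corpus.split())
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(words)
        if pair is None:
            break
        merges.append(pair)
        words = merge_pair(words, pair)
    return merges
```

Frequent character sequences become single tokens, so common words stay whole while rare words decompose into reusable subword pieces.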
Course format
- Prerequisites:
  - Completion of a Machine Learning course or equivalent prior knowledge
  - Familiarity with PyTorch
- Project: Build a text classification system
- Delivery: Hybrid delivery, instructor-led live sessions with hands-on assignments
Course instructor

Amir Feizpour
Course Instructor, WatSPEED
Dr. Amir Feizpour is the Founder, CEO, and Chief Scientist at Aggregate Intellect, where he is building a generative business brain for service- and science-based companies. He has cultivated a global community of 5,000+ AI practitioners and researchers, driving discussions on AI research, engineering, product development, and responsible AI.
Previously, Amir served as an NLP Product Lead at Royal Bank of Canada and conducted quantum computing research at the University of Oxford, leading to high-profile publications and patents. He holds a PhD in Physics from the University of Toronto.
Beyond Aggregate Intellect, Amir actively contributes to the AI ecosystem as an advisor at MaRS Discovery District and a fractional Chief AI Officer for multiple startups. He is also a sought-after educator and speaker, engaging business executives and developers through training programs.
Under Amir’s leadership, Aggregate Intellect drives cutting-edge R&D through academic collaborations, advancing innovation at the intersection of AI and business.
