About Course
Build LLM-Powered Applications Like a Pro!
Welcome to the open-sourced version of my course on LLMs.
This course is one of the top-rated technical courses on building Large Language Model (LLM) applications from the ground up. So far, I've taught this course to over 1,500 professionals at Maven, Stanford, UCLA, and the University of Minnesota, helping them gain a deep understanding of Transformer architecture, Retrieval-Augmented Generation (RAG), and open-source LLM deployment.
Unlike most courses, which focus on pre-built frameworks like LangChain, this course dives into the building blocks of retrieval systems, enabling you to design, build, and deploy your own custom LLM-powered solutions.
Also featured in Stanford’s AI Leadership Series:
Stanford AI Leadership Series – Building and Scaling AI Solutions
Learning Outcomes
- Gain a comprehensive understanding of LLM architecture
- Construct and deploy real-world applications using LLMs
- Learn the fundamentals of search and retrieval for AI applications
- Understand encoder and decoder models at a deep level
- Train, fine-tune, and deploy LLMs for enterprise use cases
- Implement RAG-based architectures with open-source models
Who Is This Course For?
This course is not for beginners. It requires:
- Python programming skills
- Basic machine learning knowledge
It is designed for:
- Machine Learning Engineers
- Data Scientists
- AI Researchers
- Software Engineers interested in LLMs
What You'll Learn
- Collect and preprocess data for LLM applications
- Train and fine-tune pre-trained LLMs for specific tasks
- Evaluate model performance with appropriate metrics
- Deploy LLM applications via APIs and Hugging Face
- Address ethical concerns in AI development
What's Included?
- 29 in-depth lessons covering LLM architectures and RAG techniques
- 6 real-world projects to apply your learnings
- Interactive live sessions and direct instructor access
- Guided feedback & reflection
- Private community of peers
- Certificate upon completion
Attribution & Credits
If you use my course material, content, or research in your work, please credit me and the respective contributors.
Proper citation format:
Farooq, H. (2024). Building LLM Applications from Scratch
Stanford Continuing Studies: The AI Leadership Series
Tagging & mentions are always appreciated!
Course Syllabus
Week 1: Introduction to NLP
Understanding natural language processing fundamentals
Tokenization, embeddings, and vector representations
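As a taste of the Week 1 material, here is a minimal sketch of tokenizing text and looking up vector representations. The regex tokenizer, toy vocabulary, and random 4-dimensional "embeddings" are illustrative stand-ins, not a trained model.

```python
# Illustrative only: a toy tokenizer, vocabulary, and embedding table.
import re
import random

def tokenize(text):
    """Lowercase and split text into simple word tokens."""
    return re.findall(r"[a-z0-9']+", text.lower())

corpus = "The transformer model reads the tokenized text."
tokens = tokenize(corpus)

# Build a vocabulary: one integer id per unique token, in order of appearance.
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}

# Assign each vocabulary entry a small random vector; in a real system
# these would come from a trained embedding layer.
random.seed(0)
embeddings = {tok: [random.uniform(-1, 1) for _ in range(4)] for tok in vocab}

print(tokens)               # word tokens from the corpus
print(vocab["model"])       # integer id for 'model'
print(embeddings["model"])  # its 4-dimensional vector
```

A real pipeline would use a subword tokenizer and learned embeddings, but the mapping from text to ids to vectors is the same shape.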
Week 2: Transformers & LLM System Design
The evolution of Transformer models
Understanding encoder-decoder architectures
Week 3: Semantic Search & Retrieval
Implementing vector search for LLM applications
Introduction to RAG-based architectures
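The retrieval step covered in Week 3 can be sketched in a few lines: rank documents by cosine similarity between vectors. The tiny bag-of-words vectors and made-up documents below stand in for real sentence embeddings.

```python
# Illustrative only: bag-of-words vectors as a stand-in for real embeddings.
import math
from collections import Counter

def embed(text, vocab):
    """Count-based vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "transformers power modern language models",
    "retrieval augmented generation grounds model answers",
    "cats enjoy sleeping in the sun",
]
vocab = sorted({w for d in docs for w in d.lower().split()})

doc_vecs = [embed(d, vocab) for d in docs]
query_vec = embed("language models and transformers", vocab)

# Rank documents by similarity to the query, highest first.
ranked = sorted(range(len(docs)),
                key=lambda i: cosine(query_vec, doc_vecs[i]),
                reverse=True)
print(docs[ranked[0]])  # the transformers document ranks first
```

In a RAG system the top-ranked documents are then passed to the LLM as context; production systems swap the bag-of-words encoder for a neural embedding model and an approximate nearest-neighbor index.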
Week 4: Building a Search Engine from Scratch
Developing a custom RAG solution
Optimizing search and retrieval pipelines
Week 5: The Generation Part of LLMs
Fine-tuning models for text generation tasks
Optimizing inference for real-time applications
Week 6: Prompt-Tuning, Fine-Tuning & Local LLMs
Techniques for efficient inference & quantization
Deploying custom LLMs at scale
Post-Course Demo Day: Present your final project!