Customizing Generative AI Models
Duration: 3 days / 24 hours
Introduction
This 24-hour, hands-on course focuses on customizing Generative AI models to behave the way your applications need—using prompt engineering, RAG, parameter-efficient fine-tuning, and agentic workflows. The emphasis is on practical customization techniques rather than black-box model usage.

Objectives
By the end of this course, participants will be able to:
- Understand how LLMs work and where customization fits
- Customize model behavior using prompts, RAG, and PEFT techniques
- Evaluate and benchmark GenAI models using task-specific metrics
- Build scalable, agentic AI workflows using LangGraph

Key Takeaways
Participants will leave with:
- A practical toolkit for customizing LLM behavior without full retraining
- Hands-on experience with RAG and parameter-efficient fine-tuning
- Confidence to choose the right customization strategy for real use cases

Training Methodology (Learning by Doing)
- Hands-on labs throughout—every concept is applied immediately
- Incremental builds from prompt tuning to fine-tuned and agentic systems
- Realistic datasets and scenarios instead of toy examples
- Experiment, measure, and refine using evaluation-driven development
Course Outline
Understanding Generative AI
- Definitions: Intelligence, AI, Generative AI
- Differences from Traditional AI
- Tokenization and how GenAI works
- Benefits and challenges
- Popular models and frameworks
- LLM settings and parameters
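A minimal sketch of the settings covered in this module, assuming the OpenAI Python SDK (v1+); the model name and the temperature, top_p, and max_tokens values are placeholders, not course requirements.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One request with explicit sampling parameters: lower temperature and top_p
# make the output more deterministic; max_tokens bounds the response length.
response = client.chat.completions.create(
    model="gpt-4o-mini",          # placeholder model name
    messages=[{"role": "user", "content": "Explain tokenization in one sentence."}],
    temperature=0.2,
    top_p=0.9,
    max_tokens=150,
)
print(response.choices[0].message.content)
```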
Prompt Engineering
- Direct prompting (zero-shot)
- One-shot and few-shot examples (see the sketch after this list)
- Chain-of-Thought and Tree-of-Thoughts
- Common mistakes and refinement techniques
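The few-shot pattern from this module can be assembled in plain Python; the sentiment-classification examples below are illustrative only.

```python
# A minimal few-shot prompt: a handful of labelled examples followed by the new input.
examples = [
    ("The battery died after two days.", "negative"),
    ("Setup took thirty seconds and it just worked.", "positive"),
]
query = "The screen is bright but the speakers crackle."

prompt_lines = ["Classify the sentiment of each review as positive or negative.", ""]
for text, label in examples:
    prompt_lines.append(f"Review: {text}\nSentiment: {label}\n")
prompt_lines.append(f"Review: {query}\nSentiment:")

few_shot_prompt = "\n".join(prompt_lines)
print(few_shot_prompt)
```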
Building LLM-Based Applications
- Design building blocks
- Accessing LLMs via APIs
- Prompt templates (sketched below)
- Conversational completion models
- Batch APIs and cost control
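A plain-Python prompt template that yields the messages list a chat-completions-style API expects; the system instruction, field names, and product details are illustrative stand-ins.

```python
# Templates for the system and user turns; fields are filled per request.
SYSTEM = "You are a concise support assistant for {product}."
USER = "Customer question: {question}\nAnswer in at most {max_sentences} sentences."

def build_messages(product: str, question: str, max_sentences: int = 3) -> list[dict]:
    """Return a chat-style messages list ready to pass to an LLM API."""
    return [
        {"role": "system", "content": SYSTEM.format(product=product)},
        {"role": "user", "content": USER.format(question=question, max_sentences=max_sentences)},
    ]

messages = build_messages("Acme Router X2", "How do I reset to factory settings?")
print(messages)
```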
Evaluating Generative AI Models
- GenAI vs Predictive AI evaluation
- Metrics and benchmarking
- Custom criteria and chat-specific metrics
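A toy task-specific evaluation in the spirit of this module: exact match plus required-keyword coverage over a tiny hand-made eval set. The data and criteria are placeholders for the course datasets.

```python
# Each record holds a reference answer, the model's answer, and required keywords.
eval_set = [
    {"reference": "Paris", "answer": "Paris", "required": ["Paris"]},
    {"reference": "7 days", "answer": "about a week", "required": ["week", "7"]},
]

def exact_match(ref: str, ans: str) -> bool:
    return ref.strip().lower() == ans.strip().lower()

def keyword_coverage(ans: str, required: list[str]) -> float:
    hits = sum(1 for kw in required if kw.lower() in ans.lower())
    return hits / len(required)

em = sum(exact_match(r["reference"], r["answer"]) for r in eval_set) / len(eval_set)
cov = sum(keyword_coverage(r["answer"], r["required"]) for r in eval_set) / len(eval_set)
print(f"exact match: {em:.2f}, keyword coverage: {cov:.2f}")
```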
Prompt Engineering for Customization
- Theory and behavior modification
- Automatic and dynamic prompt generation (sketched below)
- Troubleshooting and refinement
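One way to make prompts dynamic, as this module discusses: select the few-shot examples most similar to the incoming query before assembling the prompt. The example pool and the naive word-overlap score below are stand-ins for whatever selection method the labs use.

```python
# A small pool of labelled examples to choose from at prompt-build time.
pool = [
    ("Refund not received after 10 days", "billing"),
    ("App crashes when uploading photos", "bug"),
    ("How do I change my plan?", "account"),
]

def overlap(a: str, b: str) -> int:
    """Naive relevance score: number of shared lowercase words."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def build_prompt(query: str, k: int = 2) -> str:
    best = sorted(pool, key=lambda ex: overlap(ex[0], query), reverse=True)[:k]
    shots = "\n".join(f"Ticket: {t}\nCategory: {c}" for t, c in best)
    return f"Classify the support ticket.\n{shots}\nTicket: {query}\nCategory:"

print(build_prompt("The app crashes whenever I upload a video"))
```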
Retrieval Augmented Generation (RAG)
- Embeddings and indexing
- Retrieval techniques: vector, full-text, fusion
- Filtering, reranking, looping
- Contextual generation
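A toy end-to-end RAG loop covering the steps above: embed, index, retrieve the best match, and splice it into the prompt. The bag-of-words "embedding" is a stand-in for a real embedding model and vector store.

```python
import numpy as np

docs = [
    "The warranty covers hardware faults for 24 months.",
    "Returns are accepted within 30 days with the original receipt.",
]

def embed(text: str, vocab: list[str]) -> np.ndarray:
    """Toy embedding: word counts over a fixed vocabulary."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

vocab = sorted({w for d in docs for w in d.lower().split()})
index = np.stack([embed(d, vocab) for d in docs])   # the "vector index"

query = "How long is the warranty?"
q = embed(query, vocab)

# Cosine similarity against every indexed document, then top-1 retrieval.
scores = index @ q / (np.linalg.norm(index, axis=1) * (np.linalg.norm(q) + 1e-9) + 1e-9)
context = docs[int(scores.argmax())]

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```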
Parameter-Efficient Fine-Tuning (PEFT)
- Comparison with traditional fine-tuning
- Prompt-based and low-rank adaptation techniques (LoRA sketch after this list)
- IA3 and other methods
- Synthetic data generation and evaluation
- Bias and data balance considerations
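A hedged LoRA sketch using Hugging Face peft and transformers; the base model, target modules, and ranks are placeholders to adapt to whichever architecture the labs use.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Attach low-rank adapters to a small causal LM; only the adapters are trained.
base = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder base model

config = LoraConfig(
    r=8,                        # adapter rank
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2; varies by model
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # shows how few parameters LoRA actually trains
```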
Agentic AI with LangGraph
- LangGraph principles
- Multi-agent workflows (minimal example after this list)
- Communication, coordination, error handling
- Scaling agentic applications
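A minimal two-node LangGraph workflow, assuming the StateGraph API of recent LangGraph releases; the node bodies are stubs where the labs would call models or tools.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

# Shared state passed between nodes; each node returns a partial update.
class State(TypedDict):
    topic: str
    notes: str
    draft: str

def research(state: State) -> dict:
    return {"notes": f"Key facts about {state['topic']}"}   # stub for an LLM/tool call

def write(state: State) -> dict:
    return {"draft": f"Report based on: {state['notes']}"}  # stub for an LLM call

graph = StateGraph(State)
graph.add_node("research", research)
graph.add_node("write", write)
graph.set_entry_point("research")
graph.add_edge("research", "write")
graph.add_edge("write", END)

app = graph.compile()
print(app.invoke({"topic": "parameter-efficient fine-tuning"}))
```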