GENERATIVE AI MASTERY
Personal training for Experienced Data Professionals only
Course Length: 2 Months
Monday to Friday: 3 hours every day (8 pm to 11 pm)
Format: Personal Training (Live Online)
Fee: 95,000/-
Machine Learning & Deep Learning Refresher
Review key ML and DL concepts, with a focus on supervised/unsupervised learning, CNNs, RNNs, and gradient descent.
Project: Build and evaluate a basic image classification model.
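A minimal sketch of what the refresher project can look like in PyTorch, assuming torchvision's MNIST as the example dataset (any image dataset works); it ties the CNN and gradient-descent review together in one training loop:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Small CNN for 28x28 grayscale images (MNIST used purely as an illustrative dataset)
class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

train_ds = datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor())
loader = DataLoader(train_ds, batch_size=64, shuffle=True)

model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:            # one pass shown; loop over epochs in practice
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                      # gradient descent step via backpropagation
    optimizer.step()
```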
Transformers: From Neural Networks to Attention Mechanisms
History and evolution: From neural networks to RNNs, LSTMs, and finally Transformers.
Detailed breakdown of self-attention and transformer architecture.
Project: Implement a simplified transformer for a text classification task.
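To make the self-attention breakdown concrete, here is a minimal single-head scaled dot-product attention sketch in PyTorch (illustrative only; real transformers add multiple heads, masking, and positional encodings):

```python
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention, kept deliberately simple."""
    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x):                            # x: (batch, seq_len, d_model)
        Q, K, V = self.q(x), self.k(x), self.v(x)
        scores = Q @ K.transpose(-2, -1) / math.sqrt(Q.size(-1))
        weights = torch.softmax(scores, dim=-1)      # attention weights over the sequence
        return weights @ V                           # context vectors, one per token

x = torch.randn(2, 10, 64)                           # toy batch: 2 sequences, 10 tokens, d_model=64
print(SelfAttention(64)(x).shape)                    # torch.Size([2, 10, 64])
```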
Exploring Transformer Architectures
Study various transformer models, including BERT, GPT, and T5, to understand their unique architectures and use cases.
Project: Fine-tune BERT and GPT models on the same text-based task and compare their performance.
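One possible starting point for the comparison project, assuming the Hugging Face transformers and datasets libraries and SST-2 as the classification task; the model name is swappable (GPT-style models additionally need a padding token set):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"        # repeat with a GPT-family checkpoint to compare
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1, per_device_train_batch_size=16),
    train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),   # small subset for a quick run
    eval_dataset=encoded["validation"],
)
trainer.train()
print(trainer.evaluate())
```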
Advanced Use of Pre-Trained LLMs
Hands-on with major pre-trained LLMs (e.g., GPT-4, LLaMA) and understanding prompt engineering.
Experiment with prompt tuning and parameter adjustments for different tasks.
Project: Design prompts for LLMs to perform specific business-oriented tasks (e.g., summarization, Q&A).
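A hedged sketch of the prompt-design project using the OpenAI Python client; the model name, prompt wording, and example text are placeholders, and the same pattern carries over to any chat-style LLM API:

```python
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment

def summarize(text, style="three bullet points for an executive audience"):
    """Task-specific prompt: a role, explicit constraints, and the input to transform."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",                       # placeholder model name
        temperature=0.2,                           # lower temperature for factual summarization
        messages=[
            {"role": "system", "content": "You are a concise business analyst."},
            {"role": "user", "content": f"Summarize the following report as {style}:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

print(summarize("Q3 revenue grew 12% driven by the APAC region, while churn rose to 4%."))
```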
Components of LLMs: Tokenizers and Embedding Models
Understanding tokenization and embedding creation, with emphasis on subword tokenization.
Project: Build a custom tokenizer and simple embedding model.
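A minimal sketch of the tokenizer project, assuming the Hugging Face tokenizers library for subword (BPE) training and a plain PyTorch embedding table as the "simple embedding model"; corpus.txt is a placeholder file name:

```python
import torch
import torch.nn as nn
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Train a small BPE (subword) tokenizer from scratch on a local text corpus
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=5000, special_tokens=["[UNK]", "[PAD]"])
tokenizer.train(files=["corpus.txt"], trainer=trainer)   # corpus.txt: placeholder corpus file

encoding = tokenizer.encode("Generative models tokenize text into subwords.")
print(encoding.tokens)                                   # subword pieces

# A "simple embedding model": a trainable lookup table mapping token ids to vectors
embedding = nn.Embedding(tokenizer.get_vocab_size(), 128)
vectors = embedding(torch.tensor(encoding.ids))
print(vectors.shape)                                     # (num_tokens, 128)
```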
Creating a Vector Database and Integrating Search
Step-by-step creation of a vector database for storing and searching embeddings.
Practical implementation with libraries such as FAISS or Pinecone.
Project: Build a mini search engine using custom embeddings and a vector database.
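A minimal sketch of the mini search engine, assuming sentence-transformers for the embeddings and FAISS for the index; the model name and documents are illustrative:

```python
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "LoRA adapts large models with low-rank weight updates.",
    "FAISS provides fast similarity search over dense vectors.",
    "Tokenizers split text into subword units before embedding.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")              # small, widely used embedding model
embeddings = model.encode(docs, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(embeddings.shape[1])               # inner product = cosine on normalized vectors
index.add(embeddings)

query = model.encode(["how do I search embeddings quickly?"],
                     normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, 2)                         # top-2 nearest documents
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```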
Retrieval-Augmented Generation (RAG) and Its Components
Breakdown of the RAG architecture, which combines LLMs with vector databases for grounded question answering.
Project: Implement a basic RAG pipeline to answer questions from a custom dataset.
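A hedged sketch of a basic RAG pipeline that plugs into the FAISS index and embedding model from the previous sketch; the prompt template is simplified and the generation model name is a placeholder:

```python
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment

def rag_answer(question, index, docs, embed_model, k=3):
    """Retrieve the top-k chunks, then ask the LLM to answer strictly from them."""
    q_vec = embed_model.encode([question], normalize_embeddings=True).astype("float32")
    _, ids = index.search(q_vec, k)
    context = "\n\n".join(docs[i] for i in ids[0])

    prompt = (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",                           # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```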
Fine-Tuning Techniques for LLMs and Embeddings
Methods for fine-tuning LLMs, embedding models, and tokenizers (including LoRA and QLoRA).
Hands-on with distributed fine-tuning on multi-GPU clusters.
Project: Fine-tune an embedding model on domain-specific data and analyze the results.
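A minimal sketch of the LoRA portion of this module using the Hugging Face peft library; the base model, rank, and target modules are illustrative choices, not prescriptions:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "gpt2"                                     # illustrative small base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                          # rank of the low-rank update matrices
    lora_alpha=16,                # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],    # GPT-2's attention projection; module names vary by architecture
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # typically well under 1% of the base model's weights
# Train with Trainer or a standard loop on domain-specific text; for QLoRA,
# load the base model in 4-bit (bitsandbytes) before applying the same LoRA config.
```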
Evaluating LLM and Embedding Models
Explore techniques for model evaluation, including perplexity, accuracy, and human evaluation for generative tasks.
Project: Create a framework for evaluating different LLMs based on specific metrics.
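A minimal sketch of one metric from this module, perplexity of a causal LM on held-out text, assuming Hugging Face transformers and GPT-2 as the illustrative model:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                                  # illustrative model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

text = "Retrieval-augmented generation grounds model answers in retrieved documents."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy over predicted tokens
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"perplexity = {math.exp(loss.item()):.2f}")   # exp(mean negative log-likelihood)
```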
Multi-GPU and Distributed Computing for Scaling Models
Learn to distribute workloads with PyTorch DistributedDataParallel (DDP), DataParallel, and Fully Sharded Data Parallel (FSDP).
Project: Scale the fine-tuning of a transformer model on multiple GPUs and assess improvements.
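A minimal sketch of the multi-GPU workflow with PyTorch DistributedDataParallel, assuming a torchrun launch with one process per GPU; the linear layer stands in for the transformer being fine-tuned:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each spawned process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = nn.Linear(512, 512).cuda(local_rank)     # stand-in for the transformer
    model = DDP(model, device_ids=[local_rank])      # gradients are all-reduced across GPUs
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                              # toy training loop
        x = torch.randn(32, 512, device=local_rank)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()   # launch with: torchrun --nproc_per_node=2 train_ddp.py
```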
Portfolio Projects
Curate projects from the Learning Phase to highlight generative AI skills.
Focus on creating GitHub repositories with well-documented code, sample outputs, and explanations.
LinkedIn & GitHub Optimization
Update LinkedIn profile with relevant skills, project links, and descriptions of hands-on generative AI work.
Optimize GitHub profile with showcased projects, README files, and tutorials for better visibility.
Resume Tailoring for Generative AI Roles
Emphasize key generative AI skills and projects on the resume.
Include certifications, contributions to open-source generative AI projects, and any notable achievements in AI.
Explaining Generative AI Projects and Contributions
Practice presenting key projects, with a focus on the business impact and technical challenges.
Develop short, effective responses for behavioral and technical questions regarding generative AI.
Technical Discussions on Generative AI
Prepare for technical discussions on foundational concepts, architectures, and recent trends in the field.
Practice explaining complex concepts (e.g., attention mechanisms, tokenization) in a simplified manner.
Role-Specific Skill Emphasis
Highlight skills relevant to various roles in generative AI, such as model evaluation, deployment, and real-time application integration.
Mock Technical Interviews
Conduct mock interviews focusing on system design, algorithmic skills, and detailed explanations of generative AI projects.
Case Studies and Problem-Solving Exercises
Work on problem-solving exercises and case studies related to generative AI, such as designing a custom LLM-based solution for a business case.
Advanced Coding Challenges and Whiteboarding
Tackle coding exercises involving transformers, NLP, and embeddings.
Practice whiteboarding sessions to articulate solutions for architecture-related questions in generative AI.
Behavioral and Soft Skills Practice
Prepare for behavioral interviews with a focus on communication, teamwork, and real-life examples of handling complex projects.
Market Research on Compensation
Research industry standards for generative AI roles based on region, company size, and role requirements.
Highlighting Unique Skills
Emphasize specialization in advanced generative AI topics (e.g., distributed fine-tuning, embedding creation, RAG pipeline building).
Negotiation Practice
Practice negotiation scenarios to confidently discuss benefits, perks, and salary adjustments.