LLM Fine-tuning and Prompt Engineering (Generative AI Professional Track)
I designed and deployed end-to-end Retrieval-Augmented Generation (RAG) pipelines over multi-document corpora using large language models. My work centered on building and fine-tuning LLM-based systems, applying advanced prompt engineering to NLP tasks such as classification and summarization, and training generative models in human-in-the-loop scenarios within a structured, production-like environment.
• Engineered structured prompts for NLP classification and summarization tasks.
• Fine-tuned LLMs with Hugging Face Transformers and PyTorch for custom text-generation applications.
• Designed pipelines with explicit data collection, labeling, and evaluation cycles.
• Drove iterative model improvement through feedback loops that fed human judgments into supervised fine-tuning.
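The retrieve-then-prompt pattern behind the RAG pipelines above can be sketched in miniature. This is a self-contained illustration only: the bag-of-words scoring, the tiny in-memory corpus, and the prompt wording are assumptions standing in for a real embedding model and vector store.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline uses a neural encoder.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    # Assemble the retrieved passages into a grounded generation prompt.
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

corpus = [
    "RAG pipelines retrieve supporting passages before generation.",
    "Prompt engineering shapes model behavior without weight updates.",
    "Fine-tuning updates model weights on task-specific data.",
]
query = "What does a RAG pipeline retrieve?"
prompt = build_prompt(query, retrieve(query, corpus))
```

In a production system, `build_prompt`'s output would be sent to the LLM; here the emphasis is only on the retrieval and prompt-assembly stages.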
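A structured classification prompt of the kind described above can look like the following sketch; the label set, few-shot examples, and instruction wording are hypothetical placeholders, not a fixed template.

```python
def classification_prompt(
    text: str,
    labels: list[str],
    examples: list[tuple[str, str]],
) -> str:
    # Few-shot prompt: task instruction, allowed labels,
    # worked examples, then the input awaiting a label.
    blocks = [f"Classify the text into exactly one label: {', '.join(labels)}."]
    for sample, label in examples:
        blocks.append(f"Text: {sample}\nLabel: {label}")
    blocks.append(f"Text: {text}\nLabel:")
    return "\n\n".join(blocks)

prompt = classification_prompt(
    "The battery died after two days.",              # hypothetical input
    ["positive", "negative"],                        # hypothetical label set
    [("Great screen and fast shipping.", "positive")],
)
```

Ending the prompt with a bare `Label:` constrains the model to complete with one of the listed labels, which simplifies downstream parsing of the response.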
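The human-in-the-loop feedback cycle can be sketched as a filtering step that converts reviewer judgments into supervised fine-tuning pairs. The record schema, rating scale, and threshold here are illustrative assumptions.

```python
def build_sft_dataset(candidates: list[dict]) -> list[dict]:
    # Keep only model outputs a human reviewer rated highly, and turn
    # them into (prompt, completion) pairs for supervised fine-tuning.
    # Field names and the 1-5 rating scale are hypothetical.
    return [
        {"prompt": c["prompt"], "completion": c["output"]}
        for c in candidates
        if c.get("human_rating", 0) >= 4
    ]

reviewed = [
    {"prompt": "Summarize the report.", "output": "A concise summary.", "human_rating": 5},
    {"prompt": "Summarize the report.", "output": "Off-topic rambling.", "human_rating": 2},
]
sft_data = build_sft_dataset(reviewed)
```

Each iteration of the loop regenerates this dataset from fresh human judgments, so the fine-tuning corpus improves alongside the model.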