LLM Personalization & AI Training Project
Developed a personalized conversational LLM that adapts to an individual's tone, phrasing, and style through continuous interaction. Designed and implemented a GPT-2-based Transformer architecture, tailoring model outputs to varied sentiment while maintaining topic memory. Built comprehensive training and evaluation pipelines, including SBERT-based semantic-similarity scoring and loss-curve tracking.
• Implemented dataset expansion with automated sentiment word-frequency tracking
• Integrated emotion alignment and answer-vocabulary biasing to shape the LLM's personality
• Used multi-turn memory, topic tracking, and fallback generation for natural conversation
• Managed full training and evaluation with embedding alignment and AI-detection scoring
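The SBERT-based evaluation step above amounts to comparing sentence embeddings by cosine similarity. A minimal sketch of that scoring, using toy NumPy vectors in place of real SBERT outputs (in practice the embeddings would come from something like `sentence_transformers`' `model.encode`):

```python
import numpy as np

def semantic_similarity(ref_emb: np.ndarray, gen_emb: np.ndarray) -> float:
    """Cosine similarity between a reference and a generated-sentence embedding."""
    return float(
        np.dot(ref_emb, gen_emb)
        / (np.linalg.norm(ref_emb) * np.linalg.norm(gen_emb))
    )

# Toy embeddings standing in for SBERT outputs.
reference = np.array([0.2, 0.9, 0.1])
generated = np.array([0.25, 0.85, 0.05])
score = semantic_similarity(reference, generated)  # close to 1.0 for similar sentences
```

Scores near 1.0 indicate the generated reply stays semantically close to the reference; tracking this alongside the loss curve gives a semantic (not just token-level) view of training progress.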
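Answer-vocabulary biasing can be implemented by adding a constant offset to the logits of preferred tokens before sampling. A hedged sketch of that idea (the vocabulary, bias value, and function names here are illustrative, not the project's actual code):

```python
import numpy as np

def bias_logits(logits: np.ndarray, vocab: list[str],
                preferred: set[str], bias: float = 2.0) -> np.ndarray:
    """Return a copy of logits with a fixed bias added to preferred tokens."""
    out = logits.copy()
    for i, token in enumerate(vocab):
        if token in preferred:
            out[i] += bias
    return out

# Illustrative vocabulary; biasing "greetings" makes it the most likely choice.
vocab = ["hello", "hi", "greetings", "yo"]
logits = np.array([1.0, 1.0, 1.0, 1.0])
biased = bias_logits(logits, vocab, preferred={"greetings"})
probs = np.exp(biased) / np.exp(biased).sum()  # softmax over biased logits
```

Nudging logits rather than hard-filtering keeps generation fluent while steering word choice toward the target persona's characteristic vocabulary.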