Senior AI Engineer and Data Labeling Specialist
Evaluated outputs of Large Language Models (LLMs) to ensure responses were accurate, safe, and aligned with human values. Conducted rigorous assessments of AI system behavior for bias, factual accuracy, and compliance with quality guidelines. Participated in Reinforcement Learning from Human Feedback (RLHF) and reward model testing to improve model reliability and real-world performance.
• Reviewed AI-generated outputs for quality control and human alignment.
• Curated labeled datasets for region-specific model improvements.
• Executed hallucination detection protocols within QA pipelines.
• Engineered prompts for dataset annotation and AI training.