AI Prompt Engineer, LLM Evaluator & Data Annotator
As an AI Prompt Engineer, LLM Evaluator, and Data Annotator, I design high-quality prompts for constructing LLM training datasets. I label, classify, and structure raw text data to ensure clean, reliable inputs for AI model development, and I evaluate, score, and provide feedback on AI-generated outputs using structured rubrics, contributing to reinforcement learning from human feedback (RLHF) pipelines.

• Authored structured prompts and grading rubrics aligned with task requirements and model objectives.
• Delivered human feedback for RLHF workflows.
• Annotated, classified, and organized large-scale text datasets for model training and evaluation.
• Iterated on datasets in response to reviewer feedback, maintaining quality and accuracy standards.