AI Trainer & Data Annotator – Prompt Evaluation and Text Rating
Wrote and evaluated prompts for machine learning models to assess the usefulness and relevance of generated outputs. Reviewed and critically analyzed LLM responses against their prompts, checking for clarity, accuracy, and value. Applied structured rating criteria to judge output utility and delivered written feedback to guide model improvement.
• Designed prompts for structured AI evaluation.
• Assessed text outputs for correctness and clarity.
• Provided reasoned preferences in pairwise model output comparisons.
• Delivered concise written feedback for dataset curation.