AI Data Annotation & Model Evaluation Specialist
Supported large language model evaluation and training by generating prompts, reviewing responses, and annotating outputs. Performed quality assurance on training datasets and adhered to structured annotation guidelines. Contributed to prompt-testing workflows and collaborated with cross-functional teams to refine annotation standards.
• Labeled and reviewed model-generated text responses for accuracy and consistency.
• Created evaluation prompts to assess the reasoning quality of model outputs.
• Conducted quality checks on annotated datasets.
• Worked with industry-leading AI training platforms, including Scale AI and Labelbox.