Freelance AI Data Annotator & Evaluator (Remote)
I review and evaluate AI-generated outputs from large language models against detailed project guidelines. My responsibilities include annotating, validating, and correcting AI outputs to ensure accuracy and adherence to standards, performing logical-consistency checks, and labeling datasets for AI model training and improvement.
• Identify hallucinations, missing conditions, and reasoning errors in structured and text-based outputs.
• Maintain high productivity and accuracy scores in asynchronous remote workflows.
• Apply rigorous quality-assurance methods to produce high-quality training data.
• Consistently deliver reliable labeling on large-scale projects.