AI Data Annotator and Prompt Evaluator
Reviewed and rated AI-generated text responses using established quality metrics. Evaluated outputs for accuracy, relevance, fluency, and safety, ensuring adherence to project guidelines. Labeled and annotated text datasets to train and improve machine learning models.
• Compared AI outputs and selected the most accurate responses.
• Identified errors, bias, and inconsistencies in AI-generated content.
• Rewrote prompts to enhance AI response quality and relevance.
• Applied structured evaluation guidelines and provided feedback for model improvement.