AI Output Evaluator and Prompt Annotator
I regularly evaluated model outputs and spotted hallucinations across leading language models, including ChatGPT, Gemini, and Claude. This involved critically assessing AI-generated content for accuracy, coherence, and alignment with guidelines. My experience also includes prompt experimentation and data-accuracy assessment for AI training and annotation.
• Evaluated text outputs and provided clear, actionable feedback for prompt improvement
• Maintained high consistency and labeling accuracy
• Applied critical thinking to identify and flag hallucinations and factual errors
• Brought a detail-oriented mindset to ensuring data quality in AI-related tasks