Model Response Evaluator / Prompt Writer
I evaluated AI-generated text responses for accuracy, coherence, and safety in a structured annotation workflow. I wrote and refined prompts to improve model outputs and provided feedback for AI fine-tuning. I closely followed annotation guidelines to ensure high-quality text data labeling for training and testing language models.
• Rated and compared model-generated text outputs.
• Identified biases, hallucinations, and errors in AI responses.
• Designed and optimized test cases and prompts.
• Verified factual accuracy and compliance with ethical standards.