AI Content Evaluation & Prompt Development (Independent Projects)
In this independent project, I evaluated and scored AI-generated responses against structured criteria, identified failure points, and produced improved outputs to enhance model performance. I also designed and tested prompt variations across different instruction styles, applying consistent annotation standards to a range of text-based tasks.
• Evaluated AI model outputs for accuracy, clarity, and reasoning
• Fact-checked and corrected misleading or incomplete responses
• Designed prompt variations to test model performance
• Applied consistent standards across Q&A, summarization, and reasoning tasks
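The structured scoring described above can be illustrated with a minimal sketch; the criteria names, weights, and 1–5 scale below are hypothetical placeholders, not the project's actual rubric:

```python
# Hypothetical rubric: criteria and weights are illustrative only.
CRITERIA = {"accuracy": 0.5, "clarity": 0.3, "reasoning": 0.2}

def score_response(ratings: dict) -> float:
    """Combine per-criterion ratings (1-5) into a weighted score."""
    for name in CRITERIA:
        if name not in ratings:
            raise ValueError(f"missing rating for {name!r}")
        if not 1 <= ratings[name] <= 5:
            raise ValueError(f"rating for {name!r} out of 1-5 range")
    # Weighted sum keeps the final score on the same 1-5 scale.
    return sum(CRITERIA[name] * ratings[name] for name in CRITERIA)

print(round(score_response({"accuracy": 4, "clarity": 5, "reasoning": 3}), 2))
```

Applying the same rubric to every output keeps scores comparable across Q&A, summarization, and reasoning tasks.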