Independent AI Content Evaluation Project
- Evaluated AI-generated written responses for clarity, factual accuracy, tone, grammar, and logical consistency.
- Compared multiple AI outputs and identified strengths, inaccuracies, bias risks, and communication gaps.
- Edited and rewrote AI-generated text to improve readability and accessibility.
- Applied structured rating criteria and documented recurring feedback patterns.
- Assessed response coherence, quality, and adherence to instructions.
- Practiced prompt engineering to test how wording affects AI responses.
- Conducted fact-checking using academic and public sources.
- Reported on common response issues, including repetition and hallucinations.