AI Data Annotator & LLM Response Evaluator (Practice & Professional Overlap)
Evaluated AI-generated outputs and responses for clarity, logical consistency, and accuracy. Applied defined rules and guidelines to assess AI-generated text, identifying errors and inconsistencies, and provided detailed feedback to improve model performance and annotation quality.
• Followed stringent, structured guidelines throughout assessment.
• Worked with large text datasets while maintaining high evaluation accuracy.
• Collaborated with cross-functional teams to continuously improve annotation workflows.
• Used proprietary internal and Excel-based tools for evaluation and quality assurance.