AI Data Evaluation and Annotation (Volunteer, Academic, or Project-based)
Performed evaluation and annotation of AI-generated text responses for accuracy, relevance, and clarity. Supported AI model training workflows through dataset validation and prompt analysis. Compared outputs from multiple Large Language Models (LLMs) and provided quality ratings based on established guidelines.
• Assessed factual consistency and language quality of model outputs.
• Annotated datasets to improve AI training data.
• Applied prompt analysis techniques to assist with LLM fine-tuning.
• Used guideline-based reasoning to ensure annotation consistency.
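A minimal sketch of the kind of consistency check used in guideline-based annotation work (illustrative only; the function name and rating labels are hypothetical, not drawn from any specific project): Cohen's kappa measures how far two annotators' quality ratings agree beyond chance.

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items both annotators rated identically.
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement: product of each label's marginal frequencies.
    ca, cb = Counter(ratings_a), Counter(ratings_b)
    pe = sum((ca[label] / n) * (cb[label] / n) for label in set(ca) | set(cb))
    if pe == 1.0:  # degenerate case: both annotators used one identical label
        return 1.0
    return (po - pe) / (1 - pe)

# Hypothetical quality ratings from two annotators on four model outputs:
a = ["good", "bad", "good", "good"]
b = ["good", "bad", "bad", "good"]
print(cohen_kappa(a, b))  # → 0.5
```

Values near 1.0 indicate consistent application of the guidelines; low or negative values flag items where the rating rubric may need clarification.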