AI Prompt Evaluation and Data Annotation
Worked as an AI Data Trainer, evaluating and improving large language model outputs. Analyzed prompt-response pairs and rated AI-generated answers for accuracy, relevance, and clarity. Followed detailed annotation guidelines to produce consistent, high-quality labeled datasets used for model training and evaluation.