LLM Output Evaluation & NLP Data Annotation
Annotated and reviewed AI-generated text for Large Language Models (LLMs). Evaluated outputs for accuracy, relevance, and clarity, and labeled prompt-response pairs to improve model training. Delivered high-quality, consistent, and well-structured data to support fine-tuning and evaluation of AI systems.