LLM Text Annotation and AI Response Evaluation Project
I worked on an AI training project focused on improving Large Language Models through high-quality text annotation and response evaluation. Tasks included labeling and classifying text data, evaluating AI-generated responses for accuracy, relevance, tone, and safety, and ranking multiple outputs against quality guidelines. I contributed to improved model performance by identifying errors, biases, and inconsistencies in generated content while maintaining strict quality standards.