AI Data Annotation and Text Evaluation – Project Hedgehog
Worked on Project Hedgehog performing text annotation and evaluation tasks to improve the accuracy of large language models. Responsibilities included reviewing prompts, labeling text data, and evaluating AI-generated responses for relevance, clarity, and correctness. Analyzed language patterns and categorized responses according to project guidelines, following strict quality-control standards to ensure consistent, accurate labeling across datasets. This work supported improvements in the models' natural-language understanding and the reliability of their generated responses.
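Quality control for labeling consistency across annotators is commonly measured with inter-annotator agreement statistics such as Cohen's kappa. The sketch below is purely illustrative (the function, labels, and data are hypothetical, not artifacts of the project) and shows how agreement beyond chance between two annotators could be computed:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two annotators rating response relevance.
a = ["relevant", "relevant", "irrelevant", "relevant", "irrelevant"]
b = ["relevant", "irrelevant", "irrelevant", "relevant", "irrelevant"]
print(round(cohens_kappa(a, b), 3))  # → 0.615
```

A kappa near 1.0 indicates strong agreement; values well below that flag guideline ambiguities worth revisiting before labels are accepted into the dataset.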