AI Text Annotation and Model Response Evaluation
As part of my AI learning and practical training, I worked on text annotation and response evaluation projects involving AI-generated prompts and outputs. The work involved reviewing model-generated responses; evaluating their accuracy, relevance, clarity, and adherence to instructions; and assigning quality ratings against predefined criteria. Tasks included categorising responses, identifying factual errors, assessing reasoning quality, and suggesting improvements where necessary. I also reviewed prompt-response pairs to confirm they met the expected standards of correctness and coherence. The work covered prompts and responses across multiple subject areas, which demanded close attention to detail and consistent application of the evaluation rules. Structured review methods supported quality throughout, including cross-checking outputs against task instructions and scoring objectively against defined annotation criteria.
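In practice, this workflow amounts to a rubric-driven scoring pass over each prompt-response pair. The sketch below is a minimal illustration of that idea, assuming a simple 1-5 scale; the criterion names, the `Annotation` class, and its methods are hypothetical stand-ins for the project's actual predefined parameters and tooling.

```python
from dataclasses import dataclass, field

# Hypothetical rubric: the criterion names and the 1-5 scale are
# illustrative, standing in for the project's predefined parameters.
CRITERIA = ("accuracy", "relevance", "clarity", "instruction_adherence")
SCALE = range(1, 6)  # 1 = poor, 5 = excellent

@dataclass
class Annotation:
    """One reviewed prompt-response pair with per-criterion ratings."""
    prompt: str
    response: str
    ratings: dict = field(default_factory=dict)
    notes: str = ""  # factual errors found, suggested modifications, etc.

    def rate(self, criterion: str, score: int) -> None:
        # Enforce the rubric: only known criteria, only in-scale scores.
        if criterion not in CRITERIA:
            raise ValueError(f"unknown criterion: {criterion}")
        if score not in SCALE:
            raise ValueError(f"score must be in {list(SCALE)}")
        self.ratings[criterion] = score

    def is_complete(self) -> bool:
        # Cross-check before submission: every criterion must be scored.
        return all(c in self.ratings for c in CRITERIA)

# Example review of a single prompt-response pair.
item = Annotation(
    prompt="Summarise the main causes of soil erosion.",
    response="Soil erosion is driven primarily by ...",
)
item.rate("accuracy", 4)
item.rate("relevance", 5)
item.rate("clarity", 4)
item.rate("instruction_adherence", 5)
item.notes = "Minor factual imprecision in the second sentence."
assert item.is_complete()
```

Keeping the criteria and scale in one place, and validating every score against them, is what makes ratings comparable across annotators and subject areas.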