AI Text Evaluation & Annotation for Language Model Tuning
As part of my work at Alignerr, I contributed to a large-scale AI training project aimed at improving the performance of generative language models. My tasks included evaluating AI-generated responses for coherence, factuality, and tone; classifying prompts and outputs; and providing structured feedback used to fine-tune model behavior. I also labeled datasets for educational applications, which involved identifying correct answers, generating sample questions, and rating the complexity or emotional tone of responses. The project covered thousands of data samples and was subject to strict quality assurance measures, such as cross-check reviews and calibration sessions. A small sketch of the kind of annotation record and agreement check this workflow relies on appears below.
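To illustrate the shape of this work, the sketch below shows a structured annotation record with rubric scores and a Cohen's kappa agreement calculation of the kind used to calibrate reviewers during cross-check rounds. The schema, field names, and rating scales are assumptions for illustration only, not the actual Alignerr tooling.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AnnotationRecord:
    """One reviewer's structured judgment of a model response (hypothetical schema)."""
    sample_id: str
    annotator_id: str
    coherence: int      # e.g. 1-5 rubric score
    factuality: int     # e.g. 1-5 rubric score
    tone: str           # e.g. "neutral", "formal", "empathetic"
    notes: str = ""     # free-text feedback passed back for model tuning

def cohen_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Chance-corrected agreement between two annotators on the same samples."""
    if len(labels_a) != len(labels_b) or not labels_a:
        raise ValueError("label lists must be the same non-zero length")
    n = len(labels_a)
    # Observed agreement: fraction of samples where both annotators chose the same label.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement: chance of matching given each annotator's label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    if expected == 1.0:  # both annotators used a single identical label throughout
        return 1.0
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    # Hypothetical cross-check: two annotators rate tone on the same five samples.
    annotator_1 = ["neutral", "formal", "neutral", "empathetic", "formal"]
    annotator_2 = ["neutral", "formal", "formal", "empathetic", "formal"]
    print(f"tone agreement (Cohen's kappa): {cohen_kappa(annotator_1, annotator_2):.2f}")
```

In practice, a low kappa on a batch would trigger a calibration session where annotators review disagreements against the rubric before continuing.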