AI Trainer and Quality Assessor
This role involved authoring and quality-testing content for large language models, with a focus on writing system instructions, rubrics, AI responses, and model answers. Feedback and ratings were provided to improve model performance and accuracy. Subject matter areas included computational thinking, algorithms, data structures, English language, media, communication, business, and digital technology.
• Authored and evaluated AI-generated text responses
• Created system instructions and rubrics for LLMs
• Critically rated model outputs and provided structured feedback
• Managed workload independently and met project deadlines