Fishbowl
Writing prompts, rating model answers, developing rubrics for model training, rewriting responses, and editing other annotators' work.
I developed rubrics for assessing the accuracy and completeness of responses from four different models, and provided detailed feedback to project managers and junior annotators to improve data quality. I also researched robust justifications for penalizing unsafe responses, including identifying "jailbreak" conversations. Finally, I tested AI models by iteratively increasing prompt complexity, achieving an improvement of over 30%, and engineered multi-turn prompts with category, persona, semantic, and formatting constraints.
Writing prompts to elicit model failures, providing Likert-scale ratings, and rewriting the responses.
Annotating client-generated prompts in English, filtering out jailbreaks and harmful prompts.
Specialist Bootcamp, AI and Data Science
Master of Philosophy, Development Studies
AI Expert
RLHF Annotator/Reviewer