AI Trainer: Contract (Mercor)
I am an experienced AI Trainer and Quality Analyst with a strong background in data labeling, annotation, and AI training data evaluation. My expertise spans stress-testing AI models, creating and applying complex rubrics for LLM response evaluation, and ensuring policy compliance and safety in AI outputs. I have overseen large-scale quality assurance projects, managed diverse global teams, and contributed to continuous process optimization for leading AI companies. My skills include expert rubric generation, root cause analysis, targeted feedback, and workflow optimization, supported by hands-on experience with tools like Excel, Google Sheets, Asana, and Zendesk. I am well-versed in multiple prompting techniques, GenAI ethics, and content evaluation based on industry standards, with a proven ability to deliver high-quality, reliable training data for NLP and search quality domains.
The project involved determining the type of internal tool call that the final prompt would trigger in a model. The project had over 100 experts from multiple countries.
The project involved identifying clickbait in social media posts and classifying them into appropriate clickbait buckets. The project was run in at least 20 countries and had hundreds of raters on it. Some of the clickbait buckets we used were Like Baiting, Comment Baiting, React Baiting, Share Baiting, Copy & Paste Baiting, and Friend Tag Baiting.
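The bucket taxonomy above can be sketched as a simple label set plus a rater record. This is a hypothetical representation, not the project's actual tooling; only the bucket names come from the description, and the `label_post` record shape is assumed.

```python
from enum import Enum


class ClickbaitBucket(Enum):
    # Bucket names taken from the project description;
    # the enum itself is an illustrative representation.
    LIKE_BAITING = "Like Baiting"
    COMMENT_BAITING = "Comment Baiting"
    REACT_BAITING = "React Baiting"
    SHARE_BAITING = "Share Baiting"
    COPY_PASTE_BAITING = "Copy & Paste Baiting"
    FRIEND_TAG_BAITING = "Friend Tag Baiting"


def label_post(post_text: str, bucket: ClickbaitBucket) -> dict:
    """Attach a rater's bucket label to a post (hypothetical record shape)."""
    return {"post": post_text, "bucket": bucket.value}
```

An enum keeps the label set closed, so a rater's choice is always one of the agreed buckets rather than free text.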
The project involved cleaning and correcting audio transcriptions, mostly air traffic control (ATC) transcriptions. The project ran across multiple geographies and had hundreds of contributors.
The project involved Page Quality Evaluation of client-provided websites on dimensions such as Main Content Quality, Reputation, and E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness). It also involved onboarding and training new talent across six countries and five languages, and iterating on and improving process workflows.
The project involved crafting single- and multi-turn AI prompts and sometimes creating golden responses. The project involved a small group of raters from about a dozen countries. Evaluation metrics included Instruction Following, Relevance, Completeness, Accuracy, Safety, Context Awareness, and Writing Style & Tone. Additionally, we performed side-by-side comparison of responses on a Likert scale.
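The side-by-side (SxS) Likert comparison described above can be sketched as a small scoring record. This is a hypothetical sketch: only the metric names come from the project description, while the 7-point scale values, field names, and the averaging step are assumptions for illustration.

```python
# Metric names from the project description.
METRICS = [
    "Instruction Following", "Relevance", "Completeness",
    "Accuracy", "Safety", "Context Awareness", "Writing Style & Tone",
]

# Assumed 7-point SxS Likert scale: negative values favor response A,
# positive values favor response B, zero means no preference.
LIKERT = {
    -3: "A much better", -2: "A better", -1: "A slightly better",
    0: "About the same",
    1: "B slightly better", 2: "B better", 3: "B much better",
}


def sxs_preference(scores: dict) -> float:
    """Average per-metric SxS scores into one overall preference signal."""
    for metric, score in scores.items():
        if metric not in METRICS:
            raise ValueError(f"Unknown metric: {metric}")
        if score not in LIKERT:
            raise ValueError(f"Score out of scale: {score}")
    return sum(scores.values()) / len(scores)
```

Averaging the per-metric scores is one simple way to roll ratings up; real projects often weight metrics (e.g. Safety) differently.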
Bachelor of Technology, Mechanical Engineering
Higher Secondary Certificate, Science
Team Lead
Project Lead