AI Evaluator & Prompter
Worked as an AI Evaluator & Prompter at Mercor, assessing and optimizing AI-generated text outputs. Reviewed prompts and responses in text-based datasets for accuracy, relevance, and guideline alignment, and applied prompt engineering and text ranking to improve model performance.
• Evaluated outputs of generative AI models for quality assurance
• Provided feedback and improvements on prompts and completions
• Conducted text analysis and classification to support model fine-tuning
• Collaborated on prompt engineering processes for language models