Research & Evaluation Contributor (Prolific)
I participated in AI and academic research studies requiring critical evaluation and detailed written responses for model assessment. My work involved close reading, comprehension, and producing well-articulated feedback and answers used in AI benchmarking. Performance was measured by approval ratings and consistency across varied test cases.
• Provided high-quality text evaluations for research tasks.
• Demonstrated strong comprehension and attention to detail.
• Benchmarked AI outputs against evaluation rubrics.
• Consistently met strict standards for clarity and conciseness.