Comparing two responses
I worked on a Mindrift data annotation project comparing two AI-generated responses and choosing the better one based on accuracy, relevance, clarity, and adherence to the instructions. My role involved carefully reviewing text outputs, spotting factual or logical errors, rating each response against set criteria, and writing brief justifications for my decisions. Because the project handled large volumes of data, it demanded strong attention to detail, consistent application of the guidelines, and on-time delivery of assigned tasks. I maintained high-quality work by double-checking my annotations and applying reviewer feedback, helping improve the overall performance and reliability of the AI models.