Comparing answers
In another project, I was responsible for comparing pairs of AI-generated responses to the same prompt. The objective was to determine which answer was more appropriate based on several criteria, including grammar, factual accuracy, clarity, fluency, relevance to the prompt, and cultural alignment with a Portuguese audience. This required not only linguistic precision but also critical thinking and contextual judgement. The task demanded consistent application of detailed evaluation guidelines and careful attention to subtle differences in tone, structure, and content. Although I was not given information about the overall size of the project, I completed a high volume of comparisons and received regular quality feedback from project reviewers, which helped me refine my approach and maintain a high standard of output.