AI Response Evaluator
Worked on language-quality and translation-evaluation projects assessing AI-generated and human-translated content.

- Reviewed source and target texts to ensure semantic accuracy, linguistic fidelity, and tone preservation across languages.
- Evaluated translations against strict quality guidelines, flagging issues such as mistranslation, omission, added meaning, register mismatch, and contextual errors.
- Provided structured feedback and quality ratings to support the improvement of multilingual AI systems, with emphasis on consistency, instruction adherence, and cultural appropriateness.
- Worked within a guideline-driven evaluation framework where precision, objectivity, and attention to detail were critical to maintaining high standards of language performance.