Translator / AI Evaluator
I reviewed and evaluated multilingual AI-generated outputs for accuracy, fluency, and cultural relevance. My work focused on assessing LLM responses and improving overall language quality, and I contributed to the development of robust, language-specific evaluation guidelines for large-scale AI deployments.
• Ensured output accuracy and language quality across multiple languages
• Strengthened LLM evaluation methodologies with language-specific guidelines
• Supported continuous model improvement through structured feedback cycles