ML Research & Applied AI – Outlier AI (Google Brain)
Participated in fine-tuning large language models (LLMs) and developing self-supervised learning mechanisms for structured tabular data. Co-developed an internal variant of the SAINT architecture focused on efficient representation learning for text-based classification tasks. Contributed to model deployment and benchmarking in real-world environments.
• Led construction and validation of model training data pipelines.
• Ensured dataset quality and diverse coverage for LLM fine-tuning.
• Conducted performance evaluations and participated in peer-reviewed research dissemination.
• Supported productionization of models using labeled data for Google Shopping ranking tasks.