Machine Learning Evaluator
Enhancing model accuracy using deep learning and feature extraction techniques (optimizing Kaggle competition problem statements)
I am an AI/ML engineer with real-world experience contributing to applied AI systems at Alignerr and Soul AI, where I worked directly with datasets, annotation workflows, and model evaluation pipelines. My background in NLP, embeddings, and classification models gives me a strong understanding of how high-quality labeled data affects downstream model accuracy, consistency, and reliability.

Through my work on skill-matching systems, developer intelligence modules, resume analysis, and mood-based recommendation engines, I have routinely performed data cleaning, validation, error flagging, edge-case handling, and guideline-based annotation. I bring a detail-oriented, structured approach to labeling tasks, ensuring each dataset follows the exact taxonomy, policies, and quality standards expected of production-grade AI systems.

With hands-on experience preparing training data, validating ML model outputs, and analyzing failure modes, I can reliably identify inconsistencies, ambiguous cases, and patterns that impact model performance. This combination of technical ML understanding and careful annotation discipline makes me well suited for AI training data roles at Scale AI, Outlier, Remotasks, TensorOps, and similar platforms.
Optimizing LLM performance through RLHF
Bachelor of Technology, Computer Science & Engineering
Faculty Research Intern