Search Engine Evaluation and Text Annotation for AI Model Improvement
As part of a large-scale AI training initiative on an online platform, I contributed to a search engine evaluation and text annotation project aimed at improving the relevance and accuracy of a large language model (LLM). My role involved classifying and rating search query responses for relevance, intent, and factual accuracy, and annotating prompt-response pairs in text datasets to support supervised fine-tuning (SFT). The project covered over 10,000 text entries, each requiring meticulous attention to detail to ensure high-quality annotations. I followed strict quality measures, including cross-validation with team leads, and maintained a 98% accuracy rate in evaluations, directly improving the model’s ability to deliver precise search results. My fluency in English, Hindi, and Gujarati enabled me to handle multilingual text annotations, ensuring culturally relevant and accurate classifications.