AI Data Annotation – Audio Transcription (RWS)
Performed accurate transcription of Bengali audio data, converting speech into structured text while maintaining high precision and adherence to guidelines.
Software Developer with experience in AI training data, backend systems, model evaluation, and test case design. I have worked extensively on large-scale AI training pipelines, contributing to RLHF and supervised fine-tuning projects by analyzing model failures, authoring high-quality golden responses, and designing adversarial prompts to expose weaknesses in language models. My background includes hands-on work with Python, JavaScript, and data processing tools, along with practical experience in OCR, computer vision, and performance optimization.

In addition to backend and AI engineering work, I bring solid experience in data labeling and annotation. I have worked on projects involving image labeling, audio evaluation, and transcription, assessing both human and AI-generated speech for tonality, emphasis, intonation, rhythm, and overall quality. My ability to follow detailed annotation guidelines, maintain consistency, and provide nuanced feedback makes me highly effective at improving model performance and training data quality.
Annotated images using bounding boxes to identify and classify objects for computer vision models. Ensured high annotation accuracy, consistency, and compliance with labeling guidelines across datasets. Contributed to improving object detection model performance.
Evaluated human and AI-generated audio samples to improve speech models. Provided detailed qualitative feedback on tonality, emphasis, intonation, rhythm, pronunciation, and naturalness. Followed strict evaluation rubrics to ensure consistency across large datasets and contributed to improving model output quality.
Designed adversarial prompts and evaluated outputs for failure modes in state-of-the-art language models. Provided high-quality corrective feedback to improve model reasoning, reduce hallucination, and fix tool-usage errors. Authored golden responses and contributed to both RLHF (Reinforcement Learning from Human Feedback) and Supervised Fine-Tuning (SFT) pipelines.
• Designed adversarial prompts and analyzed model performance.
• Authored golden responses, correcting reasoning and tool-usage failures.
• Participated in SFT and RLHF feedback pipelines.
• Completed over 580 production tasks to strict quality standards.
Bachelor of Science, Computer Science and Engineering
Higher Secondary Certificate, General Education (Science)
Coder
Backend Developer