Multimodal AI Training: Autonomous Driving + Conversational AI Dataset
- Led annotation of 50K+ multimodal data samples powering self-driving perception systems and multilingual chatbots
- CV: Annotated 30K urban scene images/videos (2D bounding boxes + segmentation) with 98.7% QA accuracy
- NLP: Labeled 20K user queries for intent classification (EN/ES/IT) across automotive and retail domains
- Managed 5 annotators, authored annotation guidelines, and reduced error rate by 22% through iterative feedback
- Key outcome: trained models achieved a <0.15% false positive rate in production