Data Annotator
I worked as a Data Annotator on the Atlas Capture project, annotating short first-person (egocentric) video clips of everyday human activities and physical tasks for AI training in robotics and computer vision. My specific tasks included:

- Performing semantic segmentation on video frames to identify and outline objects, hands, tools, and actions.
- Labeling temporal segments by breaking videos into distinct action events and assigning accurate, descriptive labels or categories.
- Correcting machine-generated pre-labels to ensure precision in action sequencing, object interactions, and edge cases such as occlusions, rapid movements, and ambiguous actions.

I followed detailed annotation guidelines and maintained high consistency through self-review, double-checks, and quality standards such as pixel-accurate boundaries and inter-annotator agreement. This work contributes to training robust AI models for real-world understanding and robotic task execution. The project is ongoing, allowing me to continuously refine my video data labeling skills.
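To make the agreement criterion concrete, here is a minimal sketch of pixel-level Intersection-over-Union (IoU), one common way to quantify how closely two annotators' segmentation masks overlap. The function name, the 0/1 mask encoding, and the toy masks are illustrative assumptions, not artifacts of the project itself.

```python
def mask_iou(mask_a, mask_b):
    """Return IoU of two same-sized binary masks (lists of 0/1 rows).

    IoU = |A ∩ B| / |A ∪ B|; 1.0 means identical masks.
    """
    intersection = 0
    union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            if a and b:
                intersection += 1  # pixel labeled by both annotators
            if a or b:
                union += 1  # pixel labeled by at least one annotator
    return intersection / union if union else 1.0

# Hypothetical example: two annotators outline the same object,
# but annotator 2 misses one pixel of the boundary.
annotator_1 = [[0, 1, 1],
               [0, 1, 1],
               [0, 0, 0]]
annotator_2 = [[0, 1, 1],
               [0, 1, 0],
               [0, 0, 0]]

print(mask_iou(annotator_1, annotator_2))  # 0.75
```

A per-frame score like this is typically averaged across frames or object classes, with low-agreement frames flagged for re-review.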