Video Annotator
Project Scope: Atlas Capture – Video Annotation

A large-scale AI training initiative centered on video annotation of hand actions. The objective was to create high-quality datasets for training computer vision models in gesture recognition, robotics, and human–computer interaction. The work involved annotating thousands of short video clips across varied environments, lighting conditions, and participant demographics.

Specific Data Labeling Tasks

Hand Action Annotation:
- Identified and labeled distinct hand movements (e.g., pointing, grasping, waving, tapping).
- Differentiated between single-hand and two-hand actions.
- Marked start and end frames for each action to ensure temporal precision.

Contextual Metadata:
- Tagged relevant background elements (e.g., objects being manipulated).
- Classified clips by action type, duration, and complexity.

Quality Control Tags:
- Flagged unclear or ambiguous clips for secondary review.
- Applied “no action” labels where appropriate to maintain dataset integrity.
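The annotation record described above (action label, single- vs. two-hand flag, start/end frames, contextual objects, and QC status) can be sketched as a simple data structure. This is a minimal illustrative schema, not the project's actual format; the class name, fields, and example values are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class HandActionAnnotation:
    """One labeled hand action within a video clip (hypothetical schema)."""
    clip_id: str
    action: str             # e.g., "pointing", "grasping", "waving", "tapping"
    start_frame: int        # first frame of the action
    end_frame: int          # last frame of the action (inclusive)
    two_handed: bool = False
    objects: list = field(default_factory=list)  # background objects being manipulated
    needs_review: bool = False                   # flagged for secondary QC review

    def __post_init__(self):
        # Temporal precision check: the labeled span must be well ordered.
        if self.end_frame < self.start_frame:
            raise ValueError("end_frame must be >= start_frame")

    @property
    def duration_frames(self) -> int:
        return self.end_frame - self.start_frame + 1

# Example record: a two-hand grasp spanning frames 12-48.
ann = HandActionAnnotation(
    clip_id="clip_0001",
    action="grasping",
    start_frame=12,
    end_frame=48,
    two_handed=True,
    objects=["mug"],
)
print(ann.duration_frames)  # → 37
```

Validating frame ordering at construction time catches the most common temporal-labeling error (reversed spans) before records enter the dataset; a “no action” clip would simply have no annotation records.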