Video Bounding Box Annotation for Human Activity Recognition
Annotated large-scale video datasets with bounding boxes to track human movements and object interactions for computer vision models. Performed frame-by-frame labeling of actions such as walking, reaching, holding, and placing objects while following strict annotation guidelines. Handled edge cases, including partial occlusions and fast motion, to maintain labeling accuracy. Completed hundreds of video segments with consistent quality checks, achieving high precision and meeting project turnaround deadlines for AI training workflows.
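The per-frame labeling described above can be sketched as a small data record plus a quality check. This is an illustrative assumption, not the project's actual schema: the field names (`track_id`, `occluded`) and the validation rule are hypothetical, showing one common way bounding-box annotations with occlusion flags are structured and sanity-checked.

```python
from dataclasses import dataclass

@dataclass
class BoxAnnotation:
    frame: int              # frame index within the video segment
    track_id: int           # stable ID so a subject can be followed across frames
    label: str              # action class, e.g. "walking", "reaching", "holding"
    x: float                # top-left corner, pixels
    y: float
    w: float                # box width, pixels
    h: float                # box height, pixels
    occluded: bool = False  # flagged when the subject is partially hidden

def validate(ann: BoxAnnotation, frame_w: int, frame_h: int) -> bool:
    """Basic quality check: box must have positive size and lie inside the frame."""
    return (
        ann.w > 0 and ann.h > 0
        and ann.x >= 0 and ann.y >= 0
        and ann.x + ann.w <= frame_w
        and ann.y + ann.h <= frame_h
    )

# Example: a "walking" box on frame 12 of a 1280x720 clip
ann = BoxAnnotation(frame=12, track_id=3, label="walking",
                    x=412.0, y=188.0, w=96.0, h=240.0)
print(validate(ann, 1280, 720))  # → True
```

Running a check like this over every labeled frame is one way consistent quality can be enforced before annotations are handed off for model training.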