Atlas Data Labeling – Video Action Annotation (AI Training Project)
I worked on video action annotation tasks for Atlas Data Labeling, contributing to the training of computer vision models by labeling human-object interactions in short video segments. My role involved reviewing videos, identifying the actions performed by the actor ("ego") with different objects, segmenting each video into appropriate time intervals, and writing precise labels describing each actor-object interaction.

My responsibilities included:

- Video segmentation: breaking videos into logical segments based on when a new action begins or ends.
- Action labeling: writing concise labels in the approved verb-object format (e.g., pick up cloth, adjust frame, place object on table); an illustrative sketch of this record format appears at the end of this section.
- Dense and coarse annotation: applying the correct level of detail depending on whether the interaction required multiple atomic actions or a single summarized action.
- Quality control: ensuring labels followed project guidelines, described only observable interactions, and contained no hallucinated actions.
- Object identification: correctly identifying the tools, surfaces, and objects involved in each interaction while maintaining labeling consistency.

Through this project I gained strong experience in video annotation, action recognition labeling, segmentation rules, and guideline-based annotation for AI model training. My work focused on producing high-quality annotations that improve machine learning models' ability to understand human actions and object manipulation in real-world environments. It also strengthened my skills in computer vision data annotation, guideline compliance, quality assurance, and attention to detail, all of which are essential for building large-scale AI training datasets.
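To make the workflow concrete, here is a minimal Python sketch of what a segment record and a guideline check might look like. The schema fields (video_id, start_s, end_s, label, granularity), the example clip ID, and the APPROVED_VERBS list are all illustrative assumptions for this sketch, not the project's actual format or guideline set.

# A minimal sketch of one way segment records could be represented.
# All field names and values below are hypothetical examples.
segments = [
    {"video_id": "clip_0042", "start_s": 0.0, "end_s": 2.4,
     "label": "pick up cloth", "granularity": "dense"},
    {"video_id": "clip_0042", "start_s": 2.4, "end_s": 5.1,
     "label": "wipe table", "granularity": "dense"},
    {"video_id": "clip_0042", "start_s": 5.1, "end_s": 6.0,
     "label": "place cloth on table", "granularity": "coarse"},
]

# Hypothetical quality-control check: every label must start with a verb
# from an approved list, and segments must not overlap or run backwards.
APPROVED_VERBS = {"pick", "place", "adjust", "wipe", "open", "close"}

def validate(segments):
    errors = []
    prev_end = 0.0
    for seg in segments:
        verb = seg["label"].split()[0]
        if verb not in APPROVED_VERBS:
            errors.append(f"{seg['video_id']}: unapproved verb '{verb}'")
        if seg["start_s"] < prev_end:
            errors.append(f"{seg['video_id']}: segment overlaps previous one")
        if seg["end_s"] <= seg["start_s"]:
            errors.append(f"{seg['video_id']}: non-positive duration")
        prev_end = seg["end_s"]
    return errors

print(validate(segments) or "all segments pass")

In practice, checks like these mirror the manual review steps described above (approved verb-object phrasing, clean segment boundaries); automating them catches formatting slips before annotations reach the training pipeline.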