Video labelling
This project focuses on annotating and reviewing egocentric (first-person) videos that capture humans performing physical tasks in real-world environments. Each video is segmented into distinct action-based events representing specific activities carried out by the camera wearer (ego). The overall goal is to produce high-quality, structured annotations that accurately describe human actions, object interactions, and temporal boundaries within each segment, enabling effective training of computer vision and activity recognition models.

As a reviewer, I evaluated segment-level text annotations to ensure accuracy and consistency. My responsibilities included verifying and correcting action labels, identifying the primary activity performed by the ego, confirming the correct objects involved in each interaction, and ensuring timestamps precisely aligned with the start and end of each segment. I also ensured annotations adhered strictly to project guidelines, resolving ambiguities and maintaining uniform labeling standards across the dataset.

I have reviewed and annotated over 500 video segments, demonstrating extensive experience with large-scale data annotation workflows and sustained consistency across diverse task scenarios. Quality assurance was maintained through strict guideline compliance, consistency checks, and attention to detail in action-object mapping and timestamp accuracy. Emphasis was placed on minimizing labeling errors, ensuring inter-annotator consistency, and maintaining high precision in segment boundaries to support reliable model training outcomes.
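The timestamp and consistency checks described above can be sketched in code. The schema and function names below are hypothetical illustrations (the project's actual annotation format is not shown here); the sketch assumes each segment carries an action label, an object list, and start/end times in seconds, and flags the kinds of errors a reviewer would catch.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    # Hypothetical annotation schema for illustration only.
    action: str                      # primary activity performed by the ego
    objects: list = field(default_factory=list)  # objects involved in the interaction
    start: float = 0.0               # segment start time, seconds
    end: float = 0.0                 # segment end time, seconds

def validate_segments(segments, video_duration):
    """Return a list of error strings for one video's segment annotations."""
    errors = []
    for i, seg in enumerate(segments):
        if not seg.action:
            errors.append(f"segment {i}: missing action label")
        if seg.start >= seg.end:
            errors.append(f"segment {i}: start must precede end")
        if seg.start < 0 or seg.end > video_duration:
            errors.append(f"segment {i}: timestamps outside video bounds")
    # Flag overlapping segments: temporal boundaries should not intersect.
    ordered = sorted(segments, key=lambda s: s.start)
    for a, b in zip(ordered, ordered[1:]):
        if b.start < a.end:
            errors.append(f"overlap between '{a.action}' and '{b.action}'")
    return errors
```

A check like this complements, rather than replaces, manual review: it catches structural errors (inverted or out-of-bounds timestamps, overlaps, empty labels), while judging whether the action label actually matches what the ego is doing still requires a human reviewer.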