Video Data Annotation & Evaluation (Associate AI Engineer, AI Singapore)
Coordinated and executed batch video generation experiments to produce human-labeled ground truth data, manually annotating 380 videos to benchmark the quality of a GenAI video generation pipeline. The labeled dataset enabled rigorous evaluation of pipeline outputs against industry-standard metrics.
• Annotated videos by hand to support evaluation and benchmarking of generated outputs.
• Established ground-truth labels as the reference for automated evaluation.
• Ensured label accuracy and reliability through manual review.
• Worked with internal/proprietary tooling on Azure infrastructure.