AI Data Annotation and Quality Evaluation Project
Performed AI data labeling and quality evaluation tasks to support machine learning model training. Responsibilities included reviewing and correcting image annotations (e.g., bounding boxes), identifying objects and their relationships, evaluating spatial reasoning outputs, assessing transcription accuracy, and selecting best-matching responses according to detailed task guidelines. Tasks required strict adherence to instructions, consistency checks, and careful attention to edge cases to ensure high-quality labeled data. Quality was maintained by following platform-specific standards, verifying annotation completeness, and excluding irrelevant or ambiguous elements.
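The bounding-box review described above can be complemented by simple automated sanity checks. The sketch below is illustrative only (not platform code); the annotation format, field names, and image dimensions are assumptions chosen for the example.

```python
def is_valid_bbox(bbox, image_width, image_height):
    """Check that a box lies within the image and has positive area.

    bbox is assumed to be (x_min, y_min, x_max, y_max) in pixels.
    """
    x_min, y_min, x_max, y_max = bbox
    return (0 <= x_min < x_max <= image_width
            and 0 <= y_min < y_max <= image_height)


# Hypothetical annotations in a simple dict format (an assumption).
annotations = [
    {"label": "car", "bbox": (10, 20, 200, 180)},   # well-formed box
    {"label": "dog", "bbox": (50, 60, 40, 300)},    # x_max < x_min: malformed
]

# Flag boxes that fail the geometric check for manual review.
flagged = [a for a in annotations
           if not is_valid_bbox(a["bbox"], image_width=640, image_height=480)]
print([a["label"] for a in flagged])  # → ['dog']
```

Checks like this catch mechanical errors (inverted or out-of-bounds coordinates) so that manual review time can focus on the semantic edge cases the guidelines call out.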