Code Annotation for Automated Code Review System
This project focused on annotating a large dataset of source code to enhance an automated code review system. The tasks included labeling function calls, detecting and annotating syntax errors, and classifying code snippets by purpose (e.g., sorting algorithms, data manipulation). Over 20,000 code samples were annotated using CVAT and Prodigy, with an emphasis on accuracy and consistency so the AI model could better identify potential issues and provide meaningful feedback. Quality measures included systematic testing and validation of annotations before acceptance, as well as regular updates to the labeling guidelines based on feedback from the development team.
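
To make the syntax-error and function-call tasks concrete, the sketch below shows one way such samples could be pre-screened with Python's standard ast module before human annotation. It is a minimal illustration, not the project's actual pipeline; the sample snippet and the output field names are hypothetical.

```python
import ast

def prescreen(source: str) -> dict:
    """Flag syntax errors and list function calls in one code sample.

    Returns machine-generated pre-annotations (hypothetical schema)
    that a human annotator would then confirm or correct.
    """
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        # Syntax-error annotation: record location and message for review.
        return {"syntax_error": {"line": err.lineno, "msg": err.msg}}

    calls = []
    for node in ast.walk(tree):
        # Function-call annotation: capture the callee name and position.
        if isinstance(node, ast.Call):
            func = node.func
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "<unknown>")
            calls.append({"name": name, "line": node.lineno})
    return {"syntax_error": None, "function_calls": calls}

if __name__ == "__main__":
    sample = "data = sorted(rows)\nprint(len(data))\n"
    print(prescreen(sample))
```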
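
One common way to quantify the consistency goal is inter-annotator agreement; whether this project used it is an assumption, but the sketch below computes Cohen's kappa for two annotators' category labels on the same snippets, with made-up labels for illustration.

```python
from collections import Counter

def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of samples where both annotators agree.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the two label distributions were independent.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1.0 - p_e)

if __name__ == "__main__":
    # Hypothetical category labels from two annotators on four snippets.
    a = ["sorting", "data_manipulation", "sorting", "io"]
    b = ["sorting", "data_manipulation", "io", "io"]
    print(f"kappa = {cohens_kappa(a, b):.3f}")
```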