AI Data Annotation & Code Evaluation (Practice)
Performed AI training and data-labeling tasks focused on evaluating and improving model-generated code in Python and JavaScript. Classified, ranked, and validated AI outputs by identifying logical errors, flagging performance issues, and verifying correct handling of edge cases. Compared multiple candidate responses and selected the most accurate and efficient solution using structured reasoning, similar to RLHF preference-ranking workflows. Maintained high quality standards by prioritizing correctness, readability, and real-world applicability of generated code.
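The comparison-and-ranking workflow described above can be sketched in Python. This is a minimal illustration, not tooling from the actual role: the two candidate functions and the test cases are hypothetical, standing in for model-generated responses being ranked by how many edge-case tests they pass.

```python
def candidate_a(nums):
    # Hypothetical model output: logically correct on non-empty input,
    # but crashes on the empty-list edge case.
    return max(nums)

def candidate_b(nums):
    # Hypothetical model output: handles the empty-list edge case explicitly.
    return max(nums) if nums else None

def score(candidate, cases):
    """Count how many (args, expected) test cases the candidate passes."""
    passed = 0
    for args, expected in cases:
        try:
            if candidate(*args) == expected:
                passed += 1
        except Exception:
            pass  # an unhandled crash counts as a failed case
    return passed

# Test cases deliberately include an edge case (empty input).
cases = [(([3, 1, 2],), 3), (([],), None), (([-5],), -5)]

# Rank candidates by correctness, best first -- a simplified analogue of
# preference ranking in RLHF-style annotation.
ranking = sorted([candidate_a, candidate_b],
                 key=lambda c: score(c, cases), reverse=True)
print([c.__name__ for c in ranking])  # → ['candidate_b', 'candidate_a']
```

In practice the annotator's structured reasoning covers more than pass counts (readability, efficiency, idiomatic style), but automated edge-case checks like this are a common first filter before a human comparison.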