Coding Tasker
At Outlier.ai, I was responsible for generating coding prompts and evaluating AI-generated responses as part of our efforts to enhance large language models (LLMs). This project focused on improving the accuracy and relevance of coding-related tasks, ensuring that the models could handle complex coding challenges effectively.

Role and Responsibilities:
- Generated diverse and challenging coding prompts across multiple programming languages to train AI models.
- Evaluated AI-generated coding responses, assessing their correctness, efficiency, and adherence to prompt requirements.
- Worked closely with the data labeling and engineering teams to refine prompt structures and improve response evaluation criteria.
- Analyzed common errors in AI responses and provided feedback to improve the model's performance in future iterations.
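To illustrate the kind of correctness check involved in evaluating a coding response, here is a minimal sketch. It is not Outlier's actual tooling; the `evaluate_response` helper and the convention that a candidate defines a `solve` function are hypothetical, assumed only for this example.

```python
# Hypothetical sketch: score an AI-generated Python solution against
# (input, expected-output) reference test cases. Not actual Outlier tooling.

def evaluate_response(candidate_source: str, test_cases: list) -> dict:
    """Execute candidate code, then run its solve() on each test case."""
    namespace = {}
    try:
        exec(candidate_source, namespace)  # load the candidate's code
    except Exception as err:
        return {"passed": 0, "total": len(test_cases), "error": repr(err)}

    solve = namespace.get("solve")
    if not callable(solve):
        return {"passed": 0, "total": len(test_cases), "error": "no solve()"}

    passed = 0
    for args, expected in test_cases:
        try:
            if solve(*args) == expected:
                passed += 1
        except Exception:
            pass  # a runtime failure on this case counts as a miss
    return {"passed": passed, "total": len(test_cases), "error": None}


# A candidate that doubles its input: correct on 2 of the 3 cases below.
candidate = "def solve(x):\n    return x * 2\n"
report = evaluate_response(candidate, [((1,), 2), ((3,), 6), ((5,), 11)])
# report → {"passed": 2, "total": 3, "error": None}
```

In practice, scoring also weighed efficiency and adherence to the prompt, which this pass/fail sketch does not capture.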