Prompt Evaluation and Writing
Worked on programming-focused data labeling projects through Remotasks, annotating and validating datasets for computer science and AI model training. Tasks included code classification, function labeling, debugging-assistance annotations, and prompt–response writing for programming problems. The work demanded close attention to detail in verifying logic, syntax, and expected outputs across multiple programming languages. Collaborated on large-scale dataset projects aimed at improving AI systems' ability to understand and generate programming solutions. Maintained high quality standards by following strict task guidelines, performing peer reviews, and keeping accuracy and consistency above the 95% quality benchmark. Contributed to projects in clinical medicine and computer science applications, supporting the development of reliable AI systems.