Data Annotator
Contributed to an AI data annotation and model training project focused on improving prompt–response accuracy and contextual understanding in large language models. The project followed the STF (Source–Task–Feedback) framework to ensure data quality and performance optimization.

Source: Collected and curated datasets from multilingual text and visual content to build diverse, representative training material.

Task: Created, labeled, and evaluated prompt–response pairs for natural language generation, reasoning, and summarization tasks; annotated data for sentiment analysis, intent detection, and content classification.

Feedback: Conducted quality assurance reviews, provided structured feedback on model outputs, and refined prompts to improve response accuracy, coherence, and tone alignment.

This project combined human annotation expertise with iterative AI evaluation to strengthen model reliability and reduce error rates across English and multilingual datasets. Tools used inclu