AI Data Labeling & Model Evaluation Project
Annotated and evaluated text datasets to support machine learning model training, covering tasks such as text classification, sentiment analysis, and response ranking. Applied structured annotation guidelines to ensure labeling consistency and accuracy, and performed quality-assurance checks to identify and correct errors. Contributed to improved model performance by flagging ambiguous cases and providing feedback on data quality and labeling criteria.