Text Data Labelling & Annotation Quality Review (Academic + project work)
During my MSc in Data Science and Artificial Intelligence, I worked with multiple text-based datasets that required careful reading, interpretation, and annotation. In an AI-bias research project, I reviewed text-classification outputs, checked whether model predictions aligned with the intended meaning, and analysed inconsistencies caused by imbalanced training data. This involved manually verifying labels, rewriting ambiguous text, and documenting corrections clearly for both technical and non-technical audiences. Throughout my academic work, I regularly cleaned and prepared text data for analysis, corrected mislabelled entries, reviewed model-generated text, and summarised findings accurately. I frequently produced written explanations, project documentation, and evaluation reports, which strengthened my ability to write clear, concise, standalone text. These tasks demanded strong attention to detail, precise written English, and sound judgement about whether a piece of text should be fixed, discarded, or relabelled: skills directly relevant to AI training and text-labelling workflows.
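The fix/discard/relabel judgement described above can be sketched as a small Python audit pass. This is a minimal illustration only; the records, cue lists, and decision rules are invented for the example and do not come from the project itself.

```python
# Minimal sketch (hypothetical data and rules) of a label-audit pass:
# each annotated record is routed to "keep", "relabel", or "discard".

records = [
    {"text": "The service was excellent!", "label": "negative"},
    {"text": "", "label": "positive"},
    {"text": "Delivery was slow and the item arrived damaged.", "label": "negative"},
]

# Illustrative sentiment cues (assumption, not a real lexicon).
POSITIVE_CUES = {"excellent", "great", "wonderful"}
NEGATIVE_CUES = {"slow", "damaged", "terrible"}

def audit(record):
    """Return a (decision, label) pair for one annotated record."""
    text = record["text"].strip().lower()
    if not text:
        return ("discard", None)          # empty text cannot be labelled
    words = set(text.replace("!", "").replace(".", "").split())
    if words & POSITIVE_CUES and record["label"] != "positive":
        return ("relabel", "positive")    # positive cue disagrees with label
    if words & NEGATIVE_CUES and record["label"] != "negative":
        return ("relabel", "negative")    # negative cue disagrees with label
    return ("keep", record["label"])      # label is consistent with the text

for r in records:
    print(audit(r))
```

In practice such rule-based checks only flag candidates for human review; the final keep/relabel/discard decision in the work described above was always made manually.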