Medical Text Annotation for Clinical AI Model Training
Worked on a large-scale clinical NLP project aimed at training AI models to extract and interpret medical information from unstructured text such as electronic health records (EHRs) and clinical notes. Responsibilities included labeling entities such as symptoms, diagnoses, medications, and procedures; classifying clinical outcomes; and summarizing patient histories for downstream tasks. Maintained inter-annotator agreement above 95% by adhering to detailed medical annotation guidelines and participating in regular calibration sessions. The project covered more than 50,000 data points and contributed directly to training clinical decision support models used by healthcare institutions.
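Inter-annotator agreement on token-level entity labels is often reported with a chance-corrected statistic such as Cohen's kappa; the source does not specify which metric was used, so the sketch below (with illustrative label data) shows one common way such a figure can be computed:

```python
from collections import Counter

def cohen_kappa(a, b):
    """Chance-corrected agreement between two annotators' label sequences."""
    assert len(a) == len(b) and a, "sequences must be non-empty and aligned"
    n = len(a)
    # Observed agreement: fraction of tokens with identical labels.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[label] * cb[label] for label in ca) / (n * n)
    return (po - pe) / (1 - pe)

# Illustrative (hypothetical) entity labels from two annotators.
ann1 = ["SYMPTOM", "SYMPTOM", "O", "MEDICATION", "O",
        "DIAGNOSIS", "O", "O", "PROCEDURE", "O"]
ann2 = ["SYMPTOM", "O", "O", "MEDICATION", "O",
        "DIAGNOSIS", "O", "O", "PROCEDURE", "O"]

print(f"kappa = {cohen_kappa(ann1, ann2):.3f}")  # → kappa = 0.846
```

Raw percent agreement (here 9/10 = 90%) overstates reliability when one label such as `O` dominates, which is why chance-corrected metrics are standard in annotation work.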