Advanced Video Data Annotation for Surgical AI and LLM Integration
This project involved annotating a large-scale dataset of minimally invasive surgical videos to train and fine-tune a next-generation AI model. The core objective was twofold: first, to enable real-time surgical instrument and anatomy detection for computer-assisted intervention; and second, to create a structured dataset for a specialized Healthcare LLM to generate accurate surgical reports and post-operative summaries.

Specific Data Labeling Tasks Performed:

- Instrument Detection & Tracking: Drew precise bounding boxes around surgical instruments (e.g., scalpels, forceps, clamps) in every frame, assigning consistent tracking IDs to follow their movement and usage throughout procedures.
- Anatomy Segmentation: Used polylines to meticulously trace critical anatomical structures and tissue types, defining zones of operation and potential areas of risk.
- Action Recognition & Classification: Tagged video segments with specific surgical actions (e.g., "grasping," "cutting," "cauterizing," "
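To make the annotation structure concrete, the sketch below shows one way the per-frame instrument labels described above could be represented, with a small consistency check that a tracking ID always refers to the same instrument class across frames. The class names, field names, and sample values are illustrative assumptions, not the project's actual schema or tooling.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class BoundingBox:
    # Pixel coordinates: top-left corner plus width and height
    x: int
    y: int
    w: int
    h: int

@dataclass
class InstrumentAnnotation:
    frame: int       # video frame index
    track_id: int    # stays constant across frames for the same physical instrument
    label: str       # instrument class, e.g. "forceps", "scalpel"
    box: BoundingBox

def tracks_are_consistent(annotations):
    """Return True if every track_id maps to a single instrument label."""
    seen = {}
    for a in annotations:
        if seen.setdefault(a.track_id, a.label) != a.label:
            return False
    return True

# Hypothetical labels for two consecutive frames of one procedure
annotations = [
    InstrumentAnnotation(0, 1, "forceps", BoundingBox(120, 80, 60, 40)),
    InstrumentAnnotation(1, 1, "forceps", BoundingBox(124, 82, 60, 40)),
    InstrumentAnnotation(1, 2, "scalpel", BoundingBox(300, 150, 45, 30)),
]

print(tracks_are_consistent(annotations))                      # True
print(json.dumps([asdict(a) for a in annotations], indent=2))  # export for training
```

A flat, serializable record like this is also what makes the second objective feasible: the same structured annotations can be rendered into text and paired with report excerpts when building the LLM fine-tuning dataset.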