AI Evaluation Data Structuring - Hackathon Submission Assessment
Converted subjective enterprise evaluation criteria into structured, domain-specific scoring parameters for an AI-assisted assessment system. For each domain (health, blockchain, fintech, sustainability), defined explicit rubrics that decomposed vague judging criteria (e.g. "innovation", "feasibility") into granular, scoreable attributes. These structured parameters were used to ground and guide the RAG-powered evaluation pipeline, enabling it to assess 18,500+ hackathon submissions consistently across multiple rounds. The work involved close collaboration with domain experts and enterprise clients (ISRO, Accenture) to ensure the structured criteria accurately reflected real-world evaluation standards.
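A minimal sketch of what such a structured rubric might look like in code. All attribute names, weights, and the scoring scale below are illustrative assumptions, not the actual rubrics used in the project; the idea is only to show how a subjective criterion like "innovation" can be decomposed into weighted, scoreable attributes.

```python
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    description: str    # guidance given to the evaluation pipeline
    weight: float       # relative importance within the criterion
    max_score: int = 5  # each attribute is scored on a fixed 0..max_score scale

# Hypothetical decomposition of "innovation" for the fintech domain
FINTECH_INNOVATION = [
    Attribute("novelty", "How different is the approach from existing fintech products?", 0.4),
    Attribute("technical_depth", "Does the solution go beyond off-the-shelf integrations?", 0.35),
    Attribute("regulatory_awareness", "Does the design account for compliance constraints?", 0.25),
]

def criterion_score(attributes, raw_scores):
    """Collapse per-attribute scores into one weighted 0-100 criterion score."""
    total = sum(a.weight * (raw_scores[a.name] / a.max_score) for a in attributes)
    return round(100 * total / sum(a.weight for a in attributes), 1)

print(criterion_score(FINTECH_INNOVATION,
                      {"novelty": 4, "technical_depth": 3, "regulatory_awareness": 5}))
# → 78.0
```

Structuring rubrics this way makes each attribute independently scoreable by the pipeline and keeps the aggregation deterministic, which is what allows large submission volumes to be judged consistently across rounds.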