AI Training & Data Annotation (Independent Practice and LLM Technical Reasoning Evaluation)
I evaluated AI-generated technical and STEM solutions for correctness, clarity, and logical consistency across independent and academic projects. My work included providing Reinforcement Learning from Human Feedback (RLHF)-style feedback, conducting qualitative model assessments, and curating and annotating technical datasets to improve AI performance on engineering problem-solving tasks.
• Designed and structured prompts to assess Large Language Model (LLM) reasoning accuracy.
• Delivered detailed, rubric-based evaluations of AI-generated explanations.
• Annotated and organized engineering content for use as training data.
• Performed qualitative analysis to strengthen AI-driven technical solutions.