Independent AI Builder & Experimenter (AI Model Evaluator/Annotator)
Designed, built, and operated a local large language model (LLM) environment for daily testing and evaluation of model outputs. Assessed responses for accuracy, coherence, and utility, identifying hallucinations and refining prompts accordingly. Applied recent AI coursework through hands-on local LLM experimentation.
• Independently compared outputs from multiple generative text models
• Evaluated and ranked responses to improve model performance
• Tested prompt variations and logged results systematically
• Provided structured feedback to guide model improvement