AI Technical Output Evaluator (Freelance)
I evaluated and rewrote technical AI-generated outputs for accuracy and instruction adherence, focusing on logical coherence and STEM-specific terminology. I reviewed over 200 documents in English and Korean, analyzing, critiquing, and suggesting improvements to language models' outputs in task flows that closely mirror those of AI training and RLHF annotation platforms.
• Reviewed and corrected technical, instruction-following outputs for accuracy.
• Conducted bilingual (Korean-English) assessment of technical translations for meaning preservation.
• Provided feedback on logical structure, terminology, and compliance with client requirements.
• Mimicked RLHF and AI data-annotation workflows common in remote AI training environments.