Japanese Linguistic Data Annotation and AI Output Evaluation
I worked on multiple text-based AI training projects involving Japanese linguistic annotation, output evaluation, and prompt-response task creation. My role included evaluating machine-generated text for accuracy, fluency, tone, and cultural appropriateness, as well as classifying sentence structures and correcting grammatical and contextual errors. I also contributed to RLHF and SFT workflows by ranking model responses, generating improved outputs, and writing prompts aligned with educational and practical use cases. These projects ranged from several thousand to tens of thousands of lines of text. I followed strict annotation guidelines, ensured consistent terminology, and performed self-QC to maintain high precision. My background as a professional translator and Japanese language educator enabled me to deliver reliable linguistic judgments and high-quality datasets for model improvement.