LLM evaluation in French
In this project, I evaluated the output of a Large Language Model (LLM) on French text-generation tasks. The work involved annotating and reviewing generated content for accuracy, fluency, and coherence, assessing how naturally the model responded in French, and suggesting improvements. I reviewed a wide range of texts and delivered high-quality feedback used for model fine-tuning, working in custom annotation software and following strict guidelines to keep ratings consistent across the LLM-generated content.
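The annotation workflow described above can be sketched as a simple rubric-scoring structure. The three dimensions (accuracy, fluency, coherence) come from the project description; the record fields, the 1-5 scale, and the aggregation function are illustrative assumptions, not the project's actual tooling.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Annotation:
    """One reviewer judgment of a single French model response.

    Field names and the 1-5 scale are hypothetical; only the three
    rubric dimensions are taken from the project description.
    """
    response_id: str
    accuracy: int    # factual correctness of the output, 1-5
    fluency: int     # grammatical and stylistic naturalness in French, 1-5
    coherence: int   # logical flow across sentences, 1-5
    comment: str = ""

def mean_scores(annotations):
    """Average each rubric dimension over a batch of annotations."""
    return {
        "accuracy": mean(a.accuracy for a in annotations),
        "fluency": mean(a.fluency for a in annotations),
        "coherence": mean(a.coherence for a in annotations),
    }

batch = [
    Annotation("r1", accuracy=4, fluency=5, coherence=4,
               comment="Natural phrasing, one factual slip."),
    Annotation("r2", accuracy=3, fluency=4, coherence=3),
]
print(mean_scores(batch))  # {'accuracy': 3.5, 'fluency': 4.5, 'coherence': 3.5}
```

Aggregating per-dimension means like this gives fine-tuning teams a quick signal about which quality axis (e.g. fluency vs. factual accuracy) needs the most attention.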