Lucrecia Carrara

Multilingual AI Labeling Specialist | Annotation, Evaluation, Prompting

Buenos Aires, Argentina
$20.00/hr · Intermediate

Key Skills

Software

Appen
Clickworker
CrowdSource
Data Annotation Tech
Labelbox
Mindrift
OneForma
Other
Internal/Proprietary Tooling

Top Subject Matter

No subject matter listed

Top Data Types

Audio
Image
Text

Top Task Types

Audio Recording
Evaluation Rating
Fine Tuning
Text Generation
Text Summarization

Freelancer Overview

Multilingual AI training data specialist with hands-on experience in speech recognition, transcription, search engine evaluation, and prompt engineering. I have contributed to high-impact projects by refining large-scale datasets for AI systems, including training and supporting annotators, optimizing prompts for generative models, and evaluating search quality to enhance algorithm performance. My background in education and linguistics allows me to bring a user-focused, detail-oriented perspective to data annotation. I’ve worked across diverse tasks—ranging from updating transcription guidelines and conducting team training sessions to designing and testing prompts for AI communication. Fluent in English, Spanish, and French, I bring intercultural agility and linguistic sensitivity to every project, ensuring accuracy, clarity, and adaptability in multilingual environments.

Intermediate: French, English, Italian, Spanish, Portuguese

Labeling Experience

Data Annotation Tech

Data Annotation Project 2

Data Annotation Tech · Text · Fine Tuning · Evaluation Rating
In this project, I evaluated dialogues between annotators and a language model that generated two possible responses to each prompt. Rather than relying on predefined criteria, I was responsible for developing my own evaluation framework to assess the quality of each response. I then applied these criteria to justify my choice of the better response and documented my reasoning in detail. This process played a key role in refining the model’s output and improving its interpretive accuracy.

2024
Data Annotation Tech

Data Annotation Project 1

Data Annotation Tech · Text · Question Answering · Text Summarization
I worked on a large-scale AI project focused on training a language model through adversarial prompt creation. My role involved designing complex prompts across categories such as text extraction, summarization, personification, and both open and closed question answering. These prompts were intentionally crafted to expose weaknesses in the model’s reasoning, interpretation, and output quality. By identifying failure points, I contributed to refining the model's responses and improving its overall performance and reliability.

2024
Labelbox

Alignerr Project 1

Labelbox · Audio · Audio Recording
I participated in a speech recognition project for Alignerr, using their Labelbox tool. The project focused on creating audio recordings that incorporated a specified amount of background noise. While the speech itself was not scripted, suggested topics guided the content, and each recording had distinct requirements, such as an indoor or outdoor setting and a low, medium, or high noise level.

2024

Outlier Project 1

Internal/Proprietary Tooling · Text · RLHF
In this role, I was in charge of researching, testing, and adjusting prompts for artificial intelligence models. This involved creating prompts across various categories such as summarization, extraction, personification, and both open and closed question answering. My task was to evaluate two different AI responses to these prompts and select the better one, contributing to results optimization. This work combined linguistic understanding, critical thinking, and constant exploration.

2024

TransPerfect Project 1

Other · Text · Evaluation Rating
In this role, I contributed to a high-impact project focused on enhancing web search algorithms. My responsibilities included analyzing and classifying data according to strict guidelines, conducting web searches to evaluate search result quality, and collaborating with team members to refine evaluation criteria. This work played a key part in improving the AI's accuracy and responsiveness to user queries, specifically within Apple's labeling tool for Siri.

2022 - 2023

Education

Leiden University

Comparative Linguistics

Not specified
2022 - 2023
Universidad Católica Argentina

Teaching Spanish As A Second Language

Not specified
2018 - 2019

Work History

italki

Spanish Online Instructor

N/A
2019 - 2021
Expanish

Spanish Instructor

Buenos Aires
2018 - 2019