Ishitha Sridhar

Data Analyst - AI Output Evaluation & Text Classification

Berlin, Germany
$25.00/hr · Intermediate · OpenCV AI Kit (OAK) · Internal Proprietary Tooling

Key Skills

Software

OpenCV AI Kit (OAK)
Internal/Proprietary Tooling

Top Subject Matter

No subject matter listed

Top Data Types

Document
Text

Top Label Types

Classification
Evaluation Rating

Freelancer Overview

I am a data science graduate with over two years of experience supporting data analysis, validation, and reporting in enterprise environments. My background includes building and automating data validation pipelines using Python and SQL, developing tools to ensure high data quality, and investigating anomalies to support robust decision-making. I have worked extensively with data cleaning, transformation, and ETL processes, and gained hands-on experience designing frameworks to evaluate large language models for NLP tasks during my master's thesis. My projects span structured data processing, anomaly detection, and dashboard development using tools like Power BI and Tableau. I am passionate about ensuring the accuracy and integrity of AI training data, and I thrive in cross-functional teams where I can bridge technical and business needs to deliver reliable, well-annotated datasets.

Intermediate · English · German

Labeling Experience

Evaluating Instruction Following Capabilities of Large Language Models on Structured & Unstructured Tasks

Internal Proprietary Tooling · Text · Classification · Evaluation Rating
Evaluated large language model (LLM) outputs against benchmark text datasets to assess instruction adherence, response accuracy, and overall quality. Applied structured rating criteria and quantitative metrics such as accuracy and F1-score to compare model performance across different prompt types. Built Python-based validation workflows for preprocessing, scoring, and error analysis to ensure consistent, repeatable, and high-quality evaluation processes.

2025 - 2025

Education

University of Europe for Applied Sciences

Master of Science, Data Science

2023 - 2025
Dayananda Sagar University

Bachelor of Technology, Computer Science and Engineering

2017 - 2021

Work History

Accenture Solutions

Software Engineer

Bengaluru
2021 - 2023
Hindustan Aeronautics Limited

Intern

Bengaluru
2021 - 2021