Zenith Zinzuvadia

Security Analyst - Information Technology

Gandhinagar, India
$20.00/hr · Expert · Label Studio

Key Skills

Software

Label Studio

Top Subject Matter

No subject matter listed

Top Data Types

Text

Top Label Types

Classification
Text Generation
Text Summarization

Freelancer Overview

I completed my B.Tech in Information and Communication Technology, where I developed strong technical skills in Python, C++, and Bash, along with hands-on experience in data analysis and system monitoring. My projects, such as implementing a SOC log monitoring and threat detection system using Splunk SIEM, involved ingesting, analyzing, and visualizing large sets of authentication log data — skills directly relevant to data labeling and annotation for AI training. I am proficient with developer tools such as VS Code and the Linux shell, and I have worked with security tools such as Nmap and Wireshark. My experience documenting scan results and observations has strengthened my attention to detail, which I bring to tasks involving data quality and annotation accuracy. I am a collaborative team player with strong problem-solving abilities, eager to contribute to the creation and curation of high-quality AI training datasets.

Languages

English (Expert), Hindi

Labeling Experience

Label Studio

NLP Data Annotation and LLM Response Evaluation Project

Label Studio · Computer Code Programming · Geocoding · Computer Programming Coding
Worked on an academic and self-driven data annotation and AI evaluation project focused on improving the quality and reliability of language model outputs. Labeled and reviewed text datasets for tasks such as intent classification, sentiment analysis, and response correctness using structured guidelines. Evaluated LLM-generated responses based on accuracy, relevance, clarity, and consistency with expected outputs. Used structured formats like JSON to store annotations and evaluation results. Performed quality checks by reviewing edge cases and correcting inconsistent labels to improve dataset reliability. Collaborated with peers to refine labeling guidelines and ensure consistency across annotations. Gained hands-on experience in prompt testing, structured output validation, and basic tool-assisted evaluation workflows using Python.

2024 - 2025
Label Studio

AI Data Annotation & Model Evaluation – Practice Project

Label Studio · Text · Classification · Text Generation
Worked on a self-directed AI data annotation and evaluation project focused on improving the quality of text-based machine learning models. Performed tasks such as labeling user queries, classifying responses, identifying incorrect or unsafe outputs, and reviewing model-generated answers for clarity, relevance, and policy compliance. Created structured datasets for training and validation, using spreadsheet-based workflows and Python scripts for basic data cleaning and formatting. Applied consistency checks and quality assurance methods to ensure high-accuracy annotations. This project helped develop strong attention to detail, prompt analysis skills, and an understanding of how human feedback improves large language model performance.

2024 - 2024

Education

Dhirubhai Ambani Institute of Information and Communication Technology

Bachelor of Technology, Information and Communication Technology

2022 - 2025

Work History

Concours Event

Sports Event Coordinator

Gandhinagar
2025 - 2025
Concours Event

Associate Member

Gandhinagar
2024 - 2024