Sairaj Narayankar

Gen AI (LLM Fine-Tuning, Prompt Engineering)

Mumbai, India
$20.00/hr · Entry Level · Other · Google Cloud Vertex AI · Telus

Key Skills

Software

Other
Google Cloud Vertex AI
Telus

Top Subject Matter

Business Domain Expertise
Natural Language Processing (NLP)
Generative AI

Top Data Types

Text
Image
Audio

Top Task Types

Prompt Response Writing SFT
RLHF
Bounding Box

Freelancer Overview

Generative AI Intern (LLM Fine-Tuning) with 1+ years of professional experience across complex workflows, research, and quality-focused execution. Core strengths include Hugging Face Transformers and Hugging Face Diffusers. Education includes a Bachelor of Science, Mulund College of Commerce (2025) and a Higher Secondary Certificate, Trimurti Junior College (2022). AI-training focus covers data types such as Text and Image, and labeling workflows including Fine-Tuning and Prompt + Response Writing (SFT).

Entry Level · English

Labeling Experience

AI & Data Intern (Prompt Engineering & SFT)

Other · Text · Prompt Response Writing (SFT)
During my AI & Data Internship at Deloitte, I contributed to the development of generative AI tools by writing and refining prompts and responses for supervised fine-tuning (SFT) of LLMs. I supported the automation of reporting and client-communication flows through curated datasets and response evaluation. My responsibilities included annotating, evaluating, and refining prompt outputs for financial-analysis domains.
• Participated in the prompt engineering and SFT cycle using proprietary datasets.
• Evaluated AI-generated responses for accuracy, context, and compliance.
• Collaborated with consultancy teams to iteratively improve training data.
• Used proprietary and open-source frameworks for text-labeling activities.


2025 - 2025
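The prompt/response SFT workflow described in this entry can be sketched in plain Python. The record shape and field names below are illustrative assumptions, not the proprietary schema used on the engagement:

```python
import json

def to_sft_record(prompt, response, domain="financial-analysis"):
    """Wrap one reviewed prompt/response pair in a JSONL-ready dict.

    The "messages" shape and metadata fields are illustrative, not an
    actual client schema.
    """
    if not prompt.strip() or not response.strip():
        raise ValueError("empty prompt or response")
    return {
        "messages": [
            {"role": "user", "content": prompt.strip()},
            {"role": "assistant", "content": response.strip()},
        ],
        "metadata": {"domain": domain},
    }

# One reviewed pair becomes one JSONL line for the fine-tuning job.
pairs = [("Summarise the Q3 revenue drivers.",
          "Revenue growth was driven mainly by subscription renewals...")]
jsonl = "\n".join(json.dumps(to_sft_record(p, r)) for p, r in pairs)
print(jsonl)
```

Rejected pairs raise early, so only reviewed, non-empty examples reach the training file.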

Generative AI Intern (LLM Fine-Tuning)

Text · Fine-Tuning
As a Generative AI Virtual Intern at BCG X, I fine-tuned transformer-based models for tasks such as text summarization and sentiment analysis. I experimented with prompt engineering to optimize large language model (LLM) performance for business-focused scenarios. My work fed directly into LLM training cycles and improved model output quality.
• Performed supervised model fine-tuning using curated text data.
• Designed and tested text prompts to optimize LLMs for summarization and sentiment classification.
• Helped develop internal dashboards to visualize AI outcomes and labeling effectiveness.
• Used the Hugging Face Transformers framework hands-on for AI model development.


2025 - 2025
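The prompt-testing side of this work can be illustrated with a minimal evaluation harness. `stub_model` below is a stand-in for a real fine-tuned LLM call, and exact-match scoring is a simplifying assumption:

```python
def evaluate_prompt(model_fn, prompt_template, labeled_examples):
    """Score one prompt variant by exact-match accuracy on labeled
    sentiment examples; model_fn stands in for an LLM call."""
    hits = 0
    for text, gold in labeled_examples:
        pred = model_fn(prompt_template.format(text=text))
        hits += (pred.strip().lower() == gold)
    return hits / len(labeled_examples)

# Hypothetical stub for illustration; a real run would query the
# fine-tuned model instead.
def stub_model(prompt):
    return "positive" if "great" in prompt else "negative"

examples = [("The product is great", "positive"),
            ("Awful service", "negative")]
acc = evaluate_prompt(stub_model, "Sentiment of: {text}", examples)
print(acc)  # 1.0
```

Running the same labeled set against several prompt templates makes it easy to pick the variant with the highest accuracy before committing it to a training cycle.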

Text-to-Image Generation Project (Prompt Annotation & Evaluation)

Image · Prompt Response Writing (SFT)
In a text-to-image generation project using Stable Diffusion, I prepared and annotated domain-specific textual prompts and matched them with generated images for quality and relevance. I recorded variations in prompt design and visual outcomes, refining the dataset for subsequent image-generation cycles. My efforts supported both qualitative and quantitative improvements in text-to-image models.
• Labeled and matched text prompts to image outputs for model training.
• Iteratively experimented with prompts to evaluate image diversity and accuracy.
• Used Python and Hugging Face Diffusers in a Google Colab environment.
• Created a portfolio of validated, business-relevant AI-generated visuals.


2024 - 2024
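The annotation bookkeeping described here can be sketched without the generation step itself (which needs Stable Diffusion and a GPU). The field names and the 1-5 relevance scale are illustrative assumptions:

```python
def log_annotation(rows, prompt, seed, image_file, relevance, notes=""):
    """Record one prompt/image annotation; fields and the 1-5
    relevance scale are illustrative, not a fixed schema."""
    rows.append({"prompt": prompt, "seed": seed, "image": image_file,
                 "relevance": relevance, "notes": notes})

def keep_for_next_cycle(rows, threshold=4):
    """Prompts whose generations scored at or above the threshold
    feed the next image-generation round."""
    return sorted({r["prompt"] for r in rows if r["relevance"] >= threshold})

rows = []
log_annotation(rows, "minimalist product photo, studio lighting",
               42, "img_001.png", 5)
log_annotation(rows, "minimalist product photo",
               42, "img_002.png", 3, "cluttered background")
print(keep_for_next_cycle(rows))  # ['minimalist product photo, studio lighting']
```

Keeping the seed alongside each prompt makes low-scoring generations reproducible when a prompt is revised and re-run.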

LLM-Powered Chatbot Project (Data Labeling & Fine-tuning)

Text · Fine-Tuning
I designed and implemented a customer support chatbot using fine-tuned GPT models to handle FAQs and user queries. The process included collecting sample customer interactions, labeling relevant data, and evaluating model responses for correctness and context. This improved both the model's training data and its real-world accuracy.
• Labeled and curated FAQ data and customer conversations for chatbot training.
• Tested and refined chatbot prompts and outputs to improve accuracy.
• Applied Hugging Face Transformers and Flask for deployment and iteration.
• Focused on multi-domain response consistency and user experience.


2024 - 2024
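The FAQ-matching core of such a chatbot can be sketched with the standard library alone. The questions, answers, and fuzzy-match cutoff below are illustrative; a real deployment would route unmatched queries through a fine-tuned model behind a Flask endpoint:

```python
import difflib

# Hypothetical labeled FAQ set for illustration.
FAQ = {
    "How do I reset my password?":
        "Use the 'Forgot password' link on the sign-in page.",
    "What are your support hours?":
        "Support is available 9am-6pm IST, Monday to Friday.",
}

def answer(query, cutoff=0.5):
    """Return the canned answer for the closest labeled FAQ question,
    or a fallback when nothing matches well enough."""
    match = difflib.get_close_matches(
        query.lower(), [q.lower() for q in FAQ], n=1, cutoff=cutoff)
    if not match:
        return "Sorry, let me connect you with a human agent."
    for q, a in FAQ.items():
        if q.lower() == match[0]:
            return a

print(answer("how can I reset my password"))
```

The fuzzy match tolerates paraphrased queries, while the cutoff keeps unrelated questions from being answered with the wrong canned response.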

Education

Mulund College of Commerce

Bachelor of Science, Information Technology

2022 - 2025
Trimurti Junior College

Higher Secondary Certificate, Science

2021 - 2022

Work History

Deloitte

AI & Data Intern

Mumbai
2025 - 2025
BCG X

Generative AI Intern

Mumbai
2025 - 2025